
Volume 8, Issue 10, October – 2023    International Journal of Innovative Science and Research Technology
ISSN No:-2456-2165

Investigating the Limitations of Some of the Most Popular Lines and Points Based Camera Calibration Techniques in Photogrammetry and Computer Vision

Guy Blanchard Ikokou
Tshwane University of Technology

Abstract:- The need for camera calibration has been a fundamental requirement since the foundation of photogrammetry. As the number of photogrammetry applications grows and the technology advances, camera calibration has become more complex to undertake. Moreover, when measurements derived from imagery are used for scene modelling purposes, the consequences of small imaging errors on the accuracy of the derived models can be significant. Thus, the development of cheaper lenses, such as those of consumer grade cameras, and their integration into the photogrammetric process require camera calibration approaches to accurately model the projection from the 3D scene onto the 2D image plane and to offer robust solutions that recover the various camera parameters with high accuracy. Several line based and point based camera calibration methods have been proposed in the literature and reported to produce promising results, but the majority of such approaches were found to be either numerically unstable or subject to serious limitations when it comes to removing distortions at the edges of the imagery. The fact that these techniques rely on the traditional Brown model, which assumes symmetric radial distortion, makes them unsuitable for consumer grade digital cameras, which are known for their unstable internal geometry. This study finds that new analytical camera calibration techniques better adapted to the internal geometry of consumer grade cameras are needed.

I. INTRODUCTION

The need for camera calibration has been a fundamental requirement since the foundation of photogrammetry. With the increasing number of close range photogrammetry applications and advances in imaging technologies, camera calibration methods are becoming more complex to perform. Indeed, with the introduction of consumer grade cameras with off-the-shelf lenses into photogrammetric workflows, the need for high quality metrics from photographs has grown over the past decades, making camera calibration a very crucial task.

The aim of camera calibration is to determine the camera internal and external parameters that enable the mapping of the 3D object space onto the 2D image space. Without accurate modelling of these internal and external parameters it is impossible to achieve a good geometric description of the projection from the 3D world scene onto the 2D image plane. The accuracy of the modelled transformation depends on a number of factors, including the mathematical components of the distortion model, the number of parameters considered by the model and the robustness of the mathematical solution used to solve for the camera parameters. This study investigates some of the most widely employed line based and point based camera calibration approaches in order to identify the shortcomings of the current camera calibration methods when dealing with consumer grade digital cameras. The study also examines the suitability of Brown's radial distortion model for modelling pincushion and barrel profiles that are not always symmetric with respect to the distortion centre. Finally, the study analyses the effect of the number of additional parameters in lens distortion models on the accuracy of the calibration procedure.

II. CAMERA CALIBRATION APPROACHES

A. Ahmed and Farag Calibration Method
In Ahmed and Farag (2005), a camera calibration approach based on the slope of a distorted line was proposed. The camera model is based on the perspective projection of a straight line, in which every point on the line satisfies the linear equation

ax + by + c = 0   [1]

where a, b and c are constants for a specific line l, and s = -a/b is the slope of the line. Considering the origin of the image coordinate system O(x_0, y_0), the proposed model relates a point P_u(x_u, y_u) on the undistorted line l to its distorted correspondent P_d(x_d, y_d) on the distorted line by using the traditional radial and decentring distortion models originally proposed by Brown (1971) as follows:
f(x_u, y_u) = a[ x_d + (x_d - x_0)(k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6) + (p_1(r_d^2 + 2(x_d - x_0)^2) + 2 p_2 (x_d - x_0)(y_d - y_0))(1 + p_3 r_d^2) ]
            + b[ y_d + (y_d - y_0)(k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6) + (p_2(r_d^2 + 2(y_d - y_0)^2) + 2 p_1 (x_d - x_0)(y_d - y_0))(1 + p_3 r_d^2) ] + c   [2]
By calculating the elemental change δf of f from equation [2] one obtains:

δf = a( ∂x_u/∂x_d δx_d + ∂x_u/∂y_d δy_d ) + b( ∂y_u/∂x_d δx_d + ∂y_u/∂y_d δy_d )   [3]

This equation represents the tangent to the distorted curve, and its slope can be estimated by equation [4]:

s(x_d, y_d) = ( ∂y_u/∂x_d + (∂y_u/∂y_d)(dy_d/dx_d) ) / ( ∂x_u/∂x_d + (∂x_u/∂y_d)(dy_d/dx_d) )   [4]

Given a chain of edge points on the distorted line (x_d^i, y_d^i), i = 1, 2, ..., N, the error E between point positions on the distorted line can be estimated through the squared difference between slopes along the line, expressed as:

E = Σ_i [ s(x_d^i, y_d^i) - s(x_d^(i+1), y_d^(i+1)) ]^2   [5]

In the case where several lines are considered on the distorted image, the error in [5] is estimated as the sum of the individual line errors:

E = Σ_l Σ_i [ s(x_d^i, y_d^i) - s(x_d^(i+1), y_d^(i+1)) ]^2   [6]

Under the correct values of the distortion parameters, the error estimated in [6] should be zero. This error can be minimized with a non-linear optimization algorithm starting from initial guess values of the distortion parameters. Although the proposed projection model was reported to be robust, the distortion model used in the complete formulation assumes symmetric radial distortion, produces strong correlations between the distortion parameters, and only models distortions at the image corners (Jacobsen, 2003).
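The slope-consistency criterion in [5] and [6] lends itself to a simple numerical implementation. The sketch below is an illustration only, not the authors' code: it assumes a simplified radial-only distortion with a single coefficient k1 and a distortion centre at the origin, compares segment angles rather than raw slopes for numerical robustness, and uses scipy.optimize.least_squares to minimize the slope differences along synthetic edge chains.

```python
import numpy as np
from scipy.optimize import least_squares

def undistort(points, k1):
    """Simplified Brown correction: x_u = x_d * (1 + k1 * r_d^2), centre at the origin."""
    r2 = np.sum(points**2, axis=1)
    return points * (1 + k1 * r2)[:, None]

def distort(points, k1):
    """Numerically invert the model above to synthesize distorted observations."""
    r_u = np.hypot(points[:, 0], points[:, 1])
    r_d = r_u.copy()
    for _ in range(100):                      # fixed-point iteration, assumes small k1
        r_d = r_u / (1 + k1 * r_d**2)
    scale = np.where(r_u > 0, r_d / r_u, 1.0)
    return points * scale[:, None]

def slope_residuals(params, chains):
    """Consecutive slope differences along each corrected edge chain (cf. [5], [6]).

    Segment angles are used instead of raw slopes to avoid division by zero;
    the residuals still vanish when the corrected chains are straight.
    """
    k1 = params[0]
    res = []
    for chain in chains:
        u = undistort(chain, k1)
        seg = np.diff(u, axis=0)
        res.append(np.diff(np.arctan2(seg[:, 1], seg[:, 0])))
    return np.concatenate(res)

# Hypothetical edge chains: straight lines synthetically distorted with a known k1.
true_k1 = -2.0e-7
chains = []
for intercept in (-300.0, 0.0, 250.0):
    x = np.linspace(-500.0, 500.0, 40)
    chains.append(distort(np.column_stack([x, 0.3 * x + intercept]), true_k1))

fit = least_squares(slope_residuals, x0=[0.0], args=(chains,))
print("estimated k1:", fit.x[0], "true k1:", true_k1)
```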
A variation of the technique was proposed earlier, in 1997, by Prescott and McLean. This two stage technique starts by establishing an undistorted line model joining two end points of a line detected with an edge detector. The detected edge point coordinates are then used to build the equation of the undistorted line, given by:

x cos θ + y sin θ = ρ   [7]

where ρ is the distance from the origin to the line and θ is the angle the line makes with the horizontal axis. Once the equation of the undistorted line is established, a search is performed to identify distorted points in the neighbourhood of the undistorted line. This process groups pixels into Line Support Regions (LSRs) based on their grey level and spatial connectedness. The set of identified pixels is then used as input to a line fitting process, which leads to the estimation of the radial distortion parameters using a least squares technique. The obtained distortion parameters are then employed in a warp function which removes distortions from the imagery by mapping distorted points onto their ideal locations. The warp parameters estimated by the technique are limited to the first two coefficients of radial distortion, k_1 and k_2, and the coordinates of the distortion centre (x_c, y_c). A limitation of this approach is that it ignores other types of distortion such as decentring or film deformation. Moreover, the radial distortion model employed by the technique is not adequate for consumer grade cameras as it relies on a consistent radial distance, which assumes symmetric radial distortion. A further limitation is the influence of image noise, which can be mistaken for distorted points during the search in the neighbourhood of the undistorted line model.
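For illustration, the (ρ, θ) parameters of the undistorted line model in [7] can be fitted to a set of detected edge points with a simple total least squares step. The sketch below is a minimal, hypothetical example; it is not the authors' implementation and leaves out the LSR grouping and the distortion estimation stage.

```python
import numpy as np

def fit_line_rho_theta(points):
    """Fit x*cos(theta) + y*sin(theta) = rho to 2D edge points (total least squares).

    The line normal is the direction of least variance of the point cloud;
    rho is the projection of the centroid onto that normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                       # unit normal of the fitted line
    theta = np.arctan2(normal[1], normal[0])
    rho = float(centroid @ normal)
    if rho < 0:                           # keep rho non-negative by flipping the normal
        rho, theta = -rho, theta + np.pi
    return rho, theta

# Hypothetical edge chain joining two end points (with a little noise).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
pts = np.array([100.0, 50.0]) + t[:, None] * np.array([200.0, 120.0]) + rng.normal(0, 0.3, (30, 2))
rho, theta = fit_line_rho_theta(pts)
print(f"rho = {rho:.2f}, theta = {np.degrees(theta):.2f} deg")
```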
Wang et al. (2009) proposed another line based technique which employs a rational model derived from the traditional Brown radial distortion model and described in Fitzgibbon (2001). The model relates the distorted and undistorted points of an image through the following functions:

x_u = x_d / (1 + λ r_d^2)   and   y_u = y_d / (1 + λ r_d^2)   [8]

where λ is the first coefficient of radial distortion. Substituting [8] into the equation of the undistorted line y_u = a x_u + b produces:

y_d / (1 + λ r_d^2) = a x_d / (1 + λ r_d^2) + b   [9]

After reformulation, the relation in [9] gives:

y_d = a x_d + b + b λ (x_d^2 + y_d^2)   [10]

which, after development, produces:

x_d^2 + y_d^2 + (a / (bλ)) x_d - (1 / (bλ)) y_d + 1/λ = 0   [11]
Equation [11] is the equation of a circle, showing that a distorted straight line is imaged as a circular arc (Wang et al., 2009). By extracting three straight lines from the image, determining the parameters that satisfy their circle equations, and substituting those parameters back into [10], the technique can recover the radial distortion parameter. Although the technique was reported to be suitable for severe distortions, it still relies on a consistent radial distance, which describes symmetric radial distortion. Moreover, other distortions such as decentring distortion are not considered by the technique.
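A small numerical illustration of the division model in [8] is given below; it is a sketch rather than the authors' algorithm. It recovers λ by requiring that the undistorted version of a detected line chain be as straight as possible, which is the geometric idea behind [9]–[11]; the synthetic data and the straightness measure are assumptions of this example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def undistort_division(points, lam):
    """Division model of [8]: x_u = x_d / (1 + lam * r_d^2), and likewise for y."""
    r2 = np.sum(points**2, axis=1)
    return points / (1 + lam * r2)[:, None]

def straightness(points):
    """Smallest singular value of the centred point set: zero when perfectly collinear."""
    centered = points - points.mean(axis=0)
    return np.linalg.svd(centered, compute_uv=False)[-1]

# Synthesize a distorted image of a straight line (assumed data, lam_true assumed).
lam_true = 1.0e-7
x_u = np.linspace(-400.0, 400.0, 60)
y_u = 0.5 * x_u + 120.0
r_u = np.hypot(x_u, y_u)
r_d = r_u.copy()
for _ in range(100):                  # invert r_u = r_d / (1 + lam*r_d^2) by fixed point
    r_d = r_u * (1 + lam_true * r_d**2)
scale = r_d / r_u
distorted = np.column_stack([x_u * scale, y_u * scale])

# Recover lam by making the undistorted chain as straight as possible.
res = minimize_scalar(lambda lam: straightness(undistort_division(distorted, lam)),
                      bounds=(0.0, 1.0e-5), method="bounded")
print("estimated lambda:", res.x, "true lambda:", lam_true)
```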
B. Tsai Calibration Method
In 1987, Tsai proposed a calibration technique based on a single image. This two stage calibration technique estimates the intrinsic camera parameters using a perspective projection, before refining them in a second stage with a non-linear optimisation technique. The first stage of the camera model proposed by Tsai requires the coordinates of 3D points in order to estimate the ideal coordinates of the projected 2D points. With the ideal 2D coordinates and the observed coordinates known, the second phase of the model estimates the radial distortion parameters (Horn, 2000). The Tsai model starts by estimating the relationship between the camera coordinates and the ideal (undistorted) image coordinates through the following expression:

( X_u, Y_u ) = f ( x_c / z_c, y_c / z_c )   [12]

where f, the principal distance, is measured from the centre of projection to the image plane, and X_u, Y_u are the undistorted image coordinates. The quantities x_c, y_c, z_c are the coordinates measured in the camera coordinate system. The Tsai model only considers radial distortion, and the relationship between the distorted (X_d, Y_d) and undistorted image coordinates is expressed using the traditional Brown radial distortion model as follows:

X_u = X_d + X_d (k_1 r^2 + k_2 r^4 + ...)
Y_u = Y_d + Y_d (k_1 r^2 + k_2 r^4 + ...)   [13]

The proposed technique requires the position and attitude of the calibration targets to be recovered with respect to the camera coordinate system. For instance, if (x_s, y_s) are the coordinates of a point measured in the scene coordinate system and (x_c, y_c) the coordinates measured in the camera coordinate system, the transformation between the two coordinate systems is given by:

( x_c, y_c ) = R ( x_s, y_s ) + t   [14]

where t is the translation vector and R the rotation matrix. Combining [13] and [14] with the perspective projection, this can be reformulated as:

X_d + X_d (k_1 r^2 + k_2 r^4 + ...) = f ( r_11 x_s + r_12 y_s + r_13 z_s + t_x ) / ( r_31 x_s + r_32 y_s + r_33 z_s + t_z )
Y_d + Y_d (k_1 r^2 + k_2 r^4 + ...) = f ( r_21 x_s + r_22 y_s + r_23 z_s + t_y ) / ( r_31 x_s + r_32 y_s + r_33 z_s + t_z )   [15]

The conversion from distorted coordinates to image pixel coordinates is done with the following model:

u = s_x x_d / d_x + c_x
v = y_d / d_y + c_y   [16]

where s_x is the scale factor introduced by the image capture hardware, and d_x, d_y are the horizontal and vertical distances between the centres of adjacent cells in the CCD array. The distorted coordinates from [15] can be substituted into [16] to estimate the coordinates in pixels. Although the model proposed by Tsai was reported to be viable for 3D machine vision measurements and produced acceptable calibration results (Horn, 2000), a number of limitations were identified. Firstly, the technique is laborious to implement, as it requires a very large number of points to perform well. Moreover, the proposed model assumes the image centre to be the centre of projection, which does not hold true for consumer grade cameras (Li et al., 2014). Another limitation is the pinhole perspective projection model, which does not characterise the physical and optical behaviour of consumer grade cameras with unstable internal geometry. Furthermore, the Tsai technique only works well for symmetric radial distortions, as it assumes a constant radial distance. The model also ignores the decentring component of distortion.
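The forward chain described by [12]–[16] (rigid transformation, perspective projection, radial distortion, pixel conversion) can be summarised in a few lines of code. The sketch below is an illustrative composition of those equations under assumed parameter values, not Tsai's two stage estimation procedure itself.

```python
import numpy as np

def project_tsai(points_3d, R, t, f, k1, k2, sx, dx, dy, cx, cy):
    """Map 3D scene points to pixel coordinates following [12]-[16].

    R, t   : rotation matrix and translation of the scene w.r.t. the camera ([14])
    f      : principal distance ([12])
    k1, k2 : radial distortion coefficients ([13])
    sx, dx, dy, cx, cy : sensor scale, cell spacings and pixel offsets ([16])
    """
    pc = points_3d @ R.T + t                     # scene -> camera coordinates
    x_u = f * pc[:, 0] / pc[:, 2]                # ideal (undistorted) image coordinates
    y_u = f * pc[:, 1] / pc[:, 2]
    # Invert X_u = X_d * (1 + k1 r^2 + k2 r^4) by fixed-point iteration on r_d.
    r_u = np.hypot(x_u, y_u)
    r_d = r_u.copy()
    for _ in range(50):
        r_d = r_u / (1 + k1 * r_d**2 + k2 * r_d**4)
    s = np.where(r_u > 0, r_d / r_u, 1.0)
    x_d, y_d = x_u * s, y_u * s
    u = sx * x_d / dx + cx                       # distorted coordinates -> pixels ([16])
    v = y_d / dy + cy
    return np.column_stack([u, v])

# Hypothetical parameter values for illustration only.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
pts = np.array([[0.5, 0.2, 0.0], [-0.3, 0.4, 0.5], [0.1, -0.6, 1.0]])
print(project_tsai(pts, R, t, f=0.016, k1=-0.2, k2=0.05, sx=1.0,
                   dx=5e-6, dy=5e-6, cx=640.0, cy=512.0))
```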

C. Weng Calibration Method
The camera model proposed by Tsai (1987) only models radial distortion. The accuracy of this method was reported to be sufficient for most photogrammetry applications (Horn, 2000). However, in cases where the camera lens needs to be modelled very accurately, a simple radial approximation is not sufficient. Following this, Weng (1992) proposed a two step calibration approach in which the transformation from the 3D space coordinates to the 2D image coordinates is first modelled by a perspective projection composed of a rotation and a translation:

( x_c, y_c, z_c ) = R ( x, y, z ) + T   [17]

where R is a 3x3 rotation matrix defining the camera orientation and T is a translation vector defining the camera position. Considering the image plane coordinate system (O', u, v) and the camera coordinate system (o_c, x_c, y_c, z_c), with O' the principal point and the u, v axes respectively parallel to x_c and y_c, the transformation between the image coordinates and the camera coordinates is given by:

( u, v ) = f ( x_c / z_c, y_c / z_c )   [18]

As geometric distortions affect the positions of image points in the image plane, the relations in [18] do not hold exactly and should be replaced by the relations in [19]:

u = u' + δ_u(u, v),   v = v' + δ_v(u, v)   [19]

where u, v are the undistorted image coordinates and u', v' their respective distorted coordinates. The distortion expressions δ_u(u, v) and δ_v(u, v) comprise radial distortion, decentring distortion and thin prism distortion components along the u and v axes. The radial distortion components are given by:

δ_ur = k_1 u (u^2 + v^2),   δ_vr = k_1 v (u^2 + v^2)   [20]

In addition, the decentring distortion components are given by:

δ_ud = p_1 (3u^2 + v^2) + 2 p_2 u v,   δ_vd = 2 p_1 u v + p_2 (u^2 + 3v^2)   [21]

Finally, the thin prism distortions along the u and v directions, which arise from imperfections in lens design and manufacturing as well as camera assembly, are given by:

δ_up = s_1 (u^2 + v^2),   δ_vp = s_2 (u^2 + v^2)   [22]

where s_1 and s_2 are the coefficients of thin prism distortion. The total distortion along the u and v axes is the sum of the radial, decentring and thin prism components; combining it with the perspective projection gives the complete model:

f ( r_11 x + r_12 y + r_13 z + t_x ) / ( r_31 x + r_32 y + r_33 z + t_z ) = u' + δ_u(u, v)
f ( r_21 x + r_22 y + r_23 z + t_y ) / ( r_31 x + r_32 y + r_33 z + t_z ) = v' + δ_v(u, v)

with
δ_u(u, v) = k_1 u (u^2 + v^2) + p_1 (3u^2 + v^2) + 2 p_2 u v + s_1 (u^2 + v^2)
δ_v(u, v) = k_1 v (u^2 + v^2) + p_2 (u^2 + 3v^2) + 2 p_1 u v + s_2 (u^2 + v^2)   [23]

Although the model is reported to perform better than Tsai's model, it relies on the perspective projection, which does not characterise the physical behaviour of consumer grade cameras. Moreover, the technique uses the traditional Brown model, which assumes a consistent radial distance.
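The total distortion in [20]–[23] is easy to evaluate once the five coefficients are known. The short sketch below simply implements that sum for a grid of image points; the coefficient values are invented for illustration and are not taken from Weng (1992).

```python
import numpy as np

def weng_total_distortion(u, v, k1, p1, p2, s1, s2):
    """Total distortion (radial + decentring + thin prism) of [20]-[22], summed as in [23]."""
    r2 = u**2 + v**2
    du = k1 * u * r2 + p1 * (3 * u**2 + v**2) + 2 * p2 * u * v + s1 * r2
    dv = k1 * v * r2 + p2 * (u**2 + 3 * v**2) + 2 * p1 * u * v + s2 * r2
    return du, dv

# Evaluate the distortion field on a small grid of normalised image coordinates
# (coefficient values are hypothetical).
u, v = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
du, dv = weng_total_distortion(u, v, k1=-0.12, p1=1e-3, p2=-5e-4, s1=2e-4, s2=-1e-4)
print(np.round(np.hypot(du, dv), 4))   # magnitude of the correction at each grid point
```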

D. The Tommaselli Calibration Method
Tommaselli et al. (2012) proposed a line based camera model based on the equivalence between the vector normal to the projection plane in the image space and the vector normal to the projection plane in the object space. In order to build that equivalence, the authors considered the perspective centre PC, two object points P1 and P2 in the 3D object space and their images p1 and p2 in the image space. In Figure 1 below, the authors show the vector n normal to the projection plane in the image space, the vector N normal to the projection plane in the object space, and the two vectors p1p2 and PCp1.

Fig 1 Projection Plane and Normal Vectors

The image space normal vector is expressed by equation [24] below:

n = ( x_p2 - x_p1, y_p2 - y_p1, 0 ) x ( x_p1, y_p1, -f )
  = ( -f (y_p2 - y_p1),  f (x_p2 - x_p1),  x_p2 y_p1 - x_p1 y_p2 )   [24]
The projection plane in the object space contains the vectors P1P2 and P1PC, and the vector N normal to the object space projection plane is expressed by equation [25] below:

N = ( X_P2 - X_P1, Y_P2 - Y_P1, Z_P2 - Z_P1 ) x ( X_P1 - X_PC, Y_P1 - Y_PC, Z_P1 - Z_PC )
  = ( (Y_P2 - Y_P1)(Z_P1 - Z_PC) - (Y_P1 - Y_PC)(Z_P2 - Z_P1),
      (X_P1 - X_PC)(Z_P2 - Z_P1) - (X_P2 - X_P1)(Z_P1 - Z_PC),
      (X_P2 - X_P1)(Y_P1 - Y_PC) - (X_P1 - X_PC)(Y_P2 - Y_P1) )   [25]

The equivalence between the two vectors is obtained by equating equations [24] and [25], which gives equation [26]:

( -f (y_p2 - y_p1),  f (x_p2 - x_p1),  x_p2 y_p1 - x_p1 y_p2 )
  corresponds to
( (Y_P2 - Y_P1)(Z_P1 - Z_PC) - (Y_P1 - Y_PC)(Z_P2 - Z_P1),
  (X_P1 - X_PC)(Z_P2 - Z_P1) - (X_P2 - X_P1)(Z_P1 - Z_PC),
  (X_P2 - X_P1)(Y_P1 - Y_PC) - (X_P1 - X_PC)(Y_P2 - Y_P1) )   [26]

However, because the two normal vectors point in opposite directions and the 3D object plane and the 2D image plane are linked by a scale relationship, equation [27] can be written as:

λ n = R N   [27]


where λ is a scale factor and R is the rotation matrix defined by the sequence of rotations R_x, R_y, R_z. This relationship is expressed as:

λ ( -f (y_p2 - y_p1),  f (x_p2 - x_p1),  x_p2 y_p1 - x_p1 y_p2 )
  = R ( (Y_P2 - Y_P1)(Z_P1 - Z_PC) - (Y_P1 - Y_PC)(Z_P2 - Z_P1),
        (X_P1 - X_PC)(Z_P2 - Z_P1) - (X_P2 - X_P1)(Z_P1 - Z_PC),
        (X_P2 - X_P1)(Y_P1 - Y_PC) - (X_P1 - X_PC)(Y_P2 - Y_P1) )   [28]

By introducing the following new variables:

( N_1, N_2, N_3 ) = ( (Y_P2 - Y_P1)(Z_P1 - Z_PC) - (Y_P1 - Y_PC)(Z_P2 - Z_P1),
                      (X_P1 - X_PC)(Z_P2 - Z_P1) - (X_P2 - X_P1)(Z_P1 - Z_PC),
                      (X_P2 - X_P1)(Y_P1 - Y_PC) - (X_P1 - X_PC)(Y_P2 - Y_P1) )   [29]
and expanding [28], one obtains:

λ ( -f (y_p2 - y_p1),  f (x_p2 - x_p1),  x_p2 y_p1 - x_p1 y_p2 )
  = ( r_11 N_1 + r_12 N_2 + r_13 N_3,
      r_21 N_1 + r_22 N_2 + r_23 N_3,
      r_31 N_1 + r_32 N_2 + r_33 N_3 )   [30]

In order to eliminate the scale factor λ, the first two equations in [30] are divided by the last one, giving:

-f (y_p2 - y_p1) / ( x_p2 y_p1 - x_p1 y_p2 ) = ( r_11 N_1 + r_12 N_2 + r_13 N_3 ) / ( r_31 N_1 + r_32 N_2 + r_33 N_3 )

 f (x_p2 - x_p1) / ( x_p2 y_p1 - x_p1 y_p2 ) = ( r_21 N_1 + r_22 N_2 + r_23 N_3 ) / ( r_31 N_1 + r_32 N_2 + r_33 N_3 )   [31]

Expanding [31] one obtains:

f (y_p1 - y_p2)( r_31 N_1 + r_32 N_2 + r_33 N_3 ) + ( x_p1 y_p2 - x_p2 y_p1 )( r_11 N_1 + r_12 N_2 + r_13 N_3 ) = 0
f (x_p2 - x_p1)( r_31 N_1 + r_32 N_2 + r_33 N_3 ) + ( x_p1 y_p2 - x_p2 y_p1 )( r_21 N_1 + r_22 N_2 + r_23 N_3 ) = 0   [32]

To introduce radial and decentring distortions into the model, the authors considered the Conrady-Brown model of radial and decentring distortions, expressed by:

x_u - x_0 = (x_d - x_0) + (x_d - x_0)(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1( r^2 + 2(x_d - x_0)^2 ) + 2 p_2 (x_d - x_0)(y_d - y_0)
y_u - y_0 = (y_d - y_0) + (y_d - y_0)(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_2( r^2 + 2(y_d - y_0)^2 ) + 2 p_1 (x_d - x_0)(y_d - y_0)   [33]


where x_u, y_u are the undistorted coordinates and x_d, y_d the observed distorted coordinates in the image reference system, k_1, k_2, k_3 are the coefficients of radial distortion, p_1, p_2 are the coefficients of decentring distortion, and x_0, y_0 are the coordinates of the principal point. Although the technique was reported to produce good estimates of the distortion parameters, it does not guarantee the accuracy of the 3D point coordinates involved in the model, as any inaccuracy in the point measurements can propagate into the estimated distortion parameters. Moreover, the Conrady-Brown model used is suitable only for symmetric lens distortions, as it assumes a consistent radial distance.
E. Additional Parameters Model
Additional parameters are the terms of a polynomial expression incorporated into the collinearity equations in order to model various systematic errors, including lens distortions. Five types of additional parameter models have been proposed in the literature: the Bauer simple model, the Jacobsen model, the Ebner orthogonal model, the fourteen parameter Brown physical model and the sixteen parameter Brown model.

The Bauer model has three additional parameters, two describing the extent of affine deformation on the image and one describing the symmetric radial distortion (Anguilar et al., 2010; Blazquez and Colomina, 2010). The distortions applied to the x and y coordinates are given by the following distortion model:

Δx = a_1 x (r^2 - r_0^2) + a_2 x
Δy = a_1 y (r^2 - r_0^2) + a_2 y + a_3 x   [34]

By combining Bauer's distortion model with the collinearity equations one obtains the complete model:

x_u - x_0 = -f ( r_11(X_w - X_0) + r_12(Y_w - Y_0) + r_13(Z_w - Z_0) ) / ( r_31(X_w - X_0) + r_32(Y_w - Y_0) + r_33(Z_w - Z_0) ) + a_1 x (r^2 - r_0^2) + a_2 x
y_u - y_0 = -f ( r_21(X_w - X_0) + r_22(Y_w - Y_0) + r_23(Z_w - Z_0) ) / ( r_31(X_w - X_0) + r_32(Y_w - Y_0) + r_33(Z_w - Z_0) ) + a_1 y (r^2 - r_0^2) + a_2 y + a_3 x   [35]

with x_u, y_u the undistorted image coordinates, x_0, y_0 the coordinates of the principal point and X_0, Y_0, Z_0 the coordinates of the perspective centre in the 3D object coordinate system.
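As an illustration of how additional parameters enter the collinearity equations, the sketch below evaluates the right hand side of [35] for a set of object points using the Bauer correction of [34]. The rotation matrix, coefficients and point coordinates are assumed values chosen only to make the example runnable, and the sign conventions follow the reconstruction of [34]–[35] given above.

```python
import numpy as np

def collinearity_bauer(X, R, X0, f, x0, y0, a1, a2, a3, r0):
    """Project object points with the collinearity equations plus Bauer parameters ([35]).

    X  : (n, 3) object point coordinates
    R  : 3x3 rotation matrix, X0 : perspective centre, f : principal distance
    a1, a2, a3 : Bauer additional parameters, r0 : reference radius
    """
    d = (X - X0) @ R.T                                # reduced, rotated coordinates
    x = -f * d[:, 0] / d[:, 2]                        # collinearity part
    y = -f * d[:, 1] / d[:, 2]
    r2 = x**2 + y**2
    dx = a1 * x * (r2 - r0**2) + a2 * x               # Bauer corrections, eq. [34]
    dy = a1 * y * (r2 - r0**2) + a2 * y + a3 * x
    return np.column_stack([x0 + x + dx, y0 + y + dy])

# Hypothetical configuration for illustration.
R = np.eye(3)
X0 = np.array([0.0, 0.0, 500.0])
pts = np.array([[50.0, 20.0, 0.0], [-30.0, 80.0, 10.0], [120.0, -60.0, 5.0]])
print(collinearity_bauer(pts, R, X0, f=120.0, x0=0.0, y0=0.0,
                         a1=1e-8, a2=2e-5, a3=-1e-5, r0=30.0))
```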

The Jacobsen simple model is similar to Bauer's model but has one additional parameter and compensates for the first and second order distortions associated with affine deformation and lens distortion (Passini and Jacobsen, 2008). The affine deformation and lens distortion applied to the x and y coordinates are given by the following distortion model:

Δx = a_1 x (r^2 - r_0^2) + a_2 x + a_3 y
Δy = a_1 y (r^2 - r_0^2) + a_2 y + a_3 x + a_4 x^2   [36]

Similar to Bauer’s model, the Jacobsen calibration approach relies on the collinearity equation to form its complete distortion
model.

The Ebner orthogonal model is a twelve additional parameter model which compensates for various types of systematic errors such as scanner errors, affine deformation and film deformation (Leica Geosystems User Guide, 2015). These additional parameters are orthogonal to one another and to the exterior orientation parameters when the ground surface is flat. The model is given by the following:

Δx = a_1 x + a_2 y + a_3 (2x^2 - 4b^2/3) + a_4 x y + a_5 (y^2 - 2b^2/3) + a_7 x (y^2 - 2b^2/3) + a_9 y (x^2 - 2b^2/3) + a_11 (x^2 - 2b^2/3)(y^2 - 2b^2/3)
Δy = a_1 y + a_2 x + a_3 x y + a_4 (2y^2 - 4b^2/3) + a_6 (x^2 - 2b^2/3) + a_8 y (x^2 - 2b^2/3) + a_10 x (y^2 - 2b^2/3) + a_12 (x^2 - 2b^2/3)(y^2 - 2b^2/3)   [37]

The fourteen parameter Brown model compensates for linear and non-linear forms of film deformation and lens distortion. The total distortion applied to the x and y image coordinates is given by the following distortion model:

Δx = a_1 x + a_2 y + a_3 x y + a_4 y^2 + a_5 x^2 y + a_6 x y^2 + a_7 x^2 y^2 + (a_13 x^3 y) / f + a_14 x (x^2 + y^2)
Δy = a_8 x y + a_9 x^2 + a_10 x^2 y + a_11 x y^2 + a_12 x^2 y^2 + (a_13 x^2 y^3) / f + a_14 y (x^2 + y^2)   [38]

Like the previous models, the fourteen parameter Brown model relies on the collinearity equations to compose its complete distortion model. Moreover, another Brown model with sixteen additional parameters has been proposed to deal with affine, film and lens distortions:

Δx = a_1 x y + a_2 y^2 + a_3 x^2 y + a_4 x y^2 + a_5 x^2 y^2 + (a_11 x (x^2 + y^2)) / f + (a_12 x^3 y^2) / f + (a_13 x (x^4 + y^4)) / f + a_14 x (r - r_0) + a_15 x (r - r_0)^2 + a_16 x (r - r_0)^3
Δy = a_6 x y + a_7 x^2 + a_8 x^2 y + a_9 x y^2 + a_10 x^2 y^2 + (a_11 y (x^2 + y^2)) / f + (a_12 y^3 x^2) / f + (a_13 y (x^4 + y^4)) / f + a_14 y (r - r_0) + a_15 y (r - r_0)^2 + a_16 y (r - r_0)^3   [39]

The model above differs from the previous one not only in its number of degrees of freedom but also in the functions composing those degrees of freedom. Although the additional parameter models discussed above deal with a variety of distortions, they suffer from numerical instability due to their large number of parameters.
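The numerical instability mentioned above can be illustrated by examining how well conditioned the least squares estimation of the additional parameters is. The sketch below builds the design matrix of a simple high order polynomial correction model evaluated on an image grid and reports the condition number of the normal matrix; the specific monomials and grid are assumptions of this illustration, not a model taken from the paper.

```python
import numpy as np

def design_matrix(x, y):
    """Columns are the monomial terms of a many-parameter correction model."""
    return np.column_stack([
        x, y, x * y, y**2, x**2 * y, x * y**2, x**2 * y**2,
        x**3 * y, x * (x**2 + y**2), x**2, x * y**3, y * (x**2 + y**2),
    ])

# Regular grid of normalised image coordinates (assumed observation layout).
x, y = [g.ravel() for g in np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))]
A = design_matrix(x, y)
N = A.T @ A                        # normal matrix of the least squares problem
print("condition number of N: %.2e" % np.linalg.cond(N))
# A large condition number indicates strongly correlated parameters, i.e. the
# numerical instability that grows with the number of additional parameters.
```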
III. CONCLUSION

Of the line based calibration approaches studied in this investigation, the method proposed by Prescott and McLean (1997) would produce the poorest camera parameter accuracies because it is very sensitive to image noise. During the search process, especially with low resolution images, there is a high chance that image noise is mistaken for distorted pixel locations. This is due to the dependence of the edge detection algorithm on pixel grey levels. Moreover, the radial distortion function employed by the technique is not suitable for consumer grade cameras with unstable internal geometry. This poor performance extends to the other line based calibration approaches investigated in this study in the presence of non-symmetric radial lens distortions, since the proposed algorithms assume a constant radial displacement measure for all distorted points within the image. This does not always hold true for distortion profiles such as barrel and pincushion distortions, which produce a larger or smaller displacement towards the edges of the photograph than around the image centre. Line based camera calibration approaches also rely on optimization algorithms to refine the numerical estimates of the calibrated camera parameters. Although point based calibration methods offer better numerical stability, because they do not need any sophisticated algorithm to measure the coordinates of the projections of the 3D points onto the image plane, they require a large number of calibration points in order to achieve optimal calibration results.

Moreover, most point based calibration approaches require optimization of the results through an iterative process, and iterative camera calibration suffers the limitation of requiring very accurate initial estimates of the calibrated parameters, which are not always available. From the above, it is evident that there is a need to develop new point based camera calibration methods that offer analytical solutions, without any intervention of iterative processes, and that can achieve satisfactory calibration results with very few calibration points.

REFERENCES

[1]. Ahmed M and Farag A, 2005. "Non-metric calibration of camera lens distortion: differential methods and robust estimation", IEEE Transactions on Image Processing, Vol. 14, No. 8, pp. 1215-1230, August 2005.
[2]. Anguilar T and Manuel A, 2010. "Self-calibration methods for using historical aerial photographs with photogrammetric purposes", accessed online March 2014 at www.ofertacientifica.ual.es/descargas/investigacion/documentos/RNM368-94pdf.
[3]. Blazquez M and Colomina I, 2010. "On the role of self-calibration functions in integrated sensor orientation", In Proceedings of the XXXVIII EuroCOW conference, Castelldefels, Spain, 2010.
[4]. Brown DC, 1971. "Close-range camera calibration", Photogrammetric Engineering and Remote Sensing, 42:855-866, 1971.
[5]. Claus D and Fitzgibbon A, 2005. "A rational function lens distortion model for general cameras", In Proc. CVPR 2005, June 2005.
[6]. Chen YQ, Ma L and Moore KL, 2004. "Rational radial distortion models of camera lenses with analytical solution for distortion correction", International Journal of Information Acquisition (IJIA), 1(2), pp. 135-147, 2004.
[7]. Fitzgibbon AW, 2001. "Simultaneous linear estimation of multiple view geometry and lens distortion", In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2001.
[8]. Fryer JG, Clarke TA and Chen J, 1994. "Lens distortion for simple "C" mounted lenses", International Archives of Photogrammetry and Remote Sensing, 30(5):97-101.
[9]. Heikkilä J and Silven O, 1997. "A four-step camera calibration procedure with implicit image correction", In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 1106-1112.
[10]. Horn BKP, 2000. "Tsai's camera calibration method revisited", Technical report, MIT Artificial Intelligence Laboratory website.
[11]. Hugemann W, 2010. "Correcting lens distortion in digital photographs", EVU, 2010.
[12]. Leica Geosystems User Guide, Leica Photogrammetry Suite OrthoBASE and OrthoBASE Pro User Guide (accessed in June 2015).
[13]. Li W, Gee T, Friedrich H and Delmas P, 2014. "A practical comparison between Zhang's and Tsai's calibration approaches", Proc. 29th International Conference on Image and Vision Computing New Zealand, Hamilton, New Zealand, November 19-21, 2014.
[14]. Ma L, Chen Y and Moore KL, 2003. "A new analytical radial distortion model for camera calibration", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, New York.
[15]. Mitishita E, Cortes J, Centeno J, Machado AML and Martins M, 2010. "Study of stability analysis of the interior orientation parameters from the small-format digital camera using on-the-job calibration", In: Canadian Geomatics Conference, Calgary, Alberta, 2010.
[16]. Passini R and Jacobsen K, 2008. "Geometric analysis on digital photogrammetric cameras", ASPRS 2008 Annual Convention, Portland, Oregon, USA.
[17]. Prescott B and McLean GF, 1997. "Line-based correction of radial lens distortion", Graphical Models and Image Processing, 59(1):39-47.
[18]. Sanz-Ablanedo E, Rodríguez-Pérez JR, Armesto J and Taboada MÁ, 2010. "Geometric stability and lens decentring in compact digital cameras", Sensors, 10:1553-1572.
[19]. Tardif JP and Sturm P, 2006. "Self-calibration of a general radially symmetric distortion model", Proc. Ninth European Conf. Computer Vision, 2006.
[20]. Tsai RY, 1987. "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pp. 323-344.
[21]. Tommaselli AMG and Berveglieri A, 2014. "Automatic orientation of multi-scale terrestrial images for 3D reconstruction", Remote Sensing, 6(4):3020-3040, 2014.
[22]. Tommaselli AMG, Junior JM and Telles SSS, 2012. "Camera calibration using straight lines: assessment of a model based on plane equivalence", The Photogrammetric Journal of Finland, Vol. 23, No. 1, 2012.
[23]. Wackrow R and Chandler JH, 2008. "A convergent image configuration for DEM extraction that minimizes the systematic effects caused by an inaccurate lens model", Photogrammetric Record, 23(121):6-18.
[24]. Wang J, Shi F, Zhang J and Liu Y, 2009. "A new calibration model of camera lens distortion", Pattern Recognition, Vol. 41, Issue 2.
[25]. Weng J, Cohen P and Herniou M, 1992. "Camera calibration with distortion models and accuracy evaluation", IEEE Trans. on PAMI, Vol. 14(10), pp. 965-980.