Camera resectioning

Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating
the camera that produced a given photograph or video; it determines which incoming light ray is associated
with each pixel on the resulting image. Basically, the process determines the pose of the pinhole camera.

Usually, the camera parameters are represented in a 3 × 4 projection matrix called the camera matrix. The
extrinsic parameters define the camera pose (position and orientation) while the intrinsic parameters
specify the camera image format (focal length, pixel size, and image origin).

This process is often called geometric camera calibration or simply camera calibration, although that
term may also refer to photometric camera calibration or be restricted for the estimation of the intrinsic
parameters only. Exterior orientation and interior orientation refer to the determination of only the
extrinsic and intrinsic parameters, respectively.

Classic camera calibration requires special objects in the scene, which camera auto-calibration does not. Camera resectioning is often used in stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.

Formulation
The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented as a series of transformations; e.g., a matrix of camera intrinsic parameters, a 3 × 3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.

Homogeneous coordinates
In this context, we use $[u\ v\ 1]^T$ to represent a 2D point position in pixel coordinates, and $[x_w\ y_w\ z_w\ 1]^T$ to represent a 3D point position in world coordinates. In both cases, they are represented in homogeneous coordinates (i.e. they have an additional last component, which is initially, by convention, a 1), which is the most common notation in robotics and rigid body transforms.

Projection
Referring to the pinhole camera model, a camera matrix is used to denote a projective mapping from world coordinates to pixel coordinates:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid T] \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where $z_c$ is the depth of the point along the camera's optical axis, $u$ and $v$ by convention are the x and y coordinates of the pixel in the camera, $K$ is the intrinsic matrix as described below, and $R$ and $T$ form the extrinsic matrix as described below. $(x_w, y_w, z_w)$ are the coordinates of the source of the light ray which hits the camera sensor in world coordinates, relative to the origin of the world. By dividing the matrix product by $z_c$, the theoretical value for the pixel coordinates can be found.
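The projection above can be sketched in a few lines of pure Python. The intrinsic and extrinsic values below are illustrative assumptions, not a calibrated camera.

```python
def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Assumed intrinsic matrix K: 800-pixel focal lengths, principal
# point at (320, 240), zero skew.
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]

# Assumed extrinsics [R | T]: identity rotation, the world origin
# sits 5 units in front of the camera.
RT = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 5.0]]

def project(X_world):
    """Project a homogeneous world point [x, y, z, 1] to pixel (u, v)."""
    cam = mat_vec(RT, X_world)  # 3D camera coordinates
    uvw = mat_vec(K, cam)       # homogeneous pixel coordinates
    z_c = uvw[2]
    return uvw[0] / z_c, uvw[1] / z_c

print(project([0.0, 0.0, 0.0, 1.0]))  # world origin -> principal point
print(project([1.0, 0.0, 0.0, 1.0]))
```

Dividing by the third homogeneous component $z_c$ is what collapses the 3D light ray onto a single pixel.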

Intrinsic parameters

The intrinsic matrix $K$ contains 5 intrinsic parameters of the specific camera model:

$$K = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

These parameters encompass focal length, image sensor format, and camera principal point. The parameters $\alpha_x = f \cdot m_x$ and $\alpha_y = f \cdot m_y$ represent focal length in terms of pixels, where $m_x$ and $m_y$ are the inverses of the width and height of a pixel on the projection plane and $f$ is the focal length in terms of distance.[1] $\gamma$ represents the skew coefficient between the x and the y axis, and is often 0. $u_0$ and $v_0$ represent the principal point, which would ideally be in the center of the image.
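Assembling $K$ from its five parameters can be sketched as follows; the focal length and pixel-density values are illustrative assumptions.

```python
def intrinsic_matrix(f, m_x, m_y, gamma, u0, v0):
    """Build K from focal length f (in distance units), pixel densities
    m_x, m_y (pixels per distance unit), skew gamma, and the principal
    point (u0, v0)."""
    return [[f * m_x, gamma, u0],
            [0.0, f * m_y, v0],
            [0.0, 0.0, 1.0]]

# Assumed values: 8-unit focal length, 100 pixels per unit, no skew.
K = intrinsic_matrix(f=8.0, m_x=100.0, m_y=100.0, gamma=0.0,
                     u0=320.0, v0=240.0)
print(K[0][0], K[1][1])  # alpha_x and alpha_y, in pixels
```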

Nonlinear intrinsic parameters such as lens distortion are also important, although they cannot be included in the linear camera model described by the intrinsic parameter matrix. Many modern camera calibration algorithms estimate these parameters as well, using non-linear optimisation techniques in which the camera and distortion parameters are refined jointly, a procedure generally known as bundle adjustment.
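As a hedged illustration of what such a nonlinear term looks like, the widely used radial distortion model can be applied to normalized image coordinates as below; the coefficients k1 and k2 are arbitrary assumptions, not values for any real lens.

```python
def distort(x, y, k1=-0.2, k2=0.05):
    """Apply two-term radial distortion to normalized coordinates (x, y).
    k1, k2 are illustrative coefficients."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# The image center is unaffected; with k1 < 0 off-center points
# shift inward (barrel distortion).
print(distort(0.0, 0.0))
print(distort(0.5, 0.0))
```

Because `factor` is nonlinear in the point's radius, no 3 × 3 matrix can reproduce this mapping, which is why distortion is estimated separately from $K$.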

Extrinsic parameters

$R, T$ are the extrinsic parameters which denote the coordinate system transformations from 3D world coordinates to 3D camera coordinates. Equivalently, the extrinsic parameters define the position of the camera center and the camera's heading in world coordinates. $T$ is the position of the origin of the world coordinate system expressed in coordinates of the camera-centered coordinate system. $T$ is often mistakenly considered the position of the camera. The position, $C$, of the camera expressed in world coordinates is $C = -R^{-1} T = -R^T T$ (since $R$ is a rotation matrix).
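The camera position in world coordinates, $C = -R^{-1} T = -R^T T$, can be checked with a few lines of pure Python; the rotation and translation values below are illustrative assumptions.

```python
def camera_center(R, T):
    """World-space camera position C = -R^T T, valid when R is a
    rotation matrix (so its inverse equals its transpose)."""
    Rt = list(zip(*R))  # transpose of R
    return [-sum(r * t for r, t in zip(row, T)) for row in Rt]

# Assumed pose: 90-degree rotation about the z axis plus a translation.
R = [[0.0, -1.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [1.0, 2.0, 3.0]
print(camera_center(R, T))
```

Note that the result differs from $-T$: forgetting to rotate is exactly the mistake the paragraph above warns about.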

Camera calibration is often used as an early stage in computer vision.

When a camera is used, light from the environment is focused on an image plane and captured. This process
reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored
on a 2D image). Each pixel on the image plane therefore corresponds to a shaft of light from the original
scene.

Algorithms
There are many different approaches to calculate the intrinsic and extrinsic parameters for a specific camera
setup. The most common ones are:

1. Direct linear transformation (DLT) method
2. Zhang's method
3. Tsai's method
4. Selby's method (for X-ray cameras)

Zhang's method
Zhang's method[2][3] is a camera calibration method that uses traditional calibration techniques (known
calibration points) and self-calibration techniques (correspondence between the calibration points when they
are in different positions). To perform a full calibration by the Zhang method, at least three different images
of the calibration target/gauge are required, either by moving the gauge or the camera itself. If some of the
intrinsic parameters are given as data (orthogonality of the image or optical center coordinates), the number
of images required can be reduced to two.

In a first step, an approximation of the estimated projection matrix between the calibration target and the image plane is determined using the DLT method.[4] Subsequently, self-calibration techniques are applied to obtain the image of the absolute conic matrix.[5] The main contribution of Zhang's method is how to, given $n$ poses of the calibration target, extract a constrained intrinsic matrix $K$, along with $n$ instances of the $R$ and $T$ calibration parameters.

Derivation
Assume we have a homography $\mathbf{H}$ that maps points $x_\pi$ on a "probe plane" $\pi$ to points $x$ on the image.

The circular points $I_{1,2} = \begin{bmatrix} 1 & \pm j & 0 \end{bmatrix}^T$ lie on both our probe plane $\pi$ and on the absolute conic $\Omega_\infty$. Lying on $\Omega_\infty$ of course means they are also projected onto the image of the absolute conic (IAC) $\omega$, thus $x_1^T \omega x_1 = 0$ and $x_2^T \omega x_2 = 0$. The circular points project as

$$x_1 = \mathbf{H} I_1 = \begin{bmatrix} h_1 & h_2 & h_3 \end{bmatrix} \begin{bmatrix} 1 \\ j \\ 0 \end{bmatrix} = h_1 + j h_2, \qquad x_2 = \mathbf{H} I_2 = h_1 - j h_2.$$

We can actually ignore $x_2$ while substituting our new expression for $x_1$ as follows:

$$x_1^T \omega x_1 = (h_1 + j h_2)^T \omega (h_1 + j h_2) = (h_1^T \omega h_1 - h_2^T \omega h_2) + 2 j \, h_1^T \omega h_2 = 0$$

Setting the real and imaginary parts to zero yields two linear constraints on $\omega$ per pose: $h_1^T \omega h_1 = h_2^T \omega h_2$ and $h_1^T \omega h_2 = 0$.
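The two constraints each homography contributes can be verified numerically in a simple special case. Under the assumption $K = I$ (so the IAC $\omega$ is also the identity), the first two columns of $\mathbf{H} = K \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}$ are orthonormal columns of a rotation, and both constraints hold trivially; the rotation angle below is an arbitrary assumption.

```python
import math

def dot(a, b):
    """Plain dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

# Arbitrary rotation about the z axis; r1 and r2 are its first
# two columns, which become h1 and h2 when K = I and omega = I.
theta = 0.7
r1 = [math.cos(theta), math.sin(theta), 0.0]
r2 = [-math.sin(theta), math.cos(theta), 0.0]

print(dot(r1, r2))               # h1^T omega h2 -> 0
print(dot(r1, r1), dot(r2, r2))  # equal norms: h1^T omega h1 = h2^T omega h2
```

With an unknown $K$, each of the $n$ poses supplies these two linear equations in the entries of $\omega = (K K^T)^{-1}$, which is why three poses suffice to solve for the five intrinsic parameters.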

Tsai's algorithm
Tsai's algorithm, a significant method in camera calibration, involves several detailed steps for accurately
determining a camera's orientation and position in 3D space. The procedure, while technical, can be
generally broken down into three main stages:

Initial Calibration
The process begins with the initial calibration stage, where a series of images is captured by the camera. These images, often featuring a known calibration pattern like a checkerboard, are used to estimate intrinsic camera parameters such as focal length and optical center.[6]

Pose Estimation
Following initial calibration, the algorithm undertakes pose estimation. This involves calculating the
camera's position and orientation relative to a known object in the scene. The process typically requires
identifying specific points in the calibration pattern and solving for the camera's rotation and translation
vectors.

Refinement of Parameters
The final phase is the refinement of parameters. In this stage, the algorithm refines the lens distortion
coefficients, addressing radial and tangential distortions. Further optimization of internal and external
camera parameters is performed to enhance the calibration accuracy.

This structured approach has positioned Tsai's Algorithm as a pivotal technique in both academic research
and practical applications within robotics and industrial metrology.

Selby's method (for X-ray cameras)


Selby's camera calibration method[7] addresses the auto-calibration of X-ray camera systems. X-ray camera systems, consisting of the X-ray generating tube and a solid-state detector, can be modelled as pinhole camera systems, comprising 9 intrinsic and extrinsic camera parameters. Intensity-based registration of an arbitrary X-ray image against a reference model (e.g. a tomographic dataset) can then be used to determine the relative camera parameters without the need for a special calibration body or any ground-truth data.

See also
3D pose estimation
Augmented reality
Augmented virtuality
Eight-point algorithm
Mixed reality
Pinhole camera model
Perspective-n-Point
Rational polynomial coefficient

References
1. Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision.
Cambridge University Press. pp. 155–157. ISBN 0-521-54051-8.
2. Z. Zhang, "A flexible new technique for camera calibration'" (http://research.microsoft.com/en
-us/um/people/zhang/Papers/TR98-71.pdf) Archived (https://web.archive.org/web/20151203
110350/http://research.microsoft.com/en-us/um/people/zhang/Papers/TR98-71.pdf) 2015-12-
03 at the Wayback Machine, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol.22, No.11, pages 1330–1334, 2000
3. P. Sturm and S. Maybank, "On plane-based camera calibration: a general algorithm,
singularities, applications'" (http://www.vision.caltech.edu/bouguetj/calib_doc/papers/sturm9
9.pdf) Archived (https://web.archive.org/web/20160304050100/http://www.vision.caltech.edu/
bouguetj/calib_doc/papers/sturm99.pdf) 2016-03-04 at the Wayback Machine, In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
pages 432–437, Fort Collins, CO, USA, June 1999
4. Abdel-Aziz, Y.I., Karara, H.M. "Direct linear transformation from comparator coordinates into
object space coordinates in close-range photogrammetry (https://www.ingentaconnect.com/c
ontentone/asprs/pers/2015/00000081/00000002/art00001?crawler=true) Archived (https://we
b.archive.org/web/20190802154011/https://www.ingentaconnect.com/contentone/asprs/pers/
2015/00000081/00000002/art00001%3Fcrawler%3Dtrue) 2019-08-02 at the Wayback
Machine", Proceedings of the Symposium on Close-Range Photogrammetry (pp. 1-18), Falls
Church, VA: American Society of Photogrammetry, (1971)
5. Luong, Q.-T.; Faugeras, O.D. (1997-03-01). "Self-Calibration of a Moving Camera from Point
Correspondences and Fundamental Matrices" (https://doi.org/10.1023/A:1007982716991).
International Journal of Computer Vision. 22 (3): 261–289. doi:10.1023/A:1007982716991 (h
ttps://doi.org/10.1023%2FA%3A1007982716991). ISSN 1573-1405 (https://www.worldcat.or
g/issn/1573-1405).
6. Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE Journal of Robotics
and Automation, Vol. RA-3, No.4, August 1987
7. Boris Peter Selby et al., "Patient positioning with X-ray detector self-calibration for image
guided therapy" (https://doi.org/10.1007%2Fs13246-011-0090-4) Archived (https://web.archi
ve.org/web/20231110063713/https://link.springer.com/article/10.1007/s13246-011-0090-4)
2023-11-10 at the Wayback Machine, Australasian Physical & Engineering Science in
Medicine, Vol.34, No.3, pages 391–400, 2011

External links
Zhang's Camera Calibration Method with Software (http://research.microsoft.com/en-us/um/p
eople/zhang/Calib/)
Camera Calibration (https://campar.in.tum.de/twiki/pub/Far/AugmentedRealityIISoSe2004/L3
-CamCalib.pdf) - Augmented reality lecture at TU Muenchen, Germany

Retrieved from "https://en.wikipedia.org/w/index.php?title=Camera_resectioning&oldid=1235555272"
