
Calib Notes

The document discusses several topics related to camera calibration: 1. Zhang's calibration algorithm is sensitive to pixel noise and relies on accurate feature detection. Including lens distortion parameters improves accuracy. 2. Multiple camera views from different angles facilitate intrinsic parameter initialization and allow hand-eye calibration. At least 10 views are often recommended. 3. The best calibration is achieved when the calibration pattern aspect ratio is known precisely. Erroneous intrinsic or extrinsic parameters cannot compensate for errors in the aspect ratio.

Uploaded by phuocminhvo
Copyright © Attribution Non-Commercial (BY-NC)

 How to estimate the homography for each image: see Zhang, "Flexible camera calibration by viewing a plane from unknown orientations".
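The per-view homography step can be sketched with a plain direct linear transform (DLT). This is a minimal numpy sketch on synthetic data, not Zhang's full pipeline; the grid size and the test homography values are invented for illustration:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (dst ~ H @ src in homogeneous coords) via the DLT:
    stack two linear constraints per correspondence and take the
    null vector of the stacked system from the SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity

# synthetic planar grid (the calibration pattern, Z = 0) and its image
# under a known homography
src = np.array([[x, y] for y in range(4) for x in range(5)], float)
H_true = np.array([[800., 5., 320.],
                   [3., 790., 240.],
                   [1e-4, -2e-4, 1.]])
p = np.hstack([src, np.ones((len(src), 1))]) @ H_true.T
dst = p[:, :2] / p[:, 2:]

H_est = dlt_homography(src, dst)   # recovers H_true up to numerical noise
```

In practice the image points would come from a feature detector and be noisy, which is exactly why the notes stress high-accuracy feature detection.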


 Zhang's algorithm was found to be more sensitive to pixel-coordinate noise and hence more dependent on high-accuracy detection of the calibration features.
 In Zhang's algorithm, errors in lens distortion are compensated by the pinhole camera parameters. This compensation is only valid for the camera locations used during calibration: the lens distortion and camera parameters jointly fit the training data, but when the data change, the computed model no longer fits them.
 Including the two decentering distortion components generally guaranteed a high
calibration accuracy for a camera with unknown lens distortions.
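The model with the two decentering (tangential) components mentioned above can be written out as a Brown–Conrady-style distortion of normalized image coordinates. A small sketch; the coefficient values are arbitrary examples, not from any paper:

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2)
    distortion to normalized image coordinates (xn, yn)."""
    r2 = xn ** 2 + yn ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yd = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    return xd, yd

# example coefficients (made up): mild barrel distortion plus small
# decentering terms
xd, yd = distort(0.3, -0.2, -0.25, 0.1, 1e-3, -5e-4)
```

Omitting p1, p2 forces the radial terms (and, per the note above, even the pinhole parameters) to soak up the decentering error, which only works at the training poses.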

Institute of Robotics and Aerospace, Germany:


1. Geometrical camera calibration with diffractive optical elements
 The perspective distortion due to an inclination of 37° is more pronounced because different ranges appear and parts at different distances project at different sizes, which makes the relative pose estimation better conditioned.
 In extrinsic camera calibration, at least three views (specifically two movements
with nonparallel rotation axes) are required.
 A reduction of the AOV (without relocating the camera) implies that a smaller area of the scene will be seen, and therefore that there will be less evidence for accurate estimation, notwithstanding somewhat more precision in the measurements.
 Perpendicular images may be useful for reliable lens-distortion estimation; at the same time, image processing performs worse on strongly tilted images because of the perspective distortion, so tilted images cannot simply be relied on alone.
 There is extreme correlation between range and focal-length estimates, and no correlation between orientation and focal-length estimates. This is because the projective effects of camera rotation and focal-length adaptation are clearly differentiated.
 Two evils come on scene when minimizing RMS pixel error: erroneous camera parameters and wrongful reprojection residuals, and thus a wrongful assessment. For a perfect model parametrization, the RMS reprojection error should average the Gaussian image-noise level √(σx² + σy²) ≈ 0.56 px, but this only happens for wide AOVs; for narrow AOVs the residuals fairly surpass that level.
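That noise floor is easy to sanity-check by Monte Carlo. Assuming, as an example, i.i.d. Gaussian pixel noise with σx = σy = 0.4 px (an assumed value, chosen so that √(σx² + σy²) = √2 · 0.4 ≈ 0.566 px):

```python
import numpy as np

# simulate per-point reprojection residuals (dx, dy) of a perfectly
# parametrized model: pure zero-mean Gaussian image noise
rng = np.random.default_rng(0)
sigma = 0.4                                     # assumed px noise per axis
resid = rng.normal(0.0, sigma, size=(100_000, 2))

# RMS of the 2-D residual magnitude; should settle near sqrt(2) * sigma
rms = np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
```

Residuals well above this level therefore point at model deficiencies (or, per the narrow-AOV remark, at parameters absorbing the noise), not at the image noise itself.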
 The best substantiation of the significance of a calibration approach is necessarily the improvement in accuracy, if it exists. That is because eventual erroneous operation of the end application originates partly in erroneous calibration (along with noisy operation or model simplifications), but not in any way in the simplicity or the speed of the calibration process.
 Multiple vantage points are not always necessary for camera calibration, but they facilitate intrinsic initialization and allow for hand-eye calibration. In addition, the central limit theorem states that as the amount of independent and identically distributed (i.i.d.) data grows, the error distribution tends to Gaussianity, which eventually facilitates optimal estimation (especially of the hand-eye transformation [10]). In fact, at least 10 vantage points are repeatedly recommended in the literature.
 However, on occasions (e.g. with very low resolution images, high noise, or narrow
field of view) it may be meaningful to estimate the intrinsic parameters and the
hand-eye transformation at the same time in order to make full use of the extrinsic
positioning accuracy
 The authors actually recommend decreasing the number of unknowns in this first estimation. It is pointless to aim at accurately estimating very sensitive parameters, such as the skew or the principal point, prior to the estimation of the lens distortion.
 Since the released extrinsic parameters cannot compensate for erroneous
scales/ranges in all the absolute extrinsics at the same time, the simultaneous
estimation of the hand-eye transformation and the scaling factor for multiple images
tends to restore the error distribution to its reputed unbiased Gaussian nature, and the
calibration along with it to optimal (unbiased) operation
 the erroneous intrinsic and absolute extrinsic parameters cannot completely
compensate for erroneous knowledge of the aspect ratio of the calibration pattern
 In general systems, where the Gaussian image noise assumption largely holds,
optimal intrinsic camera calibration is only attained if the aspect ratio of the pattern
is perfectly known.
 As to which method to use: the projection residuals in the images are mostly numerous and small, whereas the camera vantage points are fewer (typically 10 to 15) and their positioning errors of arbitrary size (depending on the system). This suggests that the former error distributions have a much more Gaussian nature than the latter. Therefore, Method #1 should usually perform more accurately than Method #2 for the codetermination of the aspect ratio. As regards the extrinsic calibration, the user should also opt for one of these methods if he or she cannot determine the size of the plate with an accuracy of, say, one part in a thousand (i.e., 0.3 mm in 30 cm), which is actually very often the case.
2. Improved robust and accurate camera calibration method used for machine vision
application
 Zhang proposed a flexible camera calibration method using the perspective projection model, one of the best-known algorithms in this area. However, its results may fail when the rotation angle of the pattern plane is small, and the distortion parameters of the perspective projection model are difficult to use in practice. In addition, the best performance seems to be achieved with a maximum rotation angle of around 45 deg, but the precision of corner detection decreases as the rotation angle and the distance of the pattern plane from the camera increase, which is not taken into consideration.
 An ill-conditioned matrix may cause the results to fail [14] when the rotation angles of the model plane are very small. We propose to solve this ill-conditioned problem using the GA, with which more robust results can be obtained. The GA takes advantage of the Darwinian survival-of-the-fittest principle, and the ergodicity of its biological operators makes it potentially effective at performing a global search. In addition, the GA has been found to be powerful and robust at solving a variety of optimization problems that are not well suited to standard optimization algorithms [17, 18], including problems in which the objective function or some constraint functions are discontinuous, non-differentiable, stochastic, or highly nonlinear.
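A toy illustration of the GA idea (not the paper's implementation): a minimal real-coded genetic algorithm with survival-of-the-fittest selection, blend crossover, and Gaussian mutation, applied to a badly scaled objective of the kind where the linear solution is ill-conditioned. All names and parameter values here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(p):
    x, y = p
    # elongated, badly scaled valley: a stand-in for an ill-conditioned
    # reprojection-error surface
    return 1000.0 * (x - 2.0) ** 2 + (y + 1.0) ** 2

def ga_minimize(f, bounds, pop=60, gens=200, mut=0.3):
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))      # random initial population
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]        # survival of the fittest
        mates = elite[rng.integers(0, len(elite), size=(pop, 2))]
        alpha = rng.random((pop, 1))
        P = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]   # blend crossover
        P += rng.normal(0, mut, P.shape) * (rng.random((pop, 1)) < 0.5)  # mutation
        P = np.clip(P, lo, hi)
    fit = np.array([f(p) for p in P])
    return P[np.argmin(fit)]

best = ga_minimize(objective, bounds=[(-10, 10), (-10, 10)])
```

The point is only the mechanics: no derivatives or matrix inversion are needed, so near-singular configurations that break the closed-form step do not break the search.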

3. Which pattern? Biasing aspects of planar calibration patterns and detection methods
 Control points detected using the centroid-recovery principle can potentially be corrupted by both perspective bias and distortion bias, with the likelihood of greater bias magnitude in a typical camera.
 The effect of bias for circular-dot patterns is negligible if the dot diameter is roughly less than ten pixels.
 The hierarchy of potential bias sources is then: those emanating from lens distortion, then those from perspective distortion, and lastly those from erroneous conic fitting.

4. Improved robust and accurate calibration method used for machine vision application
 The best performance seems to be achieved with a maximum rotation angle of around 45 deg, but the precision of corner detection decreases as the rotation angle and the distance of the pattern plane from the camera increase.

5. A New Easy Camera Calibration Based on Circular Points


 It is worth noting that, generally speaking, using the Euclidean distance as the minimization criterion outperforms using the algebraic distance in feature extraction.
6. Do we really need an accurate calibration pattern to achieve reliable camera calibration?
 We have noticed that multiple views taken with a 90-degree rotation around the optical axis lead to a better estimate (convergence and accuracy) of the calibration parameters.
7. Four-step camera calibration procedure with implicit image correction. For a non-zero radius, there are only special cases in which the ellipse centroid and the projected circle center coincide, i.e. a_31 = a_32 = 0. (This paper has a neat equation proving this fact.)
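The centroid/center discrepancy in item 7 can be checked numerically with synthetic homographies (the values below are invented): when the third-row perspective terms are nonzero, the mean of the projected circle points no longer coincides with the projected circle center, while for an affine map (both terms zero) it does. The mean of sampled boundary points is used here as a simple proxy for a detected centroid:

```python
import numpy as np

def project(H, pts):
    """Apply a 2-D homography to an (N, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:]

# dense samples of a circle of radius 0.1 centered at (0.5, 0.5)
t = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
circle = np.stack([0.5 + 0.1 * np.cos(t), 0.5 + 0.1 * np.sin(t)], axis=1)
center = np.array([[0.5, 0.5]])

# same map twice, once with perspective terms h31, h32 and once without
H_persp = np.array([[900., 10., 300.], [5., 880., 250.], [0.4, -0.3, 1.]])
H_affine = np.array([[900., 10., 300.], [5., 880., 250.], [0., 0., 1.]])

bias_persp = np.linalg.norm(
    project(H_persp, circle).mean(0) - project(H_persp, center)[0])
bias_affine = np.linalg.norm(
    project(H_affine, circle).mean(0) - project(H_affine, center)[0])
```

`bias_affine` vanishes to numerical precision while `bias_persp` is on the order of a pixel, which is the perspective bias the pattern papers in item 3 warn about.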
8. Robust Camera Calibration using Inaccurate Targets:
 From a general point of view this means that the optimization was not able to fit the theoretical checkerboard model with a zero-mean Gaussian error, which in turn highlights the presence of some systematic error source. Basically, this source can be correlated with three causes: a localization bias introduced by the subpixel corner detector, the inability of the adopted camera model to fully capture the real image formation process (namely lens distortions), or some unknown discrepancy between the theoretical checkerboard model and the printed one.
 We can see that, notwithstanding the large average measurement error, the setup that uses the fronto-parallel images obtains a very low reprojection error. This apparent nonsense is in fact due to the lower error introduced by the corner detector: when the model is correctly estimated, this zero-mean Gaussian error does not hinder a correct camera calibration, yet it contributes to the magnitude of the reprojection RMS error. On the other hand, the use of weakly angulated images places fewer constraints on the parameters and thus leads to both a less accurate mono calibration and a less accurate stereo reconstruction. In this sense, the reprojection error is not always a good indicator since, when the calibration setup is good, it tends to be dominated by localization errors.
 Orthogonal roll angles must be present to break the projective coupling between IO and EO parameters. Although it might be possible to achieve this decoupling without 90° image rotations, through provision of a strongly 3D object-point array, it is always recommended to have 'rolled' images in the self-calibration network.
