Error Analysis in Homography Estimation by First Order Approximation Tools: A General Technique

J Math Imaging Vis (2009) 33: 281–295
DOI 10.1007/s10851-008-0113-2
Abstract  This paper shows how to analytically calculate the statistical properties of the errors in estimated parameters. The basic tools to achieve this aim include first order approximation/perturbation techniques, such as matrix perturbation theory and Taylor series. This analysis applies to a general class of parameter estimation problems that can be abstracted as a linear (or linearized) homogeneous equation. Of course, there may be many reasons why one might wish to have such estimates. Here, we concentrate on the situation where one might use the estimated parameters to carry out some further statistical fitting or (optimal) refinement. In order to make the problem concrete, we take homography estimation as a specific problem. In particular, we show how the derived statistical errors in the homography coefficients allow improved approaches to refining these coefficients through subspace constrained homography estimation (Chen and Suter in Int. J. Comput. Vis. 2008). Indeed, having derived the statistical properties of the errors in the homography coefficients, before subspace constrained refinement, we do two things: we verify the correctness through statistical simulations, but we also show how to use the knowledge of the errors to improve the subspace based refinement stage. Comparison with the straightforward subspace refinement approach (without taking into account the statistical properties of the homography coefficients) shows that our statistical characterization of these errors is both correct and useful.

Keywords  Error analysis · Matrix perturbation theory · Singular value decomposition · Low rank matrix approximation · Homography · First order approximation · Mahalanobis distance

P. Chen (✉)
School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China
e-mail: [email protected]

P. Chen
Shenzhen Institute of Advanced Integration Technology, CAS/CUHK, Shenzhen, China

D. Suter
ARC Centre for Perceptive and Intelligent Machines in Complex Environments, Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia

1 Introduction

Parameter estimation is a common problem in engineering. In some applications, parameter estimation is the ultimate goal, while in others, the estimated parameters are further fed to follow-up procedures. In the latter case, knowledge of the statistical properties of the errors in the estimated parameters (such as knowing the covariance matrix) helps one "intelligently" design algorithms that use these parameters for further calculations.

Suppose the problem of estimating the parameter θ = f(x) (given data x) can be abstracted as:

F(θ, x) = 0    (1)

We are specifically interested in the special case

Aθ = 0    (2)

where A is a linear operator (matrix) formed by the data values.

If we had the more simple (to analyze) case

θ = f(x)    (3)
then conceptually, the computation of the covariance matrix of the errors in θ can be expressed as:

C_θ = (∂θ/∂x) C_x (∂θ/∂x)^T    (4)

where C_x is the covariance matrix of x.

Although (4) looks direct and simple, it is not always the case that there are direct ways available to compute the partial differentiation in (4). In particular, we do not have such an explicit expression when calculating the singular vector of a matrix associated with its least singular value.
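When an explicit mapping θ = f(x) with a computable Jacobian is available, the propagation rule (4) can be checked numerically. The following sketch is our own illustration, not from the paper; the polar-coordinate estimator is a hypothetical stand-in for f:

```python
import numpy as np

# Numerical check of the first order covariance propagation (4),
# C_theta = (d theta / d x) C_x (d theta / d x)^T, on a toy estimator:
# theta = (r, phi), the polar coordinates of a 2-D point x.

def jacobian(x):
    # d(r, phi)/d(x1, x2) for r = |x|, phi = atan2(x2, x1)
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r,      x[1] / r],
                     [-x[1] / r**2,  x[0] / r**2]])

x0 = np.array([3.0, 4.0])
C_x = np.diag([1e-4, 4e-4])                 # covariance of the noise in x

J = jacobian(x0)
C_theta = J @ C_x @ J.T                     # analytic prediction, as in (4)

# Monte Carlo estimate of the same covariance
rng = np.random.default_rng(0)
xs = x0 + rng.multivariate_normal(np.zeros(2), C_x, size=200_000)
thetas = np.column_stack([np.hypot(xs[:, 0], xs[:, 1]),
                          np.arctan2(xs[:, 1], xs[:, 0])])
C_mc = np.cov(thetas.T)

print(np.allclose(C_theta, C_mc, rtol=0.05, atol=1e-6))
```

With noise this small, the analytic prediction and the sample covariance agree to within a few percent. The difficulty addressed in Sect. 3 is obtaining the analogue of J when θ is defined only implicitly, as the singular vector of a data matrix.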
In related work [13, 14], Haralick proposed to calculate the propagation of perturbations in the observed variables x, in the minimization problem of

min_θ F(θ, x)    (5)

The perturbation of x results in an error Δθ in θ, as:

Δθ = −(∂²F/∂θ²)^{−1} (∂²F/∂θ∂x) Δx    (6)

Then, the covariance matrix of Δθ can be calculated as

C_θ = (∂²F/∂θ²)^{−1} (∂²F/∂θ∂x) C_x (∂²F/∂θ∂x)^T (∂²F/∂θ²)^{−1}    (7)

However, in many cases, two difficulties prevent us from directly employing (6) and (7) to calculate C_θ. First, as mentioned above, there is no explicit formula for the partials in (6), for example, when calculating the singular vectors. Second, ∂²F/∂θ² is not always invertible, as in homography estimation and other parameter estimation problems where the number of degrees of freedom of the parameters is less than the number of parameters.

We note that there is much work that concentrates on how to estimate the optimal parameters, including the Taubin method [31], the renormalization method [17–19], the HEIV method [22], the FNS method and its variants [7–9], and the equilibration method [24–27]. In [21], it is reported that a more accurate estimate can be obtained by taking into account higher-order error. In [6, 20, 21], a rigorous KCR (Kanatani-Cramer-Rao) bound for the uncertainty in the estimated parameters is given. For comprehensive reviews on parameter estimation and its applications in computer vision, see [5, 10, 11, 16, 20, 21, 28, 29, 35–37]. However, though many of these target the same general forms as above (e.g. (2)), they do not generally characterize the error in those estimated parameters.

In contrast, in this paper our primary focus is to analytically characterize the uncertainty of the estimated parameters. As said earlier, this is useful when those estimated parameters are further fed to follow-up statistical fitting/refinements. We consider the class of parameter estimation problems that can be abstracted as solving a (homogeneous) linear equation (2), where the singular vector associated with the least singular value is the estimate of the parameters of interest (essentially the Direct Linear Transform or DLT algorithm—see below). In particular, we focus on homography estimation. This is because, if one derives a series of such homographies from the same pair of images, in principle one can improve the accuracy of the estimated homographies by further statistical refinement (by exploiting a rank constraint). However, such a refinement stage requires estimates of the error correlations between the homography coefficients—which is thus a useful example to illustrate our analysis.

In Sect. 2, we first review the normalized Direct Linear Transformation (DLT) algorithm [15] and the subspace constrained homography estimation [4]. In Sect. 3, we present how to analytically compute the statistical properties of the errors in the estimated homography parameters, and more generally in other linearized parameter estimation problems. In Sect. 4, we present simulations that fit the analytically calculated statistics very well. In Sect. 5, the usefulness of this statistical analysis is demonstrated in the subspace constrained approach to homography estimation.

2 Normalized DLT Algorithm and Subspace Constrained Homography Estimation

2.1 Normalized DLT

In this section, we review the normalized direct linear transformation (DLT) algorithm [15] in general (and for homography estimation in particular).

The DLT approach is essentially to take the singular subspace as the solution to (2) (and this is invariably by SVD). In what follows, we concentrate on homography estimation but, as far as the DLT goes, the methodology of this paper applies to any setting (e.g. estimation of fundamental matrices) that leads to the same form (homogeneous linear equation as in (2))—it is just that the particular instantiation of A will be different.

It is well known that the direct approach (via SVD) does not always produce good results and that a "normalizing" step generally improves the results—see the end of this subsection (which is explained in the case of homography estimation but easily generalizes to other cases).

For a homography

H = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}
which maps x = [x_1 x_2 1]^T on the first view to x' = [x'_1 x'_2 1]^T on the second view: x' = λHx.

From x' × Hx = 0, each pair of matches, {x_i, x'_i}, produces a 3 × 9 matrix:

A_i = \begin{bmatrix} \mathbf{0}^T & -x_i^T & x'_{2,i} x_i^T \\ x_i^T & \mathbf{0}^T & -x'_{1,i} x_i^T \\ -x'_{2,i} x_i^T & x'_{1,i} x_i^T & \mathbf{0}^T \end{bmatrix}    (8)

which satisfies A_i h = 0, with h = [h_1 h_2 ... h_9]^T. Stack {A_i} as

A = [A_1^T ... A_n^T]^T    (9)

Ah = 0 holds. The solution for h is the right singular vector of A associated with the least singular value. This is the DLT algorithm [15] for homography estimation.
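The DLT just described can be sketched in a few lines of numpy. This is our own illustration (the function name is ours), and the normalization step recommended below is deliberately omitted here:

```python
import numpy as np

# A minimal sketch of the (unnormalized) DLT of Sect. 2.1: stack the
# 3x9 blocks A_i of (8) into A as in (9), and take the right singular
# vector of A associated with the least singular value as h.

def dlt_homography(pts1, pts2):
    """pts1, pts2: (n, 2) arrays of matched inhomogeneous points."""
    rows = []
    for (x1, x2), (xp1, xp2) in zip(pts1, pts2):
        x = np.array([x1, x2, 1.0])
        z = np.zeros(3)
        rows.append(np.concatenate([z, -x, xp2 * x]))
        rows.append(np.concatenate([x, z, -xp1 * x]))
        rows.append(np.concatenate([-xp2 * x, xp1 * x, z]))
    A = np.vstack(rows)                      # the 3n x 9 matrix of (9)
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                               # least singular value direction
    return h.reshape(3, 3)

# Usage: recover a known homography from 10 exact correspondences.
rng = np.random.default_rng(1)
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.03, 0.95, -2.0],
                   [1e-4, -2e-4, 1.0]])
pts1 = rng.uniform(0, 100, size=(10, 2))
q = np.c_[pts1, np.ones(10)] @ H_true.T
pts2 = q[:, :2] / q[:, 2:]

H_est = dlt_homography(pts1, pts2)
H_est /= H_est[2, 2]                         # fix the scale ambiguity
print(np.allclose(H_est, H_true, atol=1e-6))
```

On noise free matches the least singular value is numerically zero and H is recovered exactly up to scale; with noisy matches, h becomes the perturbed singular vector whose error statistics are the subject of Sect. 3.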
In [15], a normalization step is recommended. It consists of a translation and a scaling, so that the centroid of the transformed points is the origin (0, 0) and the average distance from the origin is √2. Suppose the centroid of the original points is (c_1, c_2) and the average distance to this centroid is l. The normalization transform T is

T = \begin{bmatrix} 1/l & 0 & -c_1/l \\ 0 & 1/l & -c_2/l \\ 0 & 0 & 1 \end{bmatrix}    (10)

Similarly, there exists a normalization transform for the second view, T'.

The normalized DLT algorithm takes the DLT algorithm as its core. First, calculate the transformed points for each view and their associated normalization transforms T and T'. Second, using the DLT, calculate the homography H̄ from the normalized matches. Last, in the denormalization step, set

H = T'^{−1} H̄ T    (11)

as the homography in the original views.
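The normalization transform (10) can be sketched as follows (our illustration; note that scaling by 1/l gives the transformed points an average distance of 1 from the origin, so obtaining the average distance √2 quoted above would require scaling by √2/l instead; here we follow (10) literally):

```python
import numpy as np

# The normalization transform T of (10): translate the points so their
# centroid is the origin and scale by the average distance l to the centroid.

def normalization_transform(pts):
    """pts: (n, 2) array; returns the 3x3 transform T of (10)."""
    c = pts.mean(axis=0)                        # centroid (c1, c2)
    l = np.linalg.norm(pts - c, axis=1).mean()  # average distance to centroid
    return np.array([[1 / l, 0,     -c[0] / l],
                     [0,     1 / l, -c[1] / l],
                     [0,     0,      1]])

pts = np.array([[10.0, 20.0], [30.0, -5.0], [-4.0, 7.0], [15.0, 15.0]])
T = normalization_transform(pts)
pts_h = np.c_[pts, np.ones(len(pts))]           # homogeneous coordinates
pts_n = (pts_h @ T.T)[:, :2]

print(np.allclose(pts_n.mean(axis=0), 0))                      # centroid at origin
print(np.isclose(np.linalg.norm(pts_n, axis=1).mean(), 1.0))   # unit average distance
```

After running the DLT on the transformed matches of both views, the denormalization of (11), H = T'^{−1} H̄ T, maps the result back to the original coordinates.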
2.2 Homography Estimation Embedded in a Dimension Four Subspace

Thus far, we are not saying anything new. The above procedure can now be considered routine in the computer vision community. However, the settings we are really concerned with are ones where the output of the DLT is to be used in further estimations/refinements. Here, one does generally need to take account of the correlations in the outputs of the DLT (and hence our focus on calculating those correlations). We illustrate with homography refinement.

It is well known that one can collect the coefficients of the homographies between two views into a large, rank four matrix H. A brief review of this can be found in Appendix A. In this section, we review how to calculate the homography embedded in a dimension four subspace [4]. Suppose (just for now) the dimension four subspace basis U ∈ R^{9,4} is known and the linearization matrix is A as in (9). The subspace constrained DLT solution is as follows: First, calculate the solution of AUx = 0 as x̂ (in the standard smallest singular value way). Second, take Ux̂ as the solution of the homography, which is obviously embedded in the subspace U.

As in the normalized DLT, we also use a normalization step in this dimension-four constrained homography estimation. Suppose n (n ≥ 4) planes are available. The subspace constrained algorithm is:

1. Taking all the feature points in the n planes as a whole set, calculate the normalization transforms T and T' for the first view and the second view respectively.
2. For each normalized plane, calculate its homography.
3. Calculate the dimension four subspace U of these homographies.
4. For each normalized plane, calculate its subspace-U constrained homography.
5. Calculate the denormalized homographies for all the planes, as in the denormalization step of the normalized DLT.

There are two approaches¹ to calculate the dimension-four subspace U in step 3 above. One obvious approach is to employ the SVD [12] to calculate the dimension four subspace² U of the rank-four matrix H [33, 34]. We refer to this approach as the SVD-based subspace constrained approach, or SVD-Sub-Cnstr, if the SVD is employed in step 3. However, the errors in the estimated parameters produced by the DLT (step 2) cannot be modeled as independent (much less as i.i.d. Gaussian). Thus, although the estimated subspace by the SVD method is the "best" in terms of the Frobenius-norm distance, it is generally not optimal.

¹ For a practical approach other than Sta-Sub-Cnstr, refer to the algorithm in our companion paper [4], where more constraints are utilized to produce a more accurate estimate. The algorithm in [4] can be applied even to the case of as few as three planes.

² We remind the reader that in such a scheme there are now two stages of SVD calculation: first in the DLT for the individual homographies (step 2), and here for step 3.

In the other approach, the statistical properties of the errors in the estimated parameters (the homographies from step 2) are utilized to more optimally calculate the subspace U. We refer to this approach as the statistical subspace constrained approach, or Sta-Sub-Cnstr. More formally, with the covariance matrix of the error, we first employ the bilinear approach [1, 3] or the alternating projection approach [23] to
calculate the weighted rank-four approximation matrix of H: H_4 (see Appendix B). Then, the subspace of H_4 can reasonably be taken as a solution of U.

3 A Statistical Analysis of the Errors in the Estimated Homography

Suppose each of x_i and x'_i is corrupted with noise (ε_{i,1}, ε_{i,2}) and (ε'_{i,1}, ε'_{i,2}), respectively.³ The essence of the analysis below is to represent the errors in the estimated homography in terms of the random variables {ε_{i,1}, ε_{i,2}, ε'_{i,1}, ε'_{i,2}} for 1 ≤ i ≤ n. Here, we use the second subscript in ε_{i,•} (or ε'_{i,•}) to denote the x or y coordinate.

³ Note that x in Sect. 2.1 is used for the homogeneous representation of a feature point. By a slight abuse of notation, we will also use x to represent the feature points in non-homogeneous form: with x and y coordinates as its two entries.

E_i = \begin{bmatrix}
0 & 0 & 0 & -\varepsilon_{i,1} & -\varepsilon_{i,2} & 0 & x'_{i,2}\varepsilon_{i,1} + x_{i,1}\varepsilon'_{i,2} & x'_{i,2}\varepsilon_{i,2} + x_{i,2}\varepsilon'_{i,2} & \varepsilon'_{i,2} \\
\varepsilon_{i,1} & \varepsilon_{i,2} & 0 & 0 & 0 & 0 & -x'_{i,1}\varepsilon_{i,1} - x_{i,1}\varepsilon'_{i,1} & -x'_{i,1}\varepsilon_{i,2} - x_{i,2}\varepsilon'_{i,1} & -\varepsilon'_{i,1} \\
-x'_{i,2}\varepsilon_{i,1} - x_{i,1}\varepsilon'_{i,2} & -x'_{i,2}\varepsilon_{i,2} - x_{i,2}\varepsilon'_{i,2} & -\varepsilon'_{i,2} & x'_{i,1}\varepsilon_{i,1} + x_{i,1}\varepsilon'_{i,1} & x'_{i,1}\varepsilon_{i,2} + x_{i,2}\varepsilon'_{i,1} & \varepsilon'_{i,1} & 0 & 0 & 0
\end{bmatrix}    (13)

where 0 is a 3 × 9 zero matrix and only the ith block E^{4(i−1)+j} is nonzero. From (13), the 3 × 9 matrix E^{4(i−1)+j} can be calculated: see Appendix C.
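The block of (13) is, by construction, linear in the noise variables: it equals ε_{i,1}E^{4(i−1)+1} + ε_{i,2}E^{4(i−1)+2} + ε'_{i,1}E^{4(i−1)+3} + ε'_{i,2}E^{4(i−1)+4} with the basis matrices of Appendix C. The sketch below (ours; helper names are ours) builds those basis matrices by finite differences and checks that the perturbation of A_i matches this sum up to second order cross terms:

```python
import numpy as np

# Numerical check of the perturbation expansion behind (13): the 3x9 block
# A_i of (8) built from noisy points equals the clean A_i plus
# E_i = eps1*E1 + eps2*E2 + eps1p*E3 + eps2p*E4 (the matrices of Appendix C),
# up to second order terms in the noise. A sketch for one match.

def A_block(x, xp):
    """3x9 DLT block of (8); x, xp: inhomogeneous points in the two views."""
    xh = np.array([x[0], x[1], 1.0])
    z = np.zeros(3)
    return np.vstack([np.concatenate([z, -xh, xp[1] * xh]),
                      np.concatenate([xh, z, -xp[0] * xh]),
                      np.concatenate([-xp[1] * xh, xp[0] * xh, z])])

def E_basis(x, xp):
    """E^{4(i-1)+1..4}: derivatives of A_block w.r.t. the four coordinates."""
    basis = []
    for k in range(4):
        def coords(t, k=k):
            d = np.zeros(4); d[k] = t
            return (x[0] + d[0], x[1] + d[1]), (xp[0] + d[2], xp[1] + d[3])
        # Every entry of A_block is affine in each single coordinate, so a
        # unit step gives the exact partial derivative.
        basis.append(A_block(*coords(1.0)) - A_block(*coords(0.0)))
    return basis

x, xp = (2.0, 3.0), (1.5, -0.5)
eps = np.array([1e-3, -2e-3, 1.5e-3, 0.5e-3])
E1, E2, E3, E4 = E_basis(x, xp)
E_i = eps[0] * E1 + eps[1] * E2 + eps[2] * E3 + eps[3] * E4

A_noisy = A_block((x[0] + eps[0], x[1] + eps[1]), (xp[0] + eps[2], xp[1] + eps[3]))
residual = A_noisy - A_block(x, xp) - E_i
print(np.abs(residual).max() < 1e-5)   # only second order (~1e-6) terms remain
```

The leftover residual contains only the ε_{i,•}ε'_{i,•} cross terms, consistent with the first order analysis of this section.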
Then, for each 3n × 9 matrix E_i, calculate C_i as

The technique above can be generalized to other parameter estimation problems which can be abstracted as solving a linear or linearized system. Suppose the data matrix A ∈ R^{m,r} and the parameter θ ∈ R^r satisfy the constraint Aθ = 0, which only approximately holds in practice because the data matrix A is generally corrupted with an error E. By analyzing the linearization process, E can generally be represented as E = Σ_i ε_i E^i, where ε_i is a random variable representing the corresponding noise component.

3.1.1 Replacing Noise Free Data

where the (n − 1)/n is the ith component. Thus, the error in the inverse of l is:

Δ(1/l) = −(1/(2nl³)) Σ_{i=1}^{n} (x_{i,1} Δ(x_{i,1}) + x_{i,2} Δ(x_{i,2}))    (33)

The normalized image feature is x_{i,•}/l. The error in it is Δ(x_{i,•}/l) = x_{i,•} Δ(1/l) + Δ(x_{i,•})/l, which can be expressed as
p^T_{i,•} e. Similarly, the error in the second normalized view is p'^T_{i,•} e. We stack the vectors p and p' as

P = [p_{1,1} p_{1,2} p'_{1,1} p'_{1,2} ... p_{n,1} p_{n,2} p'_{n,1} p'_{n,2}]^T

Pe are the errors in the normalized coordinates.

In matrix terms, (38) can be expressed as:

Δ(h/‖h‖_F) = Δ(h)/‖h‖_F − h h^T Δ(h)/‖h‖_F³ = (I_9 − h h^T/‖h‖_F²) Δ(h)/‖h‖_F    (39)

According to (21), H projects each point {x_{i,1}, x_{i,2}} of the first view on the second view as, by taking x'_{i,1} as an example:

x'_{i,1} = (h_1 x_{i,1} + h_2 x_{i,2} + h_3)/(h_7 x_{i,1} + h_8 x_{i,2} + h_9)    (40)

Due to the noise, the projection upon the second view is

((h_1 + Δ(h_1))(x_{i,1} + ε_{i,1}) + (h_2 + Δ(h_2))(x_{i,2} + ε_{i,2}) + h_3 + Δ(h_3)) / ((h_7 + Δ(h_7))(x_{i,1} + ε_{i,1}) + (h_8 + Δ(h_8))(x_{i,2} + ε_{i,2}) + h_9 + Δ(h_9))    (41)

According to the first order approximation, (b + Δb)/(a + Δa) ≈ b/a + Δb/a − bΔa/a² approximately holds. From this, (41) equals

x'_{i,1} + A/E − BD/E²    (42)

where A = h_1 ε_{i,1} + h_2 ε_{i,2} + x_{i,1} Δ(h_1) + x_{i,2} Δ(h_2) + Δ(h_3), D = h_1 x_{i,1} + h_2 x_{i,2} + h_3, B = h_7 ε_{i,1} + h_8 ε_{i,2} + x_{i,1} Δ(h_7) + x_{i,2} Δ(h_8) + Δ(h_9), and E = h_7 x_{i,1} + h_8 x_{i,2} + h_9. Note that second order terms, like Δ(h_•) ε_{i,◦}, have been dropped. Including the noise in the observed x'_{i,1}, the projection error is actually A/E − BD/E² − ε'_{i,1}. It can be represented as q^T_{i,1} e. Similarly, from the projection of the second coordinate, q^T_{i,2} e can be obtained. Stack q^T_{i,•} as

Q = [q_{1,1} q_{1,2} ... q_{n,1} q_{n,2}]^T    (43)

In practice, the projection error is actually available, as μ. Then, Qe = μ approximately holds. Because i.i.d. Gaussian noise is assumed here, ‖q_{i,•}‖_F² σ² = μ²_{2(i−1)+•}. Then, the noise level is estimated as:

σ̂ = ‖μ‖_F / ‖Q‖_F    (44)

Note that the noise levels in the two views can be assumed to be different, up to a known scale. Suppose σ_1 and σ_2 are the noise levels in the first and second views, respectively, both unknown. Suppose, further, σ_1 = κσ_2. Then, by multiplying the (4•+1)th and (4•+2)th columns of Q by a factor of κ, we can calculate σ̂_2, according to (44). Consequently, σ̂_1 = κσ̂_2.

4 Simulations of the Errors in the Homography Coefficients

The above concludes our methodology. Our purpose is now to confirm the validity of the correlation information the above methodology provides, and then demonstrate the effectiveness of that information.

In this section, we carry out simulations to confirm the statistical analysis in Sect. 3. From the theory in Sect. 3, the statistical properties of the errors in the estimated homography can be analytically calculated while calculating the homography using the normalized DLT algorithm.

We compare this theoretical result with simulations. Of course, we know the "ground truth data" in simulations. Thus, first from noise free data, we calculate its homography and the covariance matrix for the errors in the estimated homography from (23). After adding i.i.d. Gaussian noise to the feature points, we similarly calculate the estimated homography and the covariance matrix from the noisy data. This process with noisy data repeats 20,000 times to obtain enough data for statistical properties. Note that the ground truth data is the same for these 20,000 runs and each time random noise is added to the feature points.

Suppose, by (23), the covariance matrices are C and C̃ (C̃ is different every time), calculated from noise free feature points and noisy ones respectively:

C = U diag{λ_1 λ_2 ... λ_8 0} U^T  and  C̃ = Ũ diag{λ̃_1 λ̃_2 ... λ̃_8 0} Ũ^T

We calculate three types of indexes, for 1 ≤ i ≤ 8:

ρ̃_i = (ũ_9 − u_9)^T ũ_i / √λ̃_i    (45)

ρ_i = (ũ_9 − u_9)^T u_i / √λ_i    (46)

τ_i = ((ũ_9 − u_9)^T u_i − (ũ_9 − u_9)^T ũ_i) / |(ũ_9 − u_9)^T u_i|    (47)

Note that ũ_9 and u_9 are the estimated homography from noisy data and the ground truth homography, respectively, from the analysis in Sect. 3.1.1. The numerator parts of ρ̃_i and ρ_i are the errors projected upon the directions of ũ_i and u_i, respectively. Because C̃ and C are the covariance matrices of the errors ũ_9 − u_9, ρ̃_i and ρ_i should be of 0-mean-1-variance Gaussian distribution. τ_i quantifies the difference caused by the replacement of noisy data for noise free data.

The simulations in Fig. 1 and Fig. 2 show that both ρ̃_i and ρ_i obey the 0-mean-1-variance Gaussian distribution. From the fact that ρ̃_i in Fig. 1 is a 0-mean-1-variance Gaussian variable, we can draw the conclusion that the replacement of noisy data for noise free data can be overlooked. This is also
confirmed by the simulations in Fig. 3, where we can see that the magnitude of τ_i exceeds 0.01 in very few cases.

Consider now the 9th direction. As discussed in Sect. 3.1.2, we scale the normalized homography up to a factor of 1 − E(γ) and set the 9th singular value of C as var(γ), as in (30) and (31). Here, we use simulations to validate this approach. We only simulate the projection of the errors upon the direction ũ_9 (because of (ũ_9 − u_9)^T ũ_9 = −(ũ_9 − u_9)^T u_9). We know its expectation E(γ) and, in simulations, we can furthermore calculate γ from (29) and (28). Thus, we calculate the following indexes:

ξ = (ũ_9 − u_9)^T ũ_9 − E(γ)    (48)

ξ̃ = (ũ_9 − u_9)^T ũ_9 − γ    (49)

Note that in practice we can only calculate E(γ). We calculate ξ̃ only for the purpose of validating (28). From the analysis in Sect. 3.1.2, ξ̃ should be almost 0 compared with ξ, because from (27) ξ̃ only has the 2nd and 3rd order terms of γ, while ξ is a Chi-square like random variable. From the simulations of ξ and ξ̃ in Fig. 4, the error of ξ̃ can be totally overlooked compared with ξ (with a scale up to 10⁻⁵). This means that the error of (ũ_9 − u_9)^T ũ_9 is almost modeled by γ in (28). The simulation of ξ shows that ξ is a Chi-square like random variable, also confirming the rationality of (30) and (31).
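The flavor of this validation can be reproduced with a simplified Monte Carlo sketch. Since the analytic covariance of (23) is not reproduced in this excerpt, we substitute the empirical covariance of the DLT errors over many noisy trials and standardize the projected errors in the manner of (45)–(46); all function and variable names here are ours:

```python
import numpy as np

# A simplified version of the Sect. 4 protocol: run the DLT on many noisy
# copies of the same scene, form the covariance of the errors in h, and
# check that the errors projected on its eigenvectors, standardized as in
# (45)-(46), have roughly unit variance.

def dlt(pts1, pts2):
    rows = []
    for (x1, x2), (y1, y2) in zip(pts1, pts2):
        x = np.array([x1, x2, 1.0]); z = np.zeros(3)
        # two independent rows of the block (8) per match
        rows.append(np.concatenate([z, -x, y2 * x]))
        rows.append(np.concatenate([x, z, -y1 * x]))
    h = np.linalg.svd(np.vstack(rows))[2][-1]
    return h / np.linalg.norm(h)

rng = np.random.default_rng(2)
H_true = np.array([[1.0, 0.1, 2.0], [-0.05, 0.9, 1.0], [1e-3, -1e-3, 1.0]])
pts1 = rng.uniform(-1, 1, size=(20, 2))
q = np.c_[pts1, np.ones(20)] @ H_true.T
pts2 = q[:, :2] / q[:, 2:]

h0 = dlt(pts1, pts2)                      # noise free ("ground truth") estimate
sigma = 1e-3
errs = []
for _ in range(2000):
    n1 = pts1 + sigma * rng.standard_normal(pts1.shape)
    n2 = pts2 + sigma * rng.standard_normal(pts2.shape)
    h = dlt(n1, n2)
    h *= np.sign(h @ h0)                  # fix the sign ambiguity of the SVD
    errs.append(h - h0)
errs = np.asarray(errs)

C = np.cov(errs.T)                        # empirical 9x9 covariance of the errors
lam, U = np.linalg.eigh(C)                # ascending eigenvalues
rho = errs @ U[:, 1:] / np.sqrt(lam[1:])  # standardized projections; the near-zero
                                          # eigenvalue direction (cf. the 9th) is skipped
print(np.allclose(rho.std(axis=0), 1.0, atol=0.1))
```

With the paper's analytic C of (23) in place of the empirical one, these standardized projections would play the role of the ρ̃_i and ρ_i of (45)–(46).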
5 Simulation Result of Subspace Constrained Homography Estimation

It has been shown [2] that, for the case of more than 4 planes over 2 views, the accuracy of the homographies can be improved by utilizing the rank 4 constraint. However, the experimental setting in [2] was impractical, so that we could avoid the complications that the current paper now addresses (SVD being sub-optimal in the presence of non-i.i.d.-Gaussian noise). In this section, we will show that the mapping accuracy of the homographies, in a more practical setting, can be improved by employing the statistical properties of the homography coefficients. Because we need the ground truth data in the comparison, we also resort to simulations.

First, we compare the subspace constrained homography estimation in two cases. One is to use the SVD [12] to calculate the rank 4 subspace from more than 4 homographies, then use the subspace constrained method to refine each homography. We refer to this method as SVD-Sub-Cnstr. The other is the same, except that we use the correlation information derived in this paper in calculating the rank 4 subspace. More formally, we use the Bilinear approach⁵ in [1, 3] to calculate the rank 4 weighted approximation matrix H_4. Then the subspace spanning H_4 is taken as the solution of the subspace. We refer to this second method as Sta-Sub-Cnstr. From Fig. 5, we can see that the general SVD based method SVD-Sub-Cnstr even increases the mapping error, compared with the normalized DLT algorithm. The superiority of the Sta-Sub-Cnstr can be easily seen in Fig. 5.

⁵ The alternate projection (AP) approach in [23] achieves the same aim. However, the Bilinear approach in [1, 3] is preferred here because the errors in each homography can reasonably be assumed to be independent of those in another homography.

The next experiment shows how the statistical properties can be "intelligently" employed in calculating the rank 4 subspace. In the simulations above, we add the same level of noise to the feature points in all planes. In the following experiment, we add equal-level noise to the first n − 1 planes, and add much stronger noise in the last plane. Intuitively, in the SVD based method SVD-Sub-Cnstr, the other planes with weak noise will be affected by the plane that is severely polluted by noise. What will happen in the Sta-Sub-Cnstr? Note that the plane with stronger noise is treated the same as the others when using the subspace constrained methods, whether the SVD-Sub-Cnstr or the Sta-Sub-Cnstr; i.e., we do not know the
Fig. 6 Simulations of mapping errors, compared with the normalized DLT algorithm, when one plane is severely corrupted with noise
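The effect illustrated in Fig. 6 can be mimicked in a toy setting. The sketch below is our own illustration, with a single noise variance per column standing in for the full covariance weighting of Appendix B; it compares the rank 4 subspace from a plain truncated SVD with a variance weighted one when one column, standing in for the severely corrupted plane, is much noisier than the rest:

```python
import numpy as np

# Toy version of the Sect. 5 comparison. The columns of M play the role of
# the n homography vectors (a rank 4 matrix plus noise); one column is far
# noisier than the others. The plain truncated SVD treats all columns alike,
# while the weighted fit (here the simple special case of one variance per
# column, i.e. columns scaled by 1/sigma_j before the SVD) down-weights the
# polluted column. We compare the recovered 4-dimensional subspaces.

def subspace_gap(U1, U2):
    """Largest principal angle between two orthonormal column spans."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1, 1))

rng = np.random.default_rng(3)
M0 = rng.standard_normal((9, 4)) @ rng.standard_normal((4, 20))  # rank 4
sig = np.full(20, 0.01); sig[0] = 2.0        # one severely corrupted column
M = M0 + rng.standard_normal(M0.shape) * sig

U_svd = np.linalg.svd(M)[0][:, :4]           # unweighted rank 4 subspace
U_w = np.linalg.svd(M / sig)[0][:, :4]       # variance weighted subspace

U_true = np.linalg.svd(M0)[0][:, :4]
print(subspace_gap(U_true, U_w) < subspace_gap(U_true, U_svd))
```

When every column has the same variance, the two subspaces coincide; the gap opens exactly when the noise is heteroscedastic, which is the situation targeted by Sta-Sub-Cnstr.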
Appendix B: Definition of the Weighted Rank-r Approximation Matrix

We suppose 0-mean noise in the entries of the matrix M ∈ R^{m,n}, but we do not assume row or column independence. In order to characterize the noise in M, we first rearrange M as a vector vec(M) ∈ R^{mn,1}. Suppose the covariance matrix for the noise in vec(M) is C. The weighted rank-r approximation matrix of M is defined to be the M_r that has the properties: rank(M_r) = r, and M_r minimizes the objective function (vec(M − X))^T C^− vec(M − X). Methods for finding the rank r approximation matrix can be found, such as the Bilinear approach in [1, 3] and the alternate projection (AP) approach in [23].

Appendix C: Definition of E^{4(i−1)+k} in (17) in Sect. 3.1

E^{4(i−1)+1} = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 & x'_{i,2} & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & -x'_{i,1} & 0 & 0 \\ -x'_{i,2} & 0 & 0 & x'_{i,1} & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

E^{4(i−1)+2} = \begin{bmatrix} 0 & 0 & 0 & 0 & -1 & 0 & 0 & x'_{i,2} & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & -x'_{i,1} & 0 \\ 0 & -x'_{i,2} & 0 & 0 & x'_{i,1} & 0 & 0 & 0 & 0 \end{bmatrix}

E^{4(i−1)+3} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -x_{i,1} & -x_{i,2} & -1 \\ 0 & 0 & 0 & x_{i,1} & x_{i,2} & 1 & 0 & 0 & 0 \end{bmatrix}

E^{4(i−1)+4} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & x_{i,1} & x_{i,2} & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -x_{i,1} & -x_{i,2} & -1 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

References

1. Chen, P.: An investigation of statistical aspects of linear subspace analysis for computer vision applications. Ph.D. Thesis, Monash University (2004)
2. Chen, P., Suter, D.: An analysis of linear subspace approaches for computer vision and pattern recognition. Int. J. Comput. Vis. 68(1), 83–106 (2006)
3. Chen, P., Suter, D.: A bilinear approach to the parameter estimation of a general heteroscedastic linear system, with application to conic fitting. J. Math. Imaging Vis. 28(3), 191–208 (2007)
4. Chen, P., Suter, D.: Rank constraints for homographies over two views: Revisiting the rank four constraint. Int. J. Comput. Vis. (2008, to appear)
5. Chernov, N.: On the convergence of fitting algorithms in computer vision. J. Math. Imaging Vis. 27(3), 231–239 (2007)
6. Chernov, N., Lesort, C.: Statistical efficiency of curve fitting algorithms. Comput. Stat. Data Anal. 47(4), 713–728 (2004)
7. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: On the fitting of surfaces to data with covariances. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1294–1303 (2000)
8. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: A new approach to constrained parameter estimation applicable to some computer vision problems. Image Vis. Comput. 22(2), 85–91 (2004)
9. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: From FNS to HEIV: a link between two vision parameter estimation methods. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 264–268 (2004)
10. Chum, O., Pajdla, T., Sturm, P.: The geometric error for homographies. Comput. Vis. Image Underst. 97(1), 86–102 (2005)
11. Chum, O., Werner, T., Matas, J.: Two-view geometry estimation unaffected by a dominant plane. In: Proc. Conf. Computer Vision and Pattern Recognition (1), pp. 772–779 (2005)
12. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins Press, Baltimore (1996)
13. Haralick, R.M.: Propagating covariance in computer vision. In: Proc. of 12th ICPR, pp. 493–498 (1994)
14. Haralick, R.M.: Propagating covariance in computer vision. Int. J. Pattern Recogn. Artif. Intell. 10(5), 561–572 (1996)
15. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge Univ. Press, Cambridge (2003)
16. Jain, A.K., Mao, J., Duin, R.: Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 22(1), 4–37 (2000)
17. Kanatani, K.: Unbiased estimation and statistical analysis of 3-d rigid motion from two views. IEEE Trans. Pattern Anal. Mach. Intell. 15(1), 37–50 (1993)
18. Kanatani, K.: Statistical bias of conic fitting and renormalization. IEEE Trans. Pattern Anal. Mach. Intell. 16(3), 320–326 (1994)
19. Kanatani, K.: Statistical Optimization for Geometric Computation: Theory and Practice. Elsevier, Amsterdam (1996)
20. Kanatani, K.: Uncertainty modeling and model selection for geometric inference. IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1307–1319 (2004)
21. Kanatani, K.: Statistical optimization for geometric fitting: Theoretical accuracy bound and high order error analysis. Int. J. Comput. Vis. (2008, in print)
22. Leedan, Y., Meer, P.: Heteroscedastic regression in computer vision: Problems with bilinear constraint. Int. J. Comput. Vis. 37(2), 127–150 (2000)
23. Manton, J.H., Mahony, R., Hua, Y.: The geometry of weighted low-rank approximations. IEEE Trans. Signal Process. 51(2), 500–514 (2003)
24. Mühlich, M., Mester, R.: A considerable improvement in non-iterative homography estimation using TLS and equilibration. Pattern Recogn. Lett. 22(11), 1181–1189 (2001)
25. Mühlich, M., Mester, R.: The role of total least squares in motion analysis. In: ECCV, pp. 305–321 (1998)
26. Mühlich, M., Mester, R.: Subspace methods and equilibration in computer vision. In: Scandinavian Conference on Image Analysis (2001)
27. Mühlich, M., Mester, R.: Unbiased errors-in-variables estimation using generalized eigensystem analysis. In: ECCV Workshop SMVP, pp. 38–49 (2004)
28. Nadabar, S.G., Jain, A.K.: Parameter estimation in Markov random field contextual models using geometric models of objects. IEEE Trans. Pattern Anal. Mach. Intell. 18(3), 326–329 (1996)
29. Nayak, A., Trucco, E., Thacker, N.A.: When are simple LS estimators enough? An empirical study of LS, TLS, and GTLS. Int. J. Comput. Vis. 68(2), 203–216 (2006)
30. Stewart, G.W., Sun, J.G.: Matrix Perturbation Theory. Academic Press, San Diego (1990)
31. Taubin, G.: Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 13(11), 1115–1138 (1991)
32. Wilkinson, J.H.: The Algebraic Eigenvalue Problem. Clarendon, Oxford (1965)
33. Zelnik-Manor, L., Irani, M.: Multi-view subspace constraints on homographies. In: Proc. Int'l Conf. Computer Vision, pp. 710–715 (1999)
34. Zelnik-Manor, L., Irani, M.: Multi-view subspace constraints on homographies. IEEE Trans. Pattern Anal. Mach. Intell. 24(2), 214–223 (2002)
35. Zhang, Z.: Parameter estimation techniques: A tutorial with application to conic fitting. Image Vis. Comput. 15, 59–76 (1997)
36. Zhang, Z.: Determining the epipolar geometry and its uncertainty: A review. Int. J. Comput. Vis. 27(2), 161–195 (1998)
37. Zhang, Z.: On the optimization criteria used in two-view motion analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 717–729 (1998)

Pei Chen received two Ph.D. degrees, on wavelets and computer vision respectively, from Shanghai Jiaotong University in 2001 and from Monash University in 2004. He worked as a postdoctoral researcher with Monash University, as a Senior Research Engineer with Motorola Labs, and then as a Research Professor with the Shenzhen Institute of Advanced Integration Technology, CAS/CUHK, China. He is currently a Professor with the School of Information Science and Technology, Sun Yat-sen University, China. His main research interests include subspace analysis in computer vision, structure from motion, and wavelet applications in image processing.

David Suter holds the position of Professor of Computer Science in the School of Computer Science at The University of Adelaide, South Australia. He was previously Professor of Computer Systems in the Department of Electrical and Computer Systems Engineering at Monash University. During 2008–2010, Professor Suter will also serve as a member of the Australian Research Council College of Experts. He is a Senior Member of the IEEE. His main research interests are Image Processing, Computer Vision, Video Compression, Computer Graphics and Visualization, Data Mining and Artificial Intelligence. He currently serves on the editorial boards of four international journals: the Journal of Mathematical Imaging and Vision; Machine Vision and Applications; the IPSJ Transactions on Computer Vision and Applications; and the International Journal of Computer Vision. He was previously a member of the editorial board of the International Journal of Image and Graphics.