TLS Tutorial
I. INTRODUCTION
Detecting geometric features (lines, circles, surfaces, etc.)
from data points is a fundamental task in several fields of
science and engineering; for instance, metrology, computer
vision, and mobile robotics.
Let Z = {z_1, …, z_n} be a set of measurements or points,
where each point z_i = [x_i  y_i]^T is represented by its rectangular
coordinates. A linear relation between x and y is usually
written as

y = m x + b  (1)

where m is the slope of the straight line and b is the y-axis
intersection. In the classic Least Squares (LS) the abscissa
data (x_i, i = 1, …, n) are assumed to be known exactly, while
the uncertainties of the ordinate data (y_i) are used as weights
for fitting the line y = m x + b, given by (1), to the set of
measurements Z.
The solution to the problem of fitting a line using least squares
regression appears, with complete derivations, in textbooks at many levels:
calculus, linear algebra, numerical analysis, probability,
statistics, and others.
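As a quick illustration of the classic LS fit described above, here is a minimal sketch in Python (NumPy assumed; the data values below are hypothetical, not from this tutorial):

```python
import numpy as np

# Hypothetical sample points, roughly on y = 2x + 1 (illustrative data only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Classic least squares: treat the abscissa data x as exact and solve
# the overdetermined system [x 1] [m b]^T = y in the least-squares sense.
A = np.column_stack([x, np.ones_like(x)])
m, b = np.linalg.lstsq(A, y, rcond=None)[0]
```

This is equivalent to the closed form m = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)², b = ȳ − m x̄.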
However, measured data are never free of uncertainty. This
means that, in order to determine a best fit to a line, a method is
required which takes the uncertainties of both the x and the y data
into account [3]. The Total Least Squares (TLS) regression
was introduced by Golub and Van Loan [2] to deal with both
uncertainties. Despite its usefulness and its simplicity, TLS
has not yet appeared in numerical analysis, statistics or linear
algebra texts.
Introducing students to TLS is the purpose of this tutorial,
and it may complement the usual courses in numerical
analysis, statistics or linear algebra, or serve as a transition
from such courses to a more advanced and specialized course.
L. Romero Muñoz, Facultad de Ingeniería Eléctrica, Ciudad Universitaria,
Universidad Michoacana, 58000, Morelia, México (e-mail: [email protected]).
M. García Villanueva, Facultad de Ingeniería Eléctrica, Ciudad
Universitaria, Universidad Michoacana, 58000, Morelia, México (e-mail:
[email protected]).
C. Gómez Suárez, Facultad de Ingeniería Eléctrica, Ciudad Universitaria,
Universidad Michoacana, 58000, Morelia, México (e-mail: [email protected]).
A line can also be represented in normal form,

x cos θ + y sin θ − ρ = 0  (3)

where ρ ≥ 0 is the shortest distance from the origin to the line and θ is the angle of its normal vector (see Fig. 1).

Fig. 1. Line parameters in the normal form. The shortest distance from the origin to the line is ρ.

III. TOTAL LEAST SQUARES

A. Problem formulation

Given the set of measurements Z, and assuming the same uncertainty for all points, the best line in the total least squares sense minimizes the sum of the squared orthogonal distances d_i from each point z_i to the line ℓ. From the normal form (3), the orthogonal distance d_i from point z_i to line ℓ is

d_i = x_i cos θ + y_i sin θ − ρ  (12)

and the error to minimize is

E(θ, ρ; Z) = Σ_{i=1}^{n} (x_i cos θ + y_i sin θ − ρ)²  (13)

B. Solution

Let's do ∂E/∂ρ = 0 first:

∂E/∂ρ = −2 Σ (x_i cos θ + y_i sin θ − ρ) = 0

which gives

ρ = x̄ cos θ + ȳ sin θ

where x̄ = (1/n) Σ x_i and ȳ = (1/n) Σ y_i are the means of the coordinates. Replacing ρ into equation (13),

E(θ; Z) = Σ [(x_i − x̄) cos θ + (y_i − ȳ) sin θ]²  (9)

So, let's do ∂E/∂θ = 0:

Σ [ −(x_i − x̄)² sin θ cos θ + (x_i − x̄)(y_i − ȳ)(cos²θ − sin²θ) + (y_i − ȳ)² sin θ cos θ ] = 0  (14)
Using the following trigonometric identities, sin 2θ = 2 cos θ sin θ and cos 2θ = cos²θ − sin²θ, equation (14) can be written as

[ Σ(y_i − ȳ)² − Σ(x_i − x̄)² ] (sin 2θ)/2 + Σ(x_i − x̄)(y_i − ȳ) cos 2θ = 0  (15)

so that

tan 2θ = 2 Σ(x_i − x̄)(y_i − ȳ) / [ Σ(x_i − x̄)² − Σ(y_i − ȳ)² ]  (16)

that is,

θ = (1/2) arctan( 2 Σ(x_i − x̄)(y_i − ȳ) / [ Σ(x_i − x̄)² − Σ(y_i − ȳ)² ] )
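The closed-form angle can be turned into a small routine. Note that the arctangent condition has two stationary solutions 90 degrees apart, the minimizer and the maximizer of E, so the sketch below (NumPy assumed; the function name is illustrative) evaluates E at both candidates and keeps the smaller:

```python
import numpy as np

def tls_line(x, y):
    # Total least squares line fit in normal form: x*cos(th) + y*sin(th) = rho.
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    u, v = x - x.mean(), y - y.mean()
    suu, svv, suv = (u * u).sum(), (v * v).sum(), (u * v).sum()
    # Stationary angles of E(th): tan(2*th) = 2*suv / (suu - svv); the two
    # solutions 90 degrees apart are the minimizer and maximizer of E.
    th0 = 0.5 * np.arctan2(2.0 * suv, suu - svv)
    def energy(th):
        return suu * np.cos(th)**2 + 2 * suv * np.sin(th) * np.cos(th) + svv * np.sin(th)**2
    th = min((th0, th0 + np.pi / 2), key=energy)
    rho = x.mean() * np.cos(th) + y.mean() * np.sin(th)
    return th, rho
```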
Fig. 3. Line fitting minimizing orthogonal distances from points to line (TLS).
C. Example

Consider the data given in Table I. We want to determine the line of total least squares for these points.

TABLE I
AN EXAMPLE WITH 7 POINTS.

point  x_i  y_i
1      3    7
2      4    7
3      5    11
4      6    11
5      7    15
6      8    16
7      9    19
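The centered sums that drive the angle formula (16) can be checked numerically for these seven points (Python with NumPy assumed):

```python
import numpy as np

# Table I data
x = np.array([3., 4., 5., 6., 7., 8., 9.])
y = np.array([7., 7., 11., 11., 15., 16., 19.])

u, v = x - x.mean(), y - y.mean()
suu = (u * u).sum()   # sum of (x_i - xbar)^2
svv = (v * v).sum()   # sum of (y_i - ybar)^2
suv = (u * v).sum()   # sum of (x_i - xbar)(y_i - ybar)
tan2theta = 2 * suv / (suu - svv)
```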
Fig. 4. Line fitting minimizing vertical distances from points to line (LS).
D. Matrix form to obtain the angle θ

Replacing ρ = x̄ cos θ + ȳ sin θ in equation (13), the error can be written using the Euclidean norm as

E(θ; Z) = ||A n||²  (17)

where A is a matrix of dimension n × 2,

A = [ x_1 − x̄   y_1 − ȳ
      …
      x_n − x̄   y_n − ȳ ]  (18)

and n is a vector,

n = [cos θ   sin θ]^T  (19)

where n^T denotes the transpose of vector n. Using the inner product to compute the norm, eq. (17) can be written as

E(θ; Z) = (A n)^T (A n) = n^T A^T A n = n^T M n  (22)

where M = A^T A, of dimension 2 × 2, is given by

M = [ Σ(x_i − x̄)²           Σ(x_i − x̄)(y_i − ȳ)
      Σ(x_i − x̄)(y_i − ȳ)   Σ(y_i − ȳ)²         ]  (23)

Let λ1 ≤ λ2 be the eigenvalues of M and v_1, v_2 the corresponding unit eigenvectors,

M v_1 = λ1 v_1  (24)

M v_2 = λ2 v_2  (25)
Let V be a matrix whose first column is v_1 and whose second
column is v_2, and let Λ be a diagonal matrix with elements λ1
and λ2. Using these matrices we can write the last two equations in a
simpler form
M V = V Λ  (26)
The orthonormal matrix V has an interesting property: its
inverse is its transpose (V^T V = V V^T = I, where I is the
identity matrix). Using this property and equation (26), the
matrix M can be expressed in terms of V and Λ,

M = V Λ V^T  (27)
Replacing the matrix M, given by equation (27), into equation (22),

E(θ; Z) = n^T V Λ V^T n = (V^T n)^T Λ (V^T n)  (28)

that is,

E(θ; Z) = λ1 (v_1^T n)² + λ2 (v_2^T n)²  (29)

To see the maximum and minimum value of E, suppose that λ1 < λ2. Taking into account that the inner product of two unit vectors equals the cosine of the angle between them, let α be the angle between v_1 and n, so that

v_1^T n = cos α  (30)

v_2^T n = sin α  (31)

since v_2 is orthogonal to v_1. Then

E(θ; Z) = λ1 cos²α + λ2 sin²α = λ1 cos²α + λ2 (1 − cos²α)  (32)

Since λ1 < λ2, the minimum of E is λ1, reached when cos²α = 1, that is, when n = ±v_1; the maximum is λ2, reached when n = ±v_2. Therefore the normal vector of the best line is the eigenvector of M associated with the smallest eigenvalue.

Finally, replacing ρ = x̄ cos θ + ȳ sin θ into the normal form (3), the best line satisfies

(x − x̄) cos θ + (y − ȳ) sin θ = 0  (36)

where z̄ = [x̄  ȳ]^T is the centroid of the points.
Note that the vector with coordinates [x − x̄, y − ȳ]^T must be orthogonal
to vector n, in order to satisfy equation (36); in particular, the centroid z̄ lies on the best line.
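This eigenvector route can be sketched in a few lines, assuming NumPy (`numpy.linalg.eigh` returns the eigenvalues of a symmetric matrix in ascending order, so the first column is the eigenvector we want; the function name is illustrative):

```python
import numpy as np

def tls_line_eig(points):
    # points: (n, 2) array of [x_i, y_i]; returns unit normal n and rho
    # with the best TLS line z . n = rho.
    z = np.asarray(points, float)
    zc = z - z.mean(axis=0)        # rows of A: points minus the centroid
    M = zc.T @ zc                  # M = A^T A, 2 x 2
    w, V = np.linalg.eigh(M)       # eigenvalues ascending for symmetric M
    n = V[:, 0]                    # eigenvector of the smallest eigenvalue
    rho = z.mean(axis=0) @ n       # the best line passes through the centroid
    return n, rho
```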
E. Example (cont.)
Continuing the example from Section III-C, we can
compute the matrix M,

M = [ 28.000    58.000
      58.000   125.429 ]  (37)
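The eigendecomposition of this matrix can be checked numerically (NumPy assumed):

```python
import numpy as np

M = np.array([[28.000, 58.000],
              [58.000, 125.429]])

eigvals, eigvecs = np.linalg.eigh(M)  # ascending order for a symmetric matrix
lam1 = eigvals[0]                     # smallest eigenvalue
n = eigvecs[:, 0]                     # its eigenvector: the line normal
```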
The eigenvalues of M are λ1 = 0.971 and λ2 = 152.458, and the unit eigenvector associated with λ1 is

v_1 = [0.906  −0.422]^T

Taking n = v_1 gives θ = arctan(−0.422/0.906) ≈ −25.0° and, from ρ = x̄ cos θ + ȳ sin θ with x̄ = 6 and ȳ = 12.286,

ρ ≈ 0.249

so the best line is 0.906 x − 0.422 y − 0.249 = 0, or, in slope-intercept form, y ≈ 2.146 x − 0.589.

Fig. 5. Finding the best lines of a set of points given by an IR sensor of a small
mobile robot.
V. SOME EXTENSIONS

A. Weighted total least squares

In section III-A we considered the same uncertainty for all
points. If we consider an uncertainty σ_i for point z_i,
i = 1, …, n, the best line minimizes

E(θ, ρ; Z) = Σ_{i=1}^{n} d_i(θ, ρ)² / σ_i²  (46)

Following a procedure similar to section III-B, we can get
the solution
θ = (1/2) arctan( 2 Σ w_i (x_i − x̄_w)(y_i − ȳ_w) / Σ w_i [ (x_i − x̄_w)² − (y_i − ȳ_w)² ] )  (47)

ρ = x̄_w cos θ + ȳ_w sin θ  (48)

where

x̄_w = ( Σ w_i x_i ) / ( Σ w_i )  (49)

ȳ_w = ( Σ w_i y_i ) / ( Σ w_i )  (50)

are the weighted means, with individual weights w_i = 1/σ_i²
for each measurement. This approach is known as weighted
total least squares.
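A sketch of the weighted fit, assuming NumPy; the function name is illustrative, and the test against the unweighted result (equal σ_i must reproduce the plain TLS line) is only a sanity check:

```python
import numpy as np

def weighted_tls_line(x, y, sigma):
    # Weighted TLS: weights w_i = 1/sigma_i^2; weighted means replace the
    # plain means in the closed-form angle of the unweighted case.
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma, float) ** 2
    xw = (w * x).sum() / w.sum()
    yw = (w * y).sum() / w.sum()
    u, v = x - xw, y - yw
    suu, svv, suv = (w * u * u).sum(), (w * v * v).sum(), (w * u * v).sum()
    th0 = 0.5 * np.arctan2(2.0 * suv, suu - svv)
    # pick the stationary angle that minimizes the weighted error
    def energy(t):
        return suu * np.cos(t)**2 + 2 * suv * np.sin(t) * np.cos(t) + svv * np.sin(t)**2
    th = min((th0, th0 + np.pi / 2), key=energy)
    rho = xw * np.cos(th) + yw * np.sin(th)
    return th, rho
```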
B. Fitting a set of points to a plane

The method to find the best line in the total least squares sense can be extended to fit a plane to a set of points p_i = [x_i  y_i  z_i]^T. A plane can be written in normal form,

x cos α + y cos β + z cos γ − ρ = 0  (51)

where ρ is the distance from the origin to the plane. The distance d_i from point p_i to the plane is given by

d_i = x_i cos α + y_i cos β + z_i cos γ − ρ  (52)

and the error to minimize is

E(ρ, α, β, γ; Z) = Σ_{i=1}^{n} d_i²  (53)
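As with the line, the best plane normal turns out to be the eigenvector of M = A^T A associated with the smallest eigenvalue, and the plane passes through the centroid. A minimal sketch with hypothetical data, assuming NumPy (the function name and test points are illustrative):

```python
import numpy as np

def tls_plane(points):
    # points: (n, 3) array of [x_i, y_i, z_i]; returns unit normal n and
    # offset rho with the best TLS plane p . n = rho.
    p = np.asarray(points, float)
    c = p.mean(axis=0)             # centroid: the best plane passes through it
    A = p - c                      # rows: points minus the centroid
    M = A.T @ A                    # M = A^T A, 3 x 3
    _, V = np.linalg.eigh(M)       # eigenvalues ascending for symmetric M
    n = V[:, 0]                    # eigenvector of the smallest eigenvalue
    rho = c @ n
    return n, rho
```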
Doing ∂E/∂ρ = 0, we get

ρ = x̄ cos α + ȳ cos β + z̄ cos γ  (54)

where x̄, ȳ and z̄ are the means of the corresponding coordinates. Replacing ρ, the error can again be written in matrix form,

E(n; Z) = ||A n||²  (55)

where A is a matrix of dimension n × 3, whose i-th row is (p_i − p̄)^T, and n is a vector,

n = [cos α  cos β  cos γ]^T  (57)

Note that n is a unit vector, because ||n||² = cos²α + cos²β + cos²γ = 1.

The best plane is given by n = v_1, the eigenvector of matrix M = A^T A associated to the smallest eigenvalue λ1. From n = v_1 we can obtain ρ, using equation (54),

ρ = p̄^T n  (58)

where p̄ = [x̄  ȳ  z̄]^T is the centroid of the points.

VI. CONCLUSION

This tutorial presented the Total Least Squares method to fit a straight line to a set of points, together with its extensions to weighted measurements and to planes, taking the uncertainties of all coordinates into account.

VII. REFERENCES

[1]
[2] G. H. Golub and C. F. Van Loan, "An analysis of the total least squares problem," SIAM J. Numer. Anal., vol. 17, no. 6, pp. 883-893, 1980.
[3]
[4]
[5]
[6]
[7]

VIII. BIOGRAPHIES