
JOURNAL OF RESEARCH of the National Bureau of Standards - B. Mathematics and Mathematical Physics
Vol. 67B, No. 1, January-March 1963

An Algorithm for Obtaining an Orthogonal Set of Individual Degrees of Freedom for Error

Joseph M. Cameron

(November 20, 1962)

This note presents an algorithm based on the Gram-Schmidt orthonormalization procedure for producing the coefficients of linear combinations of observations which can be used for computing an orthogonal set of individual degrees of freedom for error from a set of observations.

In the analysis of data from designed experiments it is becoming common to compute the deviations between the observed and predicted values. Because of the correlation among these residuals it is sometimes easier to interpret a set of independent individual degrees of freedom, particularly if the residuals are to be used to study the state of statistical control of a measurement process. This note presents an algorithm for producing such an orthogonal set.
Let the observations $Y_1, Y_2, \ldots, Y_n$ have expected values

$E(Y_1) = \beta_0 X_{01} + \beta_1 X_{11} + \cdots + \beta_k X_{k1}$

$E(Y_2) = \beta_0 X_{02} + \beta_1 X_{12} + \cdots + \beta_k X_{k2}$

$\vdots$

which can be written in matrix notation as

$E(y) = X\beta$

where $y$ is the vector of observations, $X$ the array of $X_{ij}$'s, and $\beta$ a vector of parameters.
The deviation, $\delta_i$, between the observed and predicted value is then

$\delta_i = Y_i - (\hat\beta_0 X_{0i} + \hat\beta_1 X_{1i} + \cdots + \hat\beta_k X_{ki})$

or in matrix form

$\delta = y - X\hat\beta$

where $\hat\beta$ is the vector of estimates of the parameters of the system. Noting that the $\hat\beta_i$ are functions of the observations, each $\delta_i$ can also be written as a function of the observations, say

$\delta = Ay.$

To say that there are $m$ degrees of freedom for error is to say that $A$ has rank $m$, or that the rows of $A$ are linear combinations of a set of $m$ orthogonal rows. These $m$ orthogonal rows give the coefficients necessary for computing the individual degrees of freedom for error.
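(The note does not write $A$ out explicitly. For a least-squares fit with $X$ of full column rank it is the familiar residual-forming matrix

$A = I - X(X'X)^{-1}X',$

which is symmetric and idempotent with rank $n - k - 1$; when $X$ is not of full rank, as in the constrained example below, $(X'X)^{-1}$ is replaced by a generalized inverse.)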
The orthogonalization can be carried out by an application of the Gram-Schmidt process,¹ a modification of which follows.

¹ Philip Davis and Philip Rabinowitz, A multiple purpose orthonormalizing code and its uses, Journal of the Association for Computing Machinery 1, 183-191 (1954).

The deviations can be written as functions of the observations as follows:

$\delta_1 = a_{11}Y_1 + a_{12}Y_2 + \cdots + a_{1n}Y_n$

$\delta_2 = a_{21}Y_1 + a_{22}Y_2 + \cdots + a_{2n}Y_n$

$\vdots$

Consider now the vectors

$a_1 = (a_{11}, a_{12}, \ldots, a_{1n})$

$a_2 = (a_{21}, a_{22}, \ldots, a_{2n})$

$\vdots$

and denote the sums of squares and cross products as follows:

$(a_k \cdot a_k) = \sum_{i=1}^{n} a_{ki}^2$

$(a_k \cdot a_l) = \sum_{i=1}^{n} a_{ki} a_{li}.$
Form the vectors

$b_1 = a_1$

$b_2 = (a_1 \cdot a_1)\,a_2 - (a_1 \cdot a_2)\,a_1$

$b_3 = (a_1 \cdot a_1)\,a_3 - (a_1 \cdot a_3)\,a_1$

$\vdots$

where the vectors $b_i$ are the vector sums of the indicated vectors, e.g., the elements of $b_2$ are $b_{2i} = (a_1 \cdot a_1)\,a_{2i} - (a_1 \cdot a_2)\,a_{1i}$.

Next form

$c_1 = b_1$

$c_2 = b_2$

$c_3 = (b_2 \cdot b_2)\,b_3 - (b_2 \cdot b_3)\,b_2$

$c_4 = (b_2 \cdot b_2)\,b_4 - (b_2 \cdot b_4)\,b_2$

$\vdots$

Then

$d_1 = c_1$

$d_2 = c_2$

$d_3 = c_3$

$d_4 = (c_3 \cdot c_3)\,c_4 - (c_3 \cdot c_4)\,c_3$

$d_5 = (c_3 \cdot c_3)\,c_5 - (c_3 \cdot c_5)\,c_3$

$\vdots$

If in the process one of the vectors becomes all zeros, it should be transferred to the end and the vectors renumbered accordingly.
This process is continued for $m$ steps, at which point the last $n - m$ vectors will be zero vectors and the first $m$ will be the desired orthogonal set. It is easy to verify that at the $k$th stage the first $(k-1)$ vectors are mutually orthogonal and orthogonal to all vectors below.
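To make the sweep concrete, the following is a minimal Python sketch of the procedure (not part of the original note; the function name orthogonal_dof and the use of NumPy are choices made here). It omits the convenient rescaling factors used in the worked example below, so its output agrees with the vectors in the text only up to positive multiples.

import numpy as np

def orthogonal_dof(rows, m, tol=1e-9):
    """Sketch of the note's orthogonalization.

    rows: the coefficient vectors a_1, ..., a_n of the deviations (or of any
          set of linear functions with zero expectation), one per row.
    m:    the number of degrees of freedom for error.

    At stage k the pivot vector v_k replaces every later vector v_j by
    (v_k . v_k) v_j - (v_k . v_j) v_k, and any vector that has become all
    zeros is transferred to the end, as the text prescribes.  After m stages
    the first m vectors form the desired orthogonal set.
    """
    v = [np.asarray(r, dtype=float) for r in rows]
    for k in range(m):
        pivot = v[k]
        pp = pivot @ pivot
        for j in range(k + 1, len(v)):
            v[j] = pp * v[j] - (pivot @ v[j]) * pivot
        # transfer any all-zero vectors to the end and renumber accordingly
        tail = v[k + 1:]
        v = v[:k + 1] + [x for x in tail if np.abs(x).max() > tol] \
                      + [x for x in tail if np.abs(x).max() <= tol]
    return v[:m]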
To illustrate, consider the design where all differences among four objects are measured, i.e.,

$E(Y_1) = \beta_1 - \beta_2$
$E(Y_2) = \beta_1 - \beta_3$
$E(Y_3) = \beta_1 - \beta_4$
$E(Y_4) = \beta_2 - \beta_3$
$E(Y_5) = \beta_2 - \beta_4$
$E(Y_6) = \beta_3 - \beta_4.$

The following table shows the estimates of the $\beta$'s under the assumption that $\sum \beta = 0$.

                                                     Y_1   Y_2   Y_3   Y_4   Y_5   Y_6
$4\hat\beta_1$                                         1     1     1     0     0     0
$4\hat\beta_2$                                        -1     0     0     1     1     0
$4\hat\beta_3$                                         0    -1     0    -1     0     1
$4\hat\beta_4$                                         0     0    -1     0    -1    -1
---------------------------------------------------------------------------------------
$4\delta_1 = 4Y_1 - 4(\hat\beta_1 - \hat\beta_2)$      2    -1    -1     1     1     0
$4\delta_2 = 4Y_2 - 4(\hat\beta_1 - \hat\beta_3)$     -1     2    -1    -1     0     1
$4\delta_3 = 4Y_3 - 4(\hat\beta_1 - \hat\beta_4)$     -1    -1     2     0    -1    -1
$4\delta_4 = 4Y_4 - 4(\hat\beta_2 - \hat\beta_3)$      1    -1     0     2    -1     1
$4\delta_5 = 4Y_5 - 4(\hat\beta_2 - \hat\beta_4)$      1     0    -1    -1     2    -1
$4\delta_6 = 4Y_6 - 4(\hat\beta_3 - \hat\beta_4)$      0     1    -1     1    -1     2

From this table one has, for example:

$\hat\beta_1 = \tfrac{1}{4}(Y_1 + Y_2 + Y_3)$

$\delta_1 = \tfrac{1}{4}(2Y_1 - Y_2 - Y_3 + Y_4 + Y_5)$

The matrix $A$ is then

$a_1 = (2, -1, -1, 1, 1, 0)$        $(a_1 \cdot a_1) = 8$
$a_2 = (-1, 2, -1, -1, 0, 1)$       $(a_1 \cdot a_2) = -4$
$a_3 = (-1, -1, 2, 0, -1, -1)$      $(a_1 \cdot a_3) = -4$
$a_4 = (1, -1, 0, 2, -1, 1)$        $(a_1 \cdot a_4) = 4$
$a_5 = (1, 0, -1, -1, 2, -1)$       $(a_1 \cdot a_5) = 4$
$a_6 = (0, 1, -1, 1, -1, 2)$        $(a_1 \cdot a_6) = 0$

The first step gives

$b_1 = a_1 = (2, -1, -1, 1, 1, 0)$
$b_2 = \tfrac{1}{4}(8a_2 + 4a_1) = (0, 3, -3, -1, 1, 2)$      $(b_2 \cdot b_2) = 24$
$b_3 = \tfrac{1}{4}(8a_3 + 4a_1) = (0, -3, 3, 1, -1, -2)$     $(b_2 \cdot b_3) = -24$
$b_4 = \tfrac{1}{4}(8a_4 - 4a_1) = (0, -1, 1, 3, -3, 2)$      $(b_2 \cdot b_4) = -8$
$b_5 = \tfrac{1}{4}(8a_5 - 4a_1) = (0, 1, -1, -3, 3, -2)$     $(b_2 \cdot b_5) = 8$
$b_6 = \tfrac{1}{8}(8a_6) = (0, 1, -1, 1, -1, 2)$             $(b_2 \cdot b_6) = 8.$

The vectors $b_2, b_3, \ldots$ are multiplied by factors (e.g., 1/4, 1/4, ...) to keep the numbers conveniently small.
The second step gives

$c_1 = b_1 = (2, -1, -1, 1, 1, 0)$
$c_2 = b_2 = (0, 3, -3, -1, 1, 2)$
$c_3 = 24b_3 + 24b_2 = (0, 0, 0, 0, 0, 0)$
$c_4 = \tfrac{1}{64}(24b_4 + 8b_2) = (0, 0, 0, 1, -1, 1)$
$c_5 = \tfrac{1}{64}(24b_5 - 8b_2) = (0, 0, 0, -1, 1, -1)$
$c_6 = \tfrac{1}{32}(24b_6 - 8b_2) = (0, 0, 0, 1, -1, 1)$

Vector $c_3$ is transferred to the end to give

$c_1 = (2, -1, -1, 1, 1, 0)$
$c_2 = (0, 3, -3, -1, 1, 2)$
$c_3 = (0, 0, 0, 1, -1, 1)$       $(c_3 \cdot c_3) = 3$
$c_4 = (0, 0, 0, -1, 1, -1)$      $(c_3 \cdot c_4) = -3$
$c_5 = (0, 0, 0, 1, -1, 1)$       $(c_3 \cdot c_5) = 3$
$c_6 = (0, 0, 0, 0, 0, 0)$        $(c_3 \cdot c_6) = 0$

The next stage gives

$d_1 = c_1 = (2, -1, -1, 1, 1, 0)$
$d_2 = c_2 = (0, 3, -3, -1, 1, 2)$
$d_3 = c_3 = (0, 0, 0, 1, -1, 1)$
$d_4 = 3c_4 + 3c_3 = (0, 0, 0, 0, 0, 0)$
$d_5 = 3c_5 - 3c_3 = (0, 0, 0, 0, 0, 0)$
$d_6 = 0 = (0, 0, 0, 0, 0, 0)$

with $d_1, d_2, d_3$ being the desired set of orthogonal vectors.
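As a check on the arithmetic, the sketch given after the general description can be applied to the rows of $4A$ from the table (the factor 4 only rescales the rows); with m = 3 the three surviving vectors come out proportional to $d_1$, $d_2$, $d_3$.

# uses numpy and the orthogonal_dof sketch defined earlier
A4 = np.array([
    [ 2, -1, -1,  1,  1,  0],
    [-1,  2, -1, -1,  0,  1],
    [-1, -1,  2,  0, -1, -1],
    [ 1, -1,  0,  2, -1,  1],
    [ 1,  0, -1, -1,  2, -1],
    [ 0,  1, -1,  1, -1,  2],
], dtype=float)

for row in orthogonal_dof(A4, m=3):
    row = row / np.min(np.abs(row[np.abs(row) > 1e-9]))   # rescale to small integers
    print(np.round(row).astype(int))
# prints [ 2 -1 -1  1  1  0], [ 0  3 -3 -1  1  2], [ 0  0  0  1 -1  1]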


It is worth noting that one need not start with the deviations, but may begin with any set of linear functions having zero expectation and rank equal to the number of degrees of freedom for error. For the example, the three vectors below could have been used.

$a_1 = (1, -1, 0, 1, 0, 0)$
$a_2 = (0, 0, 0, 1, -1, 1)$
$a_3 = (0, 1, -1, 0, 0, 1)$

The steps in the orthogonalization are

$b_1 = a_1 = (1, -1, 0, 1, 0, 0)$
$b_2 = 3a_2 - a_1 = (-1, 1, 0, 2, -3, 3)$
$b_3 = 3a_3 + a_1 = (1, 2, -3, 1, 0, 3)$

$c_1 = b_1 = (1, -1, 0, 1, 0, 0)$
$c_2 = b_2 = (-1, 1, 0, 2, -3, 3)$
$c_3 = \tfrac{1}{36}(24b_3 - 12b_2) = (1, 1, -2, 0, 1, 1).$
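The same check applies here; assuming the orthogonal_dof sketch from earlier, feeding in these three rows with m = 3 returns vectors proportional to $c_1$, $c_2$, $c_3$, and their pairwise dot products are zero.

alt = np.array([
    [1, -1,  0, 1,  0, 0],
    [0,  0,  0, 1, -1, 1],
    [0,  1, -1, 0,  0, 1],
], dtype=float)

c = orthogonal_dof(alt, m=3)
print([round(float(ci @ cj), 6) for i, ci in enumerate(c) for cj in c[i + 1:]])   # -> [0.0, 0.0, 0.0]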

(Paper 67B1-S9)
