Linear Models and Design



To my parents
– J. S. Bach, Cantata 29, mvt. 2
Preface

This book is intended to give a graduate-level introduction to the theory of linear models and to basic concepts and results in the design of experiments. It takes a
distinctly different approach to its subject from other texts, and the purpose of this
preface is to alert the reader and provide a road map into the material.
Linear models encompass both regression models and factorial models. Ideally,
the reader will thus have some familiarity with regression and the analysis of
variance, along with a solid grounding in linear algebra and general mathematical
sophistication. Chapters 5 and 6 make significant but elementary use of finite fields
and commutative groups. An extensive mathematical appendix provides background
for the topics covered, and the reader is encouraged to consult it as well as the
comprehensive index, as needed. That said, everything is developed from scratch,
with examples to help make the material easy to absorb.
The text combines two subjects that, while related, are usually treated separately
at this level. Books on linear model theory typically don’t go into detail about topics
in experimental design, while texts in the latter generally give a relatively brief
introduction to linear models, or may even simply assume that material. One goal
of this text is to bring out the deep connections between these subjects.
While the reader will find many numerical examples in this book, it is probably
not suitable as a main text for an applied statistics course. Rather, it is intended for
use in a course emphasizing theory, and as a reference on the foundations of linear
models and design of experiments. Chapters 1 through 4 develop the basic, classical
results of linear model theory for both regression and factorial (or ANOVA) models,
with Chap. 2 concentrating on the latter. The emphasis is on fixed-effects models,
and only two sections are devoted to random effects (or variance components).
These four chapters form a reasonable one-semester course on linear models.
The design part of the book is in Chaps. 5 and 6. Chapter 5 deals with the
general theory of factorial designs and then develops the theory of confounding
with blocks. Chapter 6, which depends heavily on Chap. 5, is devoted to fractional
designs, aliasing, and resolution, along with an introduction to relative aberration
and other topics. Those two chapters are really the heart of the book, and were
my initial motivation for writing it. Excluding the appendix, these two chapters are almost half of the text, and along with Sects. 2.1–2.3 they could be the basis for a
special topics course in design.
This book does not by any means cover all topics of interest in linear models and
statistical design. My intention is to give the reader enough background to read more
advanced literature intelligently and critically. A major goal of the text is to provide
complete treatments of topics covered. Numerous references to the literature are
provided for students interested in pursuing a topic further.
A particular concern of mine in this field is terminology. I have addressed this
by (a) using common, consistent mathematical terminology and (b) explaining, in
numerous remarks and in passages such as Sects. 1.2.1 and 6.7, the terminology
that one may find in the literature. Hopefully, this book gives the student a helpful
lexicon.
Aside from its unusual combination of topics, this text presents the material in a
manner that differs in essential ways from other expositions. I want to flag some of
these differences now.
• Recall that a linear model is defined by an equation

E(Y) = Xβ,

where Y is an N × 1 vector of observations, β is a p × 1 vector of parameters (p ≤ N), and X is a design matrix (in factorial models) or a regression matrix. We will frequently view X not as a matrix but as a linear transformation from R^p (parameter space) to R^N (observation space). For reasons explained in the text, we will generally assume that X has full rank, and thus that the transformation is one-to-one.
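
As a minimal illustration (my example, not taken from the text): in a one-factor design with two treatments and two observations each, E(Y) = Xβ reads

\[
E\!\begin{pmatrix} Y_1\\ Y_2\\ Y_3\\ Y_4 \end{pmatrix}
=
\begin{pmatrix} 1 & 0\\ 1 & 0\\ 0 & 1\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \mu_1\\ \mu_2 \end{pmatrix},
\]

with N = 4, p = 2, and X of full rank.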
• As a map, X has an inverse of sorts, namely a linear transformation from R^N to R^p that is indeed an inverse when restricted to the range of X. This is the Moore-Penrose inverse of X, and is given by the matrix T = (X′X)⁻¹X′. It is introduced in Sect. 3.2, and makes a surprising number of appearances in the text (see the index). Except for a couple of exercises, this is the only “generalized inverse” we use. Generalized inverses typically arise in linear models because of the need to invert X′X, but that matrix will usually be invertible as X usually has full rank. I have included a brief discussion of generalized inverses near the end of Sect. 3.1, where I follow Rao [113].
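
A quick numerical check of this relationship, as a sketch in Python with NumPy (neither appears in the book; the matrix X below is an arbitrary full-rank example):

```python
import numpy as np

# An arbitrary full-rank design matrix with N = 4, p = 2 (illustration only).
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

# T = (X'X)^{-1} X', the Moore-Penrose inverse when X has full rank.
T = np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(T @ X, np.eye(2)))      # True: T inverts X on its range
print(np.allclose(T, np.linalg.pinv(X)))  # True: agrees with NumPy's pinv
```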
• I have separated the treatment of the identifiability of a linear expression c′β from its estimability. See Proposition 1.29 and Lemma 3.10. (We view vectors as columns, and ′ denotes transpose, so c′β is the dot product of c and β.)
• A linear hypothesis is often described as a set of equations of the form c′β = 0, or as an equation C′β = 0 where C is a matrix. Instead, we view such hypotheses as being statements of the form β ⊥ U or β ∈ W, where U and W are subspaces of R^p. This is justified in Sect. 1.7, and I define the term effect in Sect. 2.3 simply to be U. This is a basis-free way to describe such hypotheses, and my hope is that the utility of this approach will become clear in later chapters. The subspaces U and W, together with their images under X, are our main focus of study beginning in Chap. 4. They are the unifying theme of the book, uniting the two main topics of the title.
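
A small instance of my own, for concreteness: in a one-factor model with β = (μ_1, μ_2, μ_3)′, the hypothesis μ_1 = μ_2 = μ_3 is the statement

\[
\beta \perp U, \qquad U = \operatorname{span}\{(1,-1,0)',\ (0,1,-1)'\},
\]

or equivalently β ∈ W, where W = U^⊥ is spanned by (1, 1, 1)′.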
• In factorial models, the subspaces U will consist of contrast vectors (Definition 1.30). Among these are the spaces U_B where B is a partition (or blocking)
of the set T of treatment combinations. We define these block effects and develop
their basic properties in Sect. 5.2. They are fundamental in defining main effects
and interaction, and they allow us to express the notion of confounding in a very
general setting (Definition 5.40).
• Partially ordered sets, and in particular lattices, pervade the text. They arise
rather naturally with regard to linear hypotheses in both regression and factorial
models—see Sect. 4.3—and are necessary in any careful discussion of adjusted
and sequential sums of squares.
The lattice of partitions of a finite set (for us, the set of treatment combinations) is of central interest. The ordering on partitions that we use is actually the
opposite of the more common ordering, but more natural; see Theorem 5.5 and
Remark A.8.
Theorem 5.5 introduces one of two lattice maps that play a crucial role. The
other map is induced by the linear transformation X; see Proposition 4.15.
• When it comes to factorial models, we follow a cell-means approach fairly
strictly. This means that rather than writing models such as E(Y_ijk) = μ + α_i + β_j + γ_ij (which we call a factor-effects parametrization), we formulate everything in terms of the cell means μ_ij = E(Y_ijk) (μ_ij is the mean for cell ij). Bose [22]
defined main effects and interactions in exactly this way, and the approach we
use is essentially his.
Factor-effects models are discussed in Sect. 2.5 and the cell-means approach
in Sect. 2.6. The factor-effects parametrization is very common, and this
exposition should give students facility in toggling between that and the cell-means approach.
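
For example (a 2 × 2 illustration of my own): the hypothesis of no interaction is the single cell-means statement

\[
\mu_{11} - \mu_{12} - \mu_{21} + \mu_{22} = 0,
\]

whereas the factor-effects form writes μ_ij = μ + α_i + β_j + γ_ij and, under the usual side conditions, sets the γ_ij to zero.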
• If T is the set of cells (treatment combinations) in a factorial experiment, we
may denote the mean response in cell t by μ_t (as above) or by μ(t). Thus, the set of mean responses can be represented as a vector μ with components indexed by T or as a function μ defined on T. Section 5.1 leads the reader to this functional point of view and replaces the parameter space R^p by the space R^T consisting of real-valued functions on T. This risks adding a layer
of unfamiliarity to the material, but I have found that it is a more natural way to
deal with factorial experiments. Each approach has its advantages, and students
should feel comfortable with both.
(This approach is of course necessary in extending linear model theory to
functional data, but that is beyond the scope of this book.)
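
A toy sketch in Python (mine, purely illustrative) of the two representations of μ for a 2 × 3 factorial:

```python
from itertools import product

# The set T of treatment combinations of a 2 x 3 factorial; |T| = 6.
T = list(product((1, 2), (1, 2, 3)))

# An element of R^T assigns a real number to each cell of T.
mu = {(i, j): float(i + j) for (i, j) in T}  # arbitrary illustrative values

# Read as a vector: the component mu_t indexed by the cell t = (2, 3).
print(mu[(2, 3)])

# Read as a function mu: T -> R, evaluated at a cell t.
def mu_at(t):
    return mu[t]

print(mu_at((1, 2)))
```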
• There are two distinct views of aliasing and resolution in the literature. One
common view defines these concepts in terms of estimability and bias. We
emphatically do not follow this view, which we discuss extensively in Sect. 6.7.
The approach we follow springs essentially from Rao's seminal 1947 paper [111], in which he introduced the fundamental parameter of strength. Implicit in his writing is the operation of restriction, which we formalize at the beginning of
Chap. 6. This allows us to define aliasing in a very general way (Definition 6.4).
From this we proceed in two directions.
1. We derive the well-known theory of aliasing and resolution in so-called
regular fractions (Sects. 6.1 and 6.2). It should be noted that these were
originally developed without the use of the results in Rao’s 1947 paper:
aliasing in 2-level and 3-level designs in a 1945 paper [54] by Finney, and resolution in 2-level designs in a 1961 paper by Box and Hunter [27]. (Box and Hunter include Rao's paper in their bibliography but make no mention of it in their exposition.)
2. We develop the theory of aliasing and resolution in arbitrary fractions
(Sect. 6.5). Our general definition of aliasing allows us to interpret Box and
Hunter’s definition of resolution (Definition 6.25) in this general context, from
which we prove a Fundamental Theorem of Aliasing (Theorem 6.43) and a
host of related results. It is interesting to note how close Rao came to the
concept of resolution in his 1947 paper.

I have paid some attention throughout the text to the history of certain ideas,
notably the discovery by Barnard of a useful group structure in so-called 2^k
experiments. Her 1936 paper [10] led to generalizations over the next decade using
either geometry [23] or group theory [58, 59] and sparked an interest in applications
of algebra to design that continues to this day. The beginning of Chap. 5 includes
some biographical information about her, and I hope this book helps to spotlight the
significance of her contribution.
There are quite a few topics not covered in this text, including robustness,
nonnormality, dummy variables, and various designs (BIB and PBIB, split plot,
crossover, . . . ). On the other hand, I have included some topics that are less usual:
Identifiability and its relation to estimability. We include a careful discussion of
linear functions of a parameter.
Choice of weights in a two-factor design (Sect. 2.5.2). This analyzes Scheffé’s
discussion of the factor-effects model [121, pages 93–94].
Three elementary hypotheses (Sect. 2.2.1). These are used to build the usual
contrasts that define main effects in a two-factor experiment.
Associated hypotheses (Sect. 4.5.2). These were coined by Searle in [124] in his
critique of sequential sums of squares. We prove a general theorem showing that
these always exist and giving a method to find them.
The linear parametric function most responsible for rejecting H_0 (Sect. 4.8.2).
Scheffé gives a form of this in [121, page 72].
Complex contrast vectors (Sect. 5.6.5). The components of these vectors are sth roots of unity, and yield a generalization of Barnard's original result to p^k factorial designs when s = p, a prime. They are studied here primarily to lay the groundwork for so-called generalized wordlength patterns in nonregular fractions of p^k designs (Sect. 6.8.1).
An unexpected pattern in regular fractions (Sect. 6.4). This arises in “summing
contrasts over an alias set” in regular fractions, and offers a basis for the practice
of writing expressions such as A + BC for the alias set containing A and BC.
The main result (Theorem 6.35) is needed for an alternate approach to aliasing
discussed in Sect. 6.7 (see in particular Theorem 6.56).
Projections of regular fractions (Sect. 6.9). Theorem 6.80 (or its corollary) characterizes such projections in arbitrary regular fractions. While the result is generally accepted as known, a complete proof does not (until now) appear to be available in the literature.

Notation
We write vectors and matrices in boldface. As mentioned above, vectors are assumed to be column vectors except where explicitly stated otherwise, and ′ indicates transpose. The notation | · | means determinant if the argument is a matrix, and cardinality if the argument is a set. The complement of the set E is denoted by E^c.
“Dot notation” is used to indicate summation: x_ij· = Σ_k x_ijk, x_i·· = Σ_jk x_ijk, and so on. If the indices have ranges 1 ≤ i ≤ I, 1 ≤ j ≤ J, 1 ≤ k ≤ K, then we write x̄_ij· = (1/K) Σ_k x_ijk, x̄_i·· = (1/JK) Σ_jk x_ijk, etc., for averages.
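For instance, with K = 2 (a worked instance of mine): x_11· = x_111 + x_112, and x̄_11· = (x_111 + x_112)/2.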
We sometimes use the abbreviation iff for “if and only if.” We assume the reader
is familiar with the quantifiers ∀ (“for all”) and ∃ (“there exists”).
Other notation is introduced in the text, and is referenced in the index.

Acknowledgments
I thank George Seber for a number of helpful discussions over the years concerning
the Wald statistic and other topics. This book was partly inspired by his text [128].
I also thank Angela Dean and Dan Voss for sharing their insights. Aside from their
publications, Voss’s thesis [144] provides a very readable guide to various methods
of confounding.
Thanks to Rahul Mukerjee and Boxin Tang for helpful discussions regarding
fractional designs. The recent book [98] by Mukerjee and Wu is an excellent intro-
duction to regular designs (and especially to minimum aberration), and influenced
some of my presentation of that material.
I am grateful to Terry Speed for alerting me to the survey article [53] containing
biographical information about Mildred M. Barnard.
Thanks to John Aldrich and Jeff Miller for help with historical information about
identifiability and the Gauss-Markov Theorem. A website on “Earliest Uses of Some
Words of Mathematics,” originally developed by Jeff, is now maintained at https://mathshistory.st-andrews.ac.uk/Miller/mathword/.
I am indebted to the staff at Springer Nature, and especially to my editor,
Dr. Eva Hiripi, for their thoughtful handling of this book as they moved it from manuscript to finished product. I greatly appreciate their responsiveness to my numerous questions.
I am grateful to the Department of Mathematical Sciences and the College of
Letters and Science at the University of Wisconsin—Milwaukee for the sabbaticals
(more than one!) necessary to see this work to completion. I would also like to thank
my students, especially Steffen Domke and Xinran Qi, for comments and questions
(and corrections) that led me to tighten my exposition.
Finally, thanks to my family—my wife Dena and our sons Daniel, Jesse, and
Naftali—for their love and support. My writing has benefitted both here and
elsewhere from Dena’s critical reading and keen suggestions. Special thanks to
Naftali for the graphics in Fig. 2.2, and to Daniel for the Bach quote above. This
book is dedicated to my parents, Bebe and Sy, who gave me my first exposure to
math and taught me to value critical thinking.
It has been suggested [133] that “No matter how many times you proofread
your book, after publication there will be roughly one typo per page.” With that
humbling thought in mind, I welcome questions, comments, and corrections, which
can be emailed to me at [email protected]. Should it be necessary, I will establish
a webpage with comments and corrections (and maybe hints to certain exercises),
located at https://sites.uwm.edu/beder/.

Milwaukee, WI, USA Jay H. Beder


Contents

1 Linear Models ... 1
   1.1 Random Vectors ... 1
   1.2 Some Statistical Concepts ... 7
      1.2.1 Identifiability ... 7
      1.2.2 Estimation ... 8
      1.2.3 Testing: Confidence Sets ... 8
   1.3 Linear Models ... 10
   1.4 Regression Models ... 13
   1.5 Factorial (ANOVA) Models ... 15
   1.6 Linear Parametric Functions ... 17
      1.6.1 Identifiability ... 18
      1.6.2 Contrasts ... 19
   1.7 Linear Constraints and Hypotheses ... 20
   1.8 Exercises ... 25
2 Effects in a Factorial Experiment ... 27
   2.1 One-Factor Designs ... 27
   2.2 Two-Factor Designs ... 28
      2.2.1 Three Elementary Hypotheses ... 28
      2.2.2 The Main Effects Hypotheses ... 31
      2.2.3 Effects in a Two-Factor Design ... 35
   2.3 Multifactor Designs ... 36
   2.4 Quantitative Factors ... 39
   2.5 The Factor-Effects Parametrization ... 41
      2.5.1 The One-Factor Model ... 42
      2.5.2 Models with Two Factors ... 44
      2.5.3 Models with Three or More Factors ... 47
      2.5.4 The Different Systems of Weights ... 49
   2.6 The Cell-Means Philosophy ... 53
   2.7 Random Effects: Components of Variance ... 54
   2.8 Exercises ... 58

3 Estimation ... 61
   3.1 The Method of Least Squares ... 62
   3.2 Properties of Least Squares Estimators ... 66
      3.2.1 Estimability ... 68
      3.2.2 The Moore-Penrose Inverse of X ... 69
   3.3 Estimating σ²: Sum of Squares, Mean Square ... 70
   3.4 t-Tests, t-Intervals ... 75
   3.5 Exercises ... 76
4 Testing ... 79
   4.1 Testing a Linear Hypothesis ... 80
      4.1.1 Testing in Constrained Models ... 85
      4.1.2 Replication. Lack of Fit and Pure Error ... 85
   4.2 The Wald Statistic ... 88
   4.3 The Lattice of Hypotheses ... 91
      4.3.1 The Two-Predictor Regression Model ... 92
      4.3.2 The Three-Predictor Regression Model ... 92
      4.3.3 The Two-Factor Design ... 93
      4.3.4 The Hypothesis of No Model Effect ... 93
      4.3.5 The Lattice in the Observation Space ... 94
   4.4 Adjusted SS ... 94
   4.5 Nested Hypotheses. Sequential SS ... 98
      4.5.1 Adjusted or Sequential? ... 106
      4.5.2 Associated Hypotheses ... 108
   4.6 Orthogonal Hypotheses ... 112
      4.6.1 Application: Factorial Designs ... 119
      4.6.2 Application: Regression Models ... 123
   4.7 Affine Hypotheses. Confidence Sets ... 124
   4.8 Simultaneous Inference ... 127
      4.8.1 The Bonferroni Method ... 128
      4.8.2 The Scheffé Method ... 130
   4.9 Inference for Variance Components Models ... 137
      4.9.1 The One-Factor Design ... 137
      4.9.2 The Two-Factor Design ... 141
      4.9.3 Some Challenges ... 144
   4.10 Exercises ... 146
5 Multifactor Designs ... 151
   5.1 Vectors and Functions: Notation ... 152
   5.2 The General Theory of Block Effects ... 153
   5.3 The Multifactor Design: Reprise ... 161
   5.4 The Kurkjian-Zelen Construction ... 166
      5.4.1 Multilinear Algebra ... 168
      5.4.2 Application to Factorial Designs ... 172

   5.5 Confounding with Blocks ... 176
      5.5.1 Generalities ... 176
   5.6 Confounding in Classical Symmetric Factorial Designs ... 180
      5.6.1 Special Blockings and Block Effects ... 180
      5.6.2 Components of Interaction ... 185
      5.6.3 Finding Generalized Interactions ... 190
      5.6.4 Multiplicative Notation: The Effects Group ... 193
      5.6.5 Complex Contrasts ... 198
   5.7 Confounding in Arbitrary Factorial Designs ... 203
      5.7.1 A Survey of Methods of Confounding ... 204
      5.7.2 The Problem of Confounding ... 211
   5.8 Exercises ... 213
6 Fractional Factorial Designs ... 219
   6.1 Aliasing in Regular Fractions ... 223
      6.1.1 Multiplicative Notation Again: The Defining Subgroup ... 232
   6.2 The Resolution of a Regular Fraction ... 236
   6.3 Construction of Regular Fractions ... 241
   6.4 An Unexpected Pattern ... 244
   6.5 Aliasing and Resolution in Arbitrary Fractions ... 249
      6.5.1 Restriction Maps ... 249
      6.5.2 Strength and Aliasing ... 250
      6.5.3 Resolution ... 256
   6.6 Aliasing and Confounding ... 259
   6.7 Aliasing, Estimability and Bias ... 260
   6.8 Relative Aberration ... 267
      6.8.1 In Nonregular Designs ... 271
   6.9 Projections ... 279
      6.9.1 In Non-regular Designs ... 286
   6.10 Exercises ... 287
A Mathematical Background ... 291
   A.1 Functions ... 291
   A.2 Relations ... 292
      A.2.1 Partially Ordered Sets and Lattices ... 292
      A.2.2 Equivalence Relations and Partitions ... 295
   A.3 Algebra ... 297
   A.4 Linear Algebra ... 300
      A.4.1 Subspaces and Bases ... 301
      A.4.2 Linear Transformations ... 303
      A.4.3 Inner Product Spaces ... 308
      A.4.4 The Transpose. Symmetric Maps ... 316
      A.4.5 Positive Maps ... 319

      A.4.6 The Spectral Theorem ... 319
      A.4.7 Orthogonal Projections ... 322
      A.4.8 Linear Algebra in R^n. Matrices ... 324
Bibliography ... 331
Index ... 337
List of Figures

Fig. 2.1 The lattice of hypotheses of an a × b design ... 34
Fig. 2.2 Basic blockings (partitions) of an a × b × c design ... 37
Fig. 3.1 The relationships between X and T = (X′X)⁻¹X′ ... 70
Fig. 4.1 Geometric representation of Y, Ŷ, and Ŷ_0 ... 80
Fig. 4.2 Lattices of subspaces ... 95
