
Statistics for Linguistics with R


Trends in Linguistics
Studies and Monographs 208

Editors
Walter Bisang
Hans Henrich Hock
Werner Winter

Mouton de Gruyter
Berlin · New York
Statistics for Linguistics with R
A Practical Introduction

by
Stefan Th. Gries

Mouton de Gruyter
Berlin · New York
Mouton de Gruyter (formerly Mouton, The Hague)
is a Division of Walter de Gruyter GmbH & Co. KG, Berlin.


∞ Printed on acid-free paper which falls within the guidelines
of the ANSI to ensure permanence and durability.

Library of Congress Cataloging-in-Publication Data

Gries, Stefan Thomas, 1970–


Statistics for linguistics with R : a practical introduction / by
Stefan Th. Gries
p. cm. – (Trends in linguistics. Studies and monographs ; 208)
Includes bibliographical references and index.
ISBN 978-3-11-020564-0 (hardcover : alk. paper)
ISBN 978-3-11-020565-7 (pbk. : alk. paper)
1. Linguistics – Statistical methods. 2. R (Computer program
language) I. Title.
P138.5.G75 2009
401.211–dc22
2009037473

ISBN 978-3-11-020564-0
ISSN 1861-4302

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available in the Internet at http://dnb.d-nb.de.

© Copyright 2009 by Walter de Gruyter GmbH & Co. KG, D-10785 Berlin
All rights reserved, including those of translation into foreign languages. No part of this
book may be reproduced or transmitted in any form or by any means, electronic or mechan-
ical, including photocopy, recording or any information storage and retrieval system, with-
out permission in writing from the publisher.
Cover design: Christopher Schneider, Laufen.
Printed in Germany.
Preface

This book is the revised and extended version of Gries (2008). The main
changes that were implemented are concerned with Chapter 5, which I am
now much happier with. I thank Benedikt Szmrecsanyi for his advice re-
garding logistic regressions, Stefan Evert for a lot of instructive last-minute
feedback, Michael Tewes for input and discussion of an early version of the
German edition of this book, Dagmar S. Divjak for a lot of discussion of
very many statistical methods, and the many students and participants of
many quantitative methods classes and workshops for feedback. I will
probably wish I had followed more of the advice I was offered. Last but
certainly not least, I am deeply grateful to the R Development Core Team
for R, a simply magnificent piece of software, and R. Harald Baayen for
exposing me to R the first time some years ago.
Contents
Preface .......................................................................................................... v

Chapter 1
Some fundamentals of empirical research .................................................... 1
1. Introduction.................................................................................. 1
2. On the relevance of quantitative methods in linguistics .............. 3
3. The design and the logic of quantitative studies .......................... 7
3.1 Scouting ....................................................................................... 8
3.2. Hypotheses and operationalization ............................................ 10
3.2.1. Scientific hypotheses in text form ............................................. 11
3.2.2. Operationalizing your variables ................................................. 14
3.2.3. Scientific hypotheses in statistical/mathematical form .............. 18
3.3. Data collection and storage ........................................................ 24
3.4. The decision ............................................................................... 29
3.4.1. Overview: discrete probability distributions.............................. 33
3.4.2. Extension: continuous probability distributions ........................ 44
4. The design of an experiment: introduction ................................ 48
5. The design of an experiment: another example ......................... 54

Chapter 2
Fundamentals of R ...................................................................................... 58
1. Introduction and installation ...................................................... 58
2. Functions and arguments ........................................................... 62
3. Vectors ....................................................................................... 66
3.1. Generating vectors in R ............................................................. 66
3.2. Loading and saving vectors in R................................................ 71
3.3. Editing vectors in R ................................................................... 74
4. Factors ....................................................................................... 82
4.1. Generating factors in R .............................................................. 83
4.2. Loading and saving factors in R ................................................ 83
4.3. Editing factors in R .................................................................... 84
5. Data frames ................................................................................ 85
5.1. Generating data frames in R ...................................................... 85
5.2. Loading and saving data frames in R......................................... 88
5.3. Editing data frames in R ............................................................ 90

Chapter 3
Descriptive statistics ................................................................................... 96
1. Univariate statistics.................................................................... 96
1.1. Frequency data ........................................................................... 96
1.1.1. Scatterplots and line plots .......................................................... 98
1.1.2. Pie charts.................................................................................. 102
1.1.3. Bar plots ................................................................................... 102
1.1.4. Pareto-charts ............................................................................ 104
1.1.5. Histograms ............................................................................... 105
1.2. Measures of central tendency .................................................. 106
1.2.1. The mode ................................................................................. 106
1.2.2. The median .............................................................................. 107
1.2.3. The arithmetic mean ................................................................ 107
1.2.4. The geometric mean ................................................................ 108
1.3. Measures of dispersion ............................................................ 111
1.3.1. Relative entropy ....................................................................... 112
1.3.2. The range ................................................................................. 113
1.3.3. Quantiles and quartiles ............................................................ 113
1.3.4. The average deviation .............................................................. 115
1.3.5. The standard deviation ............................................................. 116
1.3.6. The variation coefficient .......................................................... 117
1.3.7. Summary functions .................................................................. 118
1.3.8. The standard error .................................................................... 119
1.4. Centering and standardization (z-scores) ................................. 121
1.5. Confidence intervals ................................................................ 123
1.5.1. Confidence intervals of arithmetic means ............................... 124
1.5.2. Confidence intervals of percentages ........................................ 125
2. Bivariate statistics .................................................................... 127
2.1. Frequencies and crosstabulation .............................................. 127
2.1.1. Bar plots and mosaic plots ....................................................... 129
2.1.2. Spineplots ................................................................................ 130
2.1.3. Line plots ................................................................................. 131
2.2. Means....................................................................................... 132
2.2.1. Boxplots ................................................................................... 133
2.2.2. Interaction plots ....................................................................... 134
2.3. Coefficients of correlation and linear regression ..................... 138

Chapter 4
Analytical statistics ................................................................................... 148

1. Distributions and frequencies .................................................. 148


1.1. Distribution fitting ................................................................... 148
1.1.1. One dep. variable (ratio-scaled) ............................................... 148
1.1.2. One dep. variable (nominal/categorical) .................................. 151
1.2. Tests for differences/independence ......................................... 158
1.2.1. One dep. variable (ordinal/interval/ratio scaled) and one
indep. variable (nominal) (indep. samples) ............................. 159
1.2.2 One dep. variable (nominal/categorical) and one indep.
variable (nominal/categorical) (indep. samples) ...................... 165
1.2.3. One dep. variable (nominal/categorical) (dep. samples) ......... 180
2. Dispersions .............................................................................. 184
2.1. Goodness-of-fit test for one dep. variable (ratio-scaled) ......... 185
2.2. One dep. variable (ratio-scaled) and one indep.
variable (nominal).................................................................... 187
3. Means....................................................................................... 194
3.1. Goodness-of-fit tests ................................................................ 195
3.1.1. One dep. variable (ratio-scaled) ............................................... 195
3.1.2. One dep. variable (ordinal) ...................................................... 199
3.2. Tests for differences/independence ......................................... 204
3.2.1. One dep. variable (ratio-scaled) and one indep. variable
(nominal) (indep. samples) ...................................................... 205
3.2.2. One dep. variable (ratio-scaled) and one indep. variable
(nominal) (dep. samples) ......................................................... 212
3.2.3. One dep. variable (ordinal) and one indep. variable
(nominal) (indep. samples) ...................................................... 218
3.2.4. One dep. variable (ordinal) and one indep. variable
(nominal) (dep. samples) ......................................................... 226
4. Coefficients of correlation and linear regression ..................... 230
4.1. The significance of the product-moment correlation ............... 231
4.2. The significance of Kendall’s Tau ........................................... 234
4.3. Correlation and causality ......................................................... 236

Chapter 5
Selected multifactorial methods ............................................................... 238
1. The multifactorial analysis of frequency data.......................... 240
1.1. Configural frequency analysis ................................................. 240
1.2. Hierarchical configural frequency analysis ............................. 248
2. Multiple regression analysis .................................................... 252
3. ANOVA (analysis of variance)................................................ 274

3.1. Monofactorial ANOVA ........................................................... 275


3.2. Two-/multifactorial ANOVA .................................................. 284
4. Binary logistic regression ........................................................ 291
5. Hierarchical agglomerative cluster analysis ............................ 306

Chapter 6
Epilog........................................................................................................ 320

References ................................................................................................ 323


Function index .......................................................................................... 329
Chapter 1
Some fundamentals of empirical research

When you can measure what you are speaking about, and express it in
numbers, you know something about it; but when you cannot measure it,
when you cannot express it in numbers, your knowledge is of a meager and
unsatisfactory kind.
It may be the beginning of knowledge, but you have scarcely,
in your thoughts, advanced to the stage of science.
William Thomson, Lord Kelvin.
(<http://hum.uchicago.edu/~jagoldsm/Webpage/index.html>)

1. Introduction

This book is an introduction to statistics. However, there are already very
many introductions to statistics – why do we need another one? Well, this
book is different from most other introductions to statistics in several ways:

− it has been written especially for linguists: there are many introductions
to statistics for psychologists, economists, biologists etc., but only very
few which, like this one, explain statistical concepts and methods on the
basis of linguistic questions and for linguists;
− it explains how to do nearly all of the statistical methods both ‘by hand’
as well as with statistical software, but it requires neither mathematical
expertise nor hours of trying to understand complex equations – many
introductions devote much time to mathematical foundations (and, thus,
make everything more difficult for the novice), others do not explain
any foundations and immediately dive into some nicely designed soft-
ware, which often hides the logic of statistical tests behind a nice GUI;
− it not only explains statistical concepts, tests, and graphs, but also the
design of tables to store and analyze data, summarize previous litera-
ture, and some very basic aspects of experimental design;
− it only uses open source software (mainly R): many introductions use
SAS or in particular SPSS, which come with many disadvantages such
that (i) users must buy expensive licenses that are restricted in how many
functions they offer, how many data points they can handle, and how long
they can be used; (ii) students and professors may be able
to use the software only on campus; (iii) they are at the mercy of the
software company with regard to bugfixes and updates etc.;
− it does all this in an accessible and informal way: I try to avoid jargon
wherever possible; the use of software will be illustrated in very much
detail, and there are think breaks, warnings, exercises (with answer keys
on the companion website), and recommendations for further reading
etc. to make everything more accessible.

So, this book aims to help you do scientific quantitative research. It is
structured as follows. Chapter 1 introduces the foundations of quantitative
studies: what are variables and hypotheses, what is the structure of quantit-
ative studies and what kind of reasoning underlies it, how do you obtain
good experimental data, and in what kind of format should you store your
data?
Chapter 2 provides an overview of the programming language and envi-
ronment R, which will be used in all other chapters for statistical graphs
and analyses: how do you create, load, and manipulate data to prepare for
your analysis?
Chapter 3 explains fundamental methods of descriptive statistics: how
do you describe your data, what patterns can be discerned in them, and how
can you represent such findings graphically? Chapter 4 explains fundamen-
tal methods of analytical statistics: how do you test whether the obtained
results actually mean something or have just arisen by chance? Chapter 5
introduces several multifactorial procedures, i.e. procedures, in which sev-
eral potential cause-effect relations are investigated simultaneously. This
chapter can only deal with a few selected methods and I will point you to
additional references quite a few times.
Apart from the following chapters with their think breaks and exercises
etc., the companion website for this book at <http://www.linguistics.ucsb.edu/faculty/stgries/research/sflwr/sflwr.html> (or its mirror at <http://groups.google.com/group/statforling-with-r/web/statistics-for-linguists-with-r>) is an important resource. You will have to go there anyway to download exercise files, data files, answer keys, errata etc., but at <http://groups.google.com/group/statforling-with-r> you will also find a
newsgroup “StatForLing with R”. I would like to encourage you to become
a member of that newsgroup so that you can

− ask questions about statistics for linguists (and hopefully also get an
answer from some kind soul);
− send suggestions for extensions and/or improvements or data for additional exercises;
− inform me and other readers of the book about bugs you find (and of
course receive such information from other readers). This also means
that if R commands, or code, provided in the book differs from that on
the website, then the latter is most likely going to be correct.

Lastly, I have to mention one important truth right at the start: you can-
not learn to do statistical analyses by reading a book about statistical ana-
lyses. You must do statistical analyses. There is no way that you can read this
book (or any other serious introduction to statistics) for 15 minutes in bed be-
fore turning off the light and learn to do statistical analyses, and book cov-
ers or titles that tell you otherwise are, let’s say, ‘distorting’ the truth for
marketing reasons. I strongly recommend that, as of the beginning of Chap-
ter 2, you work with this book directly at your computer so that you can
immediately enter the R code that you read and try out all relevant func-
tions from the code files from the companion website; often (esp. in Chap-
ter 5), the code files for this chapter will provide you with important extra
information, additional code snippets, further suggestions for explorations
using graphs etc., and sometimes the exercise files will provide even more
suggestions and graphs. Even if you do not understand every aspect of the
code right away, this will still help you to learn all this book tries to offer.

2. On the relevance of quantitative methods in linguistics

Above I said this book introduces you to scientific quantitative research.
But then, what are the goals of such research? Typically, one distinguishes
three goals, which need to be described because (i) they are part of a body
of knowledge that all researchers within an empirical discipline should be
aware of and (ii) they are relevant for how this book is structured.
The first goal is the description of your data on some phenomenon and
means that your data and results must be reported as accurately and revea-
lingly as possible. All statistical methods described below will help you
achieve this objective, but particularly those described in Chapter 3.
The second goal is the explanation of your data, usually on the basis of
hypotheses about what kind(s) of relations you expected to find in the data.
On many occasions, this will already be sufficient for your purposes. How-
ever, sometimes you may also be interested in a third goal, that of predic-
tion: what is going to happen in the future or when you look at different
data. Chapters 4 and 5 will introduce you to methods to pursue the goals of
explanation and prediction.
When you look at these goals, it may appear surprising that statistical
methods are not that widespread in linguistics. This is all the more surpris-
ing because such methods are very widespread in disciplines with similarly
complex topics such as psychology, sociology, economics. To some de-
gree, this situation is probably due to how linguistics has evolved over the
past decades, but fortunately it is changing now. The number of studies
utilizing quantitative methods has been increasing (in all linguistic sub-
disciplines); the field is experiencing a paradigm shift towards more empir-
ical methods. Still, even though such methods are commonplace in other
disciplines, they still often meet some resistance in linguistic circles: state-
ments such as “we’ve never needed something like that before” or “the
really interesting things are qualitative in nature anyway and are not in
need of any quantitative evaluation” or “I am a field linguist and don’t need
any of this” are far from infrequent.
Let me say this quite bluntly: such statements are not particularly rea-
sonable. As for the first statement, it is not obvious that such quantitative
methods were not needed so far – to prove that point, one would have to
show that quantitative methods could not possibly have contributed some-
thing useful to previous research, a rather ridiculous point of view – and
even then it would not necessarily be clear that the field of linguistics is not
now at a point where such methods are useful. As for the second statement,
in practice quantitative and qualitative methods go hand in hand: qualita-
tive considerations precede and follow the results of quantitative methods
anyway. To work quantitatively does not mean to just do, and report on,
some number-crunching – of course, there must be a qualitative discussion
of the implications – but, as we will see below, often a quantitative study
allows us to identify what merits a qualitative discussion. As for the last
statement: even a descriptive (field) linguist who is working to document a
near-extinct language can benefit from quantitative methods. If the chapter
on tense discusses whether the choice of a tense is correlated with indirect
speech or not, then quantitative methods can show whether there is such a
correlation. If a study on middle voice in the Athabaskan language De-
na’ina tries to identify how syntax and semantics are related to middle
voice marking, quantitative methods can reveal interesting things (cf. Berez
and Gries 2010).
The last two points lead up to a more general argument already alluded
to above: often only quantitative methods can separate the wheat from the
chaff. Let’s assume a linguist wanted to test the so-called aspect hypothesis
according to which imperfective and perfective aspect are preferred in
present and past tense respectively (cf. Shirai and Andersen 1995). Strictly
speaking, the linguist would have to test all verbs in all languages, the so-
called population. This is of course not possible so the linguist studies a
sample of sentences to investigate their verbal morphology. Let’s further
assume the linguist took and investigated a small sample of 38 sentences in
one language and got the results in Table 1.

Table 1. A fictitious distribution of tenses and aspects in a small corpus

                Imperfective   Perfective   Totals
Present tense   12             6            18
Past tense      7              13           20
Totals          19             19           38

These data look like a very obvious confirmation of the aspect hypothe-
sis: there are more present tenses with imperfectives and more past tenses
with perfectives. However, the so-called chi-square test, which could be
used for these data, shows that this tense-aspect distribution can arise by
chance with a probability p that exceeds the usual threshold of 5% adopted
in quantitative studies. Thus, the linguist would not be allowed to accept
the aspect hypothesis for the population on the basis of this sample. The
point is that an intuitive eye-balling of this table is insufficient – a statistic-
al test is needed to protect the linguist against invalid generalizations.
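
To anticipate the software side of this (R will be introduced in Chapter 2): such a test takes only a few lines of code. The following sketch is mine, not taken from the book's companion code files, but it shows the general idea.

# enter Table 1 as a matrix and run the chi-square test mentioned above
aspect.data <- matrix(c(12, 6, 7, 13), byrow=TRUE, ncol=2,
   dimnames=list(TENSE=c("present", "past"), ASPECT=c("imperfective", "perfective")))
chisq.test(aspect.data, correct=FALSE)   # p is approx. 0.051, i.e. just above the 5% threshold
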
A more eye-opening example is discussed by Crawley (2007: 314f.).
Let’s assume a study showed that two variables x and y are correlated such
that the larger the value of x, the larger the value of y; cf. Figure 1.
Note, however, that the data actually also contain information about a
third variable (with seven levels a to g) on which x and y depend. Interes-
tingly, if you now inspect what the relation between x and y looks like for
each of the seven levels of the third variable, you see that the relation sud-
denly becomes “the larger x, the smaller y”; cf. Figure 2, where the seven
levels are indicated with letters. Such patterns in data are easy to overlook
– they can only be identified through a careful quantitative study, which is
why knowledge of statistical methods is indispensable.

Figure 1. A fictitious correlation between two variables x and y

Figure 2. A fictitious correlation between two variables x and y, controlled for a third variable
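
Since the data behind Crawley's example are not reproduced here, the following small simulation (my own invented data, not Crawley's or the book's) generates the kind of pattern that Figures 1 and 2 illustrate: an overall positive correlation between x and y that turns negative within every level of the third variable.

set.seed(1)
group <- rep(letters[1:7], each=20)                  # third variable with seven levels a to g
x <- rep(1:7, each=20) + runif(140)                  # x increases across the seven levels
y <- rep(1:7, each=20) - 0.8*(x - rep(1:7, each=20)) + rnorm(140, sd=0.1)
cor(x, y)                                            # overall: strongly positive
sapply(split(data.frame(x, y), group),
   function(df) cor(df$x, df$y))                     # within each level: strongly negative
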

For students of linguistics – as opposed to experienced practitioners –
there is also a very practical issue to consider. Sometime soon you will
want to write a thesis or dissertation. Quantitative methods can be extreme-
ly useful and powerful if only to help you avoid the pitfalls posed by the
data in Table 1 and Figure 1. In the light of all this, it is hopefully obvious
now that quantitative methods have a lot to offer, and I hope this book will
provide you with some good and practical background knowledge.
This argument has an additional aspect to it. Contrary to, say, literary
criticism, linguistics is an empirical science. Thus, it is necessary – in par-
ticular for students – to know about basic methods and assumptions of em-
pirical research and statistics to be able to understand both scientific argu-
mentation in general and linguistic argumentation in particular. This is
especially relevant in the domains of, for example, contemporary quantita-
tive corpus linguistics or psycholinguistics, where data are often evaluated
with such a high degree of sophistication that a basic knowledge of the
relevant terminology is required. Without training, what do you make of
statements such as “The interaction between the size of the object and the
size of the reference point does not reach standard levels of significance:
F1, 12 = 2.18; p = 0.166; partial eta² = 0.154.”? Who knows off the top of
their head whether the fact that the average sentence length of ten female
second language learners in an experiment was about two words larger than
the average sentence length of ten male second language learners is more
likely to mean something or whether this is more likely a product of
chance? Again, such data need serious statistical analysis.

3. The design and the logic of quantitative studies

In this section, we will have a very detailed look at the design of, and the
logic underlying, quantitative studies. I will distinguish several phases of
quantitative studies and consider their structure and discuss the reasoning
employed in them. The piece of writing in which you then describe your
quantitative research should usually have four parts: introduction, methods,
results, and discussion – if you discuss more than one case study in your
writing, then typically each case study gets its own methods, results, and
discussion sections, followed by a general discussion.
With few exceptions, the discussion and exemplification in this section
will be based on a linguistic example. The example is the phenomenon of
particle placement in English, i.e. the constituent order alternation of transi-
tive phrasal verbs exemplified in (1).

(1)  a.  He picked up [NP the book].
         CONSTRUCTION: VPO (verb - particle - object)
     b.  He picked [NP the book] up.
         CONSTRUCTION: VOP (verb - object - particle)

An interesting aspect of this alternation is that, most of the time, both
constructions appear to be quite synonymous and native speakers of Eng-
lish usually cannot explain why they produce (1a) on one occasion and (1b)
on some other occasion. In the past few decades, linguists have tried to
describe, explain, and predict the alternation (cf. Gries 2003a for a recent
overview). In this section, we will use this alternation to illustrate the struc-
ture of a quantitative study.

3.1. Scouting

At the beginning of your study, you want to get an overview of previous
work on the phenomenon you are interested in, which also gives you a
sense of what still can or needs to be done. In this phase, you try to learn of
existing theories that can be empirically tested or, much more infrequently,
you enter uncharted territory in which you are the first to develop a new
theory. This is a list of the activities that are typically performed in this
phase:

− a first (maybe informal) characterization of the phenomenon;
− studying the relevant literature;
− observations of the phenomenon in natural settings to aid first inductive
generalizations;
− collecting additional information (e.g., from colleagues, students, etc.);
− deductive reasoning on your part.

If you take just a cursory look at particle placement, you will quickly
notice that there is a large number of variables that influence the construc-
tional choice. A variable is defined as a symbol for a set of states, i.e., a
characteristic that – contrary to a constant – can exhibit at least two differ-
ent states or levels (cf. Bortz and Döring 1995: 6 or Bortz 2005: 6) or, more
intuitively, as “descriptive properties” (Johnson 2008: 4) or as measure-
ments of an item that can be either numeric or categorical (Evert, p.c.).

Variables that might influence particle placement include the following:1

− COMPLEXITY: is the direct object a SIMPLE DIRECT OBJECT (e.g., the
book), a PHRASALLY-MODIFIED DIRECT OBJECT (e.g., the brown book or
the book on the table) or a CLAUSALLY-MODIFIED DIRECT OBJECT (e.g.,
the book I had bought in Europe) (cf., e.g., Fraser 1966);
− LENGTH: the length of the direct object (cf., e.g., Chen 1986, Hawkins
1994);
− DIRECTIONALPP: the PRESENCE of a directional prepositional
phrase (PP) after the transitive phrasal verb (e.g. in He picked the book
up from the table) or its ABSENCE (cf. Chen 1986);
− ANIMACY: whether the referent of the direct object is INANIMATE as in
He picked up the book or ANIMATE as in He picked his dad up (cf. Gries
2003a: Ch. 2);
− CONCRETENESS: whether the referent of the direct object is ABSTRACT as
in He brought back peace to the region or CONCRETE as in He picked his
dad up at the station (cf. Gries 2003a: Ch. 2);
− TYPE: is the part of speech of the head of the direct object a PRONOUN
(e.g., He picked him up this morning), a SEMIPRONOUN (e.g., He picked
something up from the floor), a LEXICAL NOUN (e.g., He picked people
up this morning) or a PROPER NAME (e.g., He picked Peter up this morn-
ing) (cf. Van Dongen 1919).

During this early phase, it is often useful to summarize your findings in
tabular format. One possible table summarizes which studies (in the col-
umns) discussed which variable (in the rows). On the basis of the above
list, this table could look like Table 2 and allows you to immediately rec-
ognize (i) which variables many studies have already looked at and (ii) the
studies that looked at most variables. Another table summarizes the varia-
ble levels and their preferences for one of the two constructions. Again, on
the basis of the above list, this table would look like Table 3, and you can
immediately see that, for some variables, only one level has been asso-
ciated with a particular constructional preference.

1. I print variables in small caps; their levels in italicized small caps.


Table 2. Summary of the literature on particle placement I

                 Fraser   Chen     Hawkins   Gries     Van Dongen
                 (1966)   (1986)   (1994)    (2003a)   (1919)
COMPLEXITY       x
LENGTH                    x        x
DIRECTIONALPP             x
ANIMACY                                      x
CONCRETENESS                                 x
TYPE                                                   x

Table 3. Summary of the literature on particle placement II

                 Variable level for          Variable level for
                 CONSTRUCTION: VPO           CONSTRUCTION: VOP
COMPLEXITY       PHRASALLY-MODIFIED,
                 CLAUSALLY-MODIFIED
LENGTH           LONG
DIRECTIONALPP    ABSENCE                     PRESENCE
ANIMACY          INANIMATE                   ANIMATE
CONCRETENESS     ABSTRACT                    CONCRETE
TYPE                                         PRONOMINAL

This table already suggests that CONSTRUCTION: VPO is used with cog-
nitively more complex direct objects: long complex noun phrases with
lexical nouns referring to abstract things. CONSTRUCTION: VOP on the other
hand is used with the opposite preferences. For an actual study, this first
impression would of course have to be phrased more precisely. In addition,
you should also compile a list of other factors that might either influence
particle placement directly or that might influence your sampling of sen-
tences or experimental subjects or … Much of this information would be
explained and discussed in the first section of the empirical study, the in-
troduction.

3.2. Hypotheses and operationalization

Once you have an overview of the phenomenon you are interested in and
have decided to pursue an empirical study, you usually have to formulate
hypotheses. What does that mean and how do you proceed? To approach
this issue, let us see what hypotheses are and what kinds of hypotheses
there are.

3.2.1. Scientific hypotheses in text form

Following Bortz and Döring (1995: 7), we will consider a hypothesis to be
a statement that meets the following three criteria:

− it is a general statement that is concerned with more than just a singular
event;
− it is a statement that at least implicitly has the structure of a conditional
sentence (if …, then … or the …, the …) or can be paraphrased as one;
− it is potentially falsifiable, which means it must be possible to think of
events or situations or states of affairs that contradict the statement.
Most of the time, this implies that the scenario described in the condi-
tional sentence must also be testable. However, these two characteristics
are not identical. There are statements that are falsifiable but not testable
such as “If children grow up without any linguistic input, then they will
grow up to speak Latin.” This statement is falsifiable, but for obvious
ethical reasons not testable (cf. Steinberg 1993: Section 3.1).

The following statement is a scientific hypothesis according to the
above criteria: “Reducing the minimum age to obtain a driver’s license
from 18 years to 17 years in European countries will double the number of
traffic accidents in these countries.” This statement is a general statement
that is not restricted to just one point of time, just one country, etc. Also,
this statement can be paraphrased as a conditional sentence: “If one reduces
the minimum age …, then the number of traffic accidents will double.”
Lastly, this statement is falsifiable because it is conceivable – actually, very
likely – that if one reduced the minimum age, the number of traffic
accidents would not double. Accordingly, the following statement is not a
scientific hypothesis: “Reducing the minimum age to obtain a driver’s li-
cense from 18 years to 17 years in European countries may double the
number of traffic accidents in these countries.” This statement is testable
because the minimum age could be reduced, but it is not falsifiable since
the word may basically means ‘may or may not’: the statement is true if the
number of traffic accidents doubles, but also if it does not. Put differently,
whatever one observed after the reduction of the minimum age, it would be
compatible with the statement.
With regard to particle placement, the following statements are exam-
ples for scientific hypotheses:

− if the direct object of a transitive phrasal verb is syntactically complex,
then native speakers will produce the constituent order VPO more often
than when the direct object is syntactically simple;
− if the direct object of a transitive phrasal verb is long, then native speak-
ers will produce the constituent order VPO more often than when the di-
rect object is short;
− if a verb-particle construction is followed by a directional PP, then na-
tive speakers will produce the constituent order VOP more often than
when no such directional PP follows (and analogously for all other va-
riables mentioned in Table 3).

When you formulate a hypothesis, it is also important that the notions
that you use in the hypothesis are formulated precisely. For example, if a
linguistic theory uses notions such as cognitive complexity or availability in
discourse or even something as seemingly straightforward as constituent
length, then it will be necessary that the theory can define what exactly is
meant by this; in Section 1.3.2.2 we will deal with this aspect in much more
detail.
According to what we have said so far, a hypothesis consists of two
parts, an if part (I) and a then part (D). The I stands for independent varia-
ble, the variable in the if part of the hypothesis that is often, but not neces-
sarily, the cause of the changes/effects in the then part of the hypothesis.
The D on the other hand stands for dependent variable, the variable in the
then part of the hypothesis and whose values, variation, or distribution is to
be explained.2
With this terminology, we can now paraphrase the above example hypo-
theses. In the first, I is the syntactic complexity of the direct object
(COMPLEXITY with the three levels SIMPLE, PHRASALLY-MODIFIED, and
CLAUSALLY-MODIFIED), and D is the choice of construction
(CONSTRUCTION with the two levels VPO and VOP). In the second hypothe-
sis, I is the length of the direct object (LENGTH with values from 1 to x),
and D is again the choice of construction (CONSTRUCTION with the two
levels VPO and VOP), etc.
A second kind of hypothesis only contains one (dependent) variable, but
no independent variable with which the dependent variable’s behavior is
explained. In such cases, the hypothesis is ‘only’ a statement about what
the values, variation, or distribution of the dependent variable look like. In
what follows, we will deal with both kinds of hypotheses (with a bias toward
the former).

2. Variables such as moderator or confounding variables will not be discussed here; cf.
Matt and Cook (1994).
Thus, we can also define a scientific hypothesis as a statement about
one variable, or a statement about the relation(s) between two or more va-
riables in some context which is expected to also hold in similar contexts
and/or for similar objects in the population. Once the potentially relevant
variables to be investigated have been identified, you formulate a hypothe-
sis by relating the relevant variables in the appropriate conditional sentence
or some paraphrase thereof.
Once your hypothesis has been formulated in the above text form, you
also have to define – before you collect data! – which situations or states of
affairs would falsify your hypothesis. Thus, in addition to your own hypo-
thesis – the so-called alternative hypothesis H1 – you now also formulate
another hypothesis – the so-called null hypothesis H0 – which is the logical
opposite to your alternative hypothesis. Often, that means that you get the
null hypothesis by inserting the word not into the alternative hypothesis.
For the first of the above three hypotheses, this is what both hypotheses
would look like:

H1: If the direct object of a transitive phrasal verb is syntactically com-
plex, then native speakers will produce the constituent order VPO
more often than when the direct object is syntactically simple.
H0: If the direct object of a transitive phrasal verb is syntactically com-
plex, then native speakers will not produce the constituent order
VPO more often than when the direct object is syntactically simple.

In the vast majority of cases, the null hypothesis states that the depen-
dent variable is distributed randomly (or in accordance with some well-
known mathematically definable distribution such as the normal distribu-
tion), or it states that there is no difference between (two or more) groups
or no relation between the independent variable(s) and the dependent vari-
able(s) and that whatever difference or effect you get is only due to chance
or random variation. However, you must distinguish two kinds of alterna-
tive hypotheses: directional alternative hypotheses not only predict that
there is some kind of effect or difference or relation but also the direction
of the effect – note the expression “more often” in the above alternative
hypothesis. On the other hand, non-directional alternative hypotheses only
predict that there is some kind of effect or difference or relation without
specifying the direction of the effect. A non-directional alternative hypo-
thesis for the above example would therefore be this:

H1 non-directional: If the direct object of a transitive phrasal verb is syntactically complex, then native speakers will produce the constituent order VPO differently often than when the direct object is syntactically simple.

Thus, H0 states that there is no correlation between the syntactic com-
plexity of a direct object and the constructional choice and that if you nev-
ertheless find one in the sample, then this is only a chance effect. Both H1’s
state that there is a correlation – thus, you should also find one in your
sample. Both of these hypotheses must be formulated before the data col-
lection so that one cannot present whatever result one gets as the ‘pre-
dicted’ one. Of course, all of this has to be discussed in the introduction of
the written version of your paper.

3.2.2. Operationalizing your variables

Formulating your hypotheses in the above text form is not the last step in
this part of the study, because it is as yet unclear how the variables invoked
in your hypotheses will be investigated. For example and as mentioned
above, a notion such as cognitive complexity can be defined in many dif-
ferent and differently useful ways, and even something as straightforward
as constituent length is not always as obvious as it may seem: do we mean
the length of, say, a direct object in letters, phonemes, syllables, mor-
phemes, words, syntactic nodes, etc.? Therefore, you must find a way to
operationalize the variables in your hypothesis. This means that you decide
what will be observed, counted, measured etc. when you investigate your
variables.
For example, if you wanted to operationalize a person’s KNOWLEDGE
OF A FOREIGN LANGUAGE, you could do this, among other things, as fol-
lows:

− COMPLEXITY OF THE SENTENCES that a person can form in the language in a test (only main clauses? also compound sentences, also complex sentences?);
− AMOUNT OF TIME in seconds between two errors in conversation;
− NUMBER OF ERRORS PER 100 WORDS in a text that the person writes in
90 minutes.

What is wrong with the following two proposals for operationalization?

− AMOUNT OF ACTIVE VOCABULARY;
− AMOUNT OF PASSIVE VOCABULARY.

THINK
BREAK

These proposals are not particularly useful because, while knowing
these amounts would certainly be very useful to assess somebody’s know-
ledge of a foreign language, they are not directly observable: it is not clear
what exactly you would count or measure. If you in turn operationalized
the amount of passive vocabulary on the basis of the number of words a
person knows in a vocabulary test (involving, say, words from different
frequency bands) or synonym finding test, then you know what to count –
but the above is too vague.
From the above it follows that operationalizing involves using numbers
to represent states of variables. Such a number may be a measurement (402
ms reaction time, 12 words in a synonym finding test, the direct object is
four syllables long), but discrete non-numerical states can also be coded
using numbers. Thus, variables are not only distinguished according to
their role in the hypothesis – independent vs. dependent – but also accord-
ing to their level of measurement:

− nominal or categorical variables are variables with the lowest informa-
tion value. Different values of these variables only reveal that the ob-
jects with these different values exhibit different characteristics. Such
variables are called nominal variables (or binary variables) when they
can take on only two different levels; such variables are called categori-
cal variables when they can take on three or more different levels. In
our example of particle placement, the variable DIRECTIONALPP could
be coded with 1 for the ABSENCE and 2 for PRESENCE, but note that the
fact that the value for PRESENCE is twice as large as that for ABSENCE
does not mean anything (other than that the values are different) – theo-
retically, you could code absence with 34.2 and presence with 7.3 Other

3. Usually, nominal variables are coded using 0 and 1. There are two reasons for that: (i) a
conceptual reason: often, such nominal variables can be understood as the presence of
something ( = 1) or the absence of something ( = 0) or even as a ratio variable (cf. be-
low); i.e., in the example of particle placement, the nominal variable CONCRETENESS
could be understood as a ratio variable NUMBER OF CONCRETE REFERENTS; (ii) for reasons
I will not discuss here, it is computationally useful to use 0 and 1 and, somewhat
counterintuitively, some statistical software even requires that kind of coding.

typical examples for nominal or categorical variables are ANIMACY
(ANIMATE vs. INANIMATE), CONCRETENESS (CONCRETE vs. ABSTRACT),
STRESS (STRESSED vs. UNSTRESSED), AKTIONSART (ACTIVITY vs.
ACCOMPLISHMENT vs. ACHIEVEMENT vs. STATE) etc.
− ordinal variables not only distinguish objects as members of different
categories the way that nominal and categorical variables do – they also
allow one to rank-order the objects in a meaningful way. However, differ-
ences between ranks cannot be meaningfully compared. Grades are a
typical example: a student with an A (4 grade points) scored a better re-
sult than a student with a C (2 grade points), but just because 4 is two
times 2, that does not necessarily mean that the A-student did exactly
twice as well as the C-student – depending on the grading system, the
A-student may have given three times as many correct answers as the C-
student. In the particle placement example, the variable COMPLEXITY is
an ordinal variable if you operationalize it as above: SIMPLE NP (1) vs.
PHRASALLY-MODIFIED (2) vs. CLAUSALLY-MODIFIED (3). It is useful to
make the ranks compatible with the variable: if the variable is called
SYNTACTIC COMPLEXITY, then large rank numbers should represent
large degrees of complexity, i.e., complex direct objects. If, on the other
hand, the variable is called SYNTACTIC SIMPLICITY, then large rank
numbers should represent large degrees of simplicity, simple direct ob-
jects. Other typical examples are SOCIO-ECONOMIC STATUS or DEGREE
OF IDIOMATICITY or PERCEIVED VOCABULARY DIFFICULTY (e.g., LOW/1
vs. INTERMEDIATE/2 vs. HIGH/3).
− ratio variables not only distinguish objects as members of different
categories and with regard to some rank ordering – they also allow one to
meaningfully compare the differences and ratios between values. For
example, LENGTH IN SYLLABLES is such a ratio variable: when one ob-
ject is six syllables long and another is three syllables long, then the first
is of a different length than the second (the nominal information), the
first is longer than the second (the ordinal information), and it is exactly
twice as long as the second. Other typical examples are annual salaries,
reaction times in milliseconds.4

4. Strictly speaking, there is also a class of so-called interval variables, which we are not
going to discuss here separately from ratio variables.

These differences can be clearly illustrated in a table. Table 4 is a part of a
fictitious data set on lengths and degrees of complexity of subjects and
objects – which column contains which kind of variable?

Table 4. A fictitious data set of subjects and objects

DATA POINT   COMPLEXITY     DATA SOURCE   SYLLLENGTH   GRMRELATION
1            HIGH           D8Y           6            OBJECT
2            HIGH           HHV           8            SUBJECT
3            LOW            KB0           3            SUBJECT
4            INTERMEDIATE   KB2           4            OBJECT

THINK
BREAK

DATA POINT is a categorical variable: every data point gets its own
number so that you can uniquely identify it, but the number as such may
represent little more than the order in which the data points were entered.
COMPLEXITY is an ordinal variable with three levels. DATA SOURCE is
another categorical variable: the levels of this variable are file names from
the British National Corpus. SYLLLENGTH is a ratio variable since the third
object can correctly be described as half as long as the first. GRMRELATION
is a nominal/categorical variable. These distinctions are very important
since these levels of measurement determine which statistical tests can and
cannot be applied to a particular question and data set.
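
To anticipate Chapter 2 once more: the columns of Table 4 could be represented in R roughly as follows. This is a sketch with object names of my own choosing, not necessarily the ones used later in the book, but it shows how the levels of measurement are made explicit in the software.

GRMRELATION <- factor(c("OBJECT", "SUBJECT", "SUBJECT", "OBJECT"))   # nominal/categorical
DATASOURCE  <- factor(c("D8Y", "HHV", "KB0", "KB2"))                 # categorical
COMPLEXITY  <- ordered(c("HIGH", "HIGH", "LOW", "INTERMEDIATE"),
   levels=c("LOW", "INTERMEDIATE", "HIGH"))                          # ordinal
SYLLLENGTH  <- c(6, 8, 3, 4)                                         # ratio-scaled
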
The issue of operationalization is one of the most important of all. If
you do not operationalize your variables properly, then the whole study
might be useless since you may actually end up not measuring what you
want to measure. Without an appropriate operationalization, the validity of
your study is at risk. Let us briefly return to an example from above. If we
investigated the question of whether subjects in English are longer than
direct objects and looked through sentences in a corpus, we might come
across the following sentence:

(2) The younger bachelors ate the nice little parrot.

The result for this sentence depends on how LENGTH is operationalized.
If LENGTH is operationalized as number of morphemes, then the subject is
longer than the direct object: the subject gets the value 5 (The, young, com-
parative -er, bachelor, plural s) and the direct object gets the value 4 (the,
nice, little, parrot). However, if LENGTH is operationalized as number of
words, the subject (3 words) is shorter than the direct object (4 words).
And, if LENGTH is operationalized as number of characters without spaces,
the subject and the direct object are equally long (19 characters). In this
case, thus, the operationalization alone determines the result.
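
Two of these three operationalizations are easy enough to compute automatically; the following lines are a quick sketch of my own (counting morphemes, of course, cannot be mechanized this simply).

subj <- "The younger bachelors"; obj <- "the nice little parrot"
sapply(c(subj, obj), function(x) length(strsplit(x, " ")[[1]]))   # number of words: 3 and 4
sapply(c(subj, obj), function(x) nchar(gsub(" ", "", x)))         # characters without spaces: 19 and 19
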

3.2.3. Scientific hypotheses in statistical/mathematical form

Once you have formulated both your own (alternative) hypothesis and the
logically complementary (null) hypothesis and have defined how the va-
riables will be operationalized, you also formulate two statistical versions
of these hypotheses. That is, you first formulate the two text hypotheses,
and in the statistical hypotheses you then express the numerical results you
expect on the basis of the text hypotheses.
Statistical hypotheses usually involve one of three different mathemati-
cal forms: there are hypotheses about frequencies or frequency differences,
hypotheses about means or differences of means, and hypotheses about
correlations. (Rather infrequently, we will also encounter hypotheses about
dispersions and distributions.) We begin by looking at a simple example
regarding particle placement: if a verb-particle construction is followed by
a directional PP, then native speakers will produce the constituent order
VOP more often than when no such directional PP follows. To formulate the
statistical hypothesis counterpart to this text form, you have to answer the
question, if I investigated, say, 200 sentences with verb-particle construc-
tions in them, how would I know whether this hypothesis is correct or not?
(As a matter of fact, you actually have to proceed a little differently, but we
will get to that later.) One possibility of course is to count how often
CONSTRUCTION: VPO and CONSTRUCTION: VOP are followed by a direc-
tional PP, and if there are more directional PPs after CONSTRUCTION: VOP
than after CONSTRUCTION: VPO, then this provides support to the alterna-
tive hypothesis. Thus, this possibility involves frequencies and the statistic-
al hypotheses are:

H1 directional: n dir. PPs after CONSTRUCTION: VPO < n dir. PPs after CONSTRUCTION: VOP
H1 non-directional: n dir. PPs after CONSTRUCTION: VPO ≠ n dir. PPs after CONSTRUCTION: VOP
H0: n dir. PPs after CONSTRUCTION: VPO = n dir. PPs after CONSTRUCTION: VOP5

5. Note: I said above that you often obtain the null hypothesis by inserting not into the
alternative hypothesis. Thus, when the statistical version of the alternative hypothesis
involves a “<“, then you might expect the statistical version of the null hypothesis to
contain a “≥”. However, we will follow the usual convention also mentioned above that
a null hypothesis states the absence of a difference/effect/correlation etc., which is why
we write “=”. You will see below that the cases covered by “≥” will still be invoked in
the computations that are based on these statistical hypotheses.

Just in passing: what do these statistical hypotheses presuppose?

THINK
BREAK

They presuppose that you investigate equally many instances of both
constructions because otherwise a small observed frequency of directional
PPs after CONSTRUCTION: VOP – the frequency we expect to be large –
could simply be due to a small overall frequency of CONSTRUCTION: VOP.
For the variable COMPLEXITY, you could formulate similar hypotheses
based on frequencies, if COMPLEXITY is operationalized on the basis of, for
example, the three levels mentioned above.
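
In terms of data, these statistical hypotheses refer to a cross-tabulation of the two variables. The following sketch uses a few invented data points of my own merely to show what such a table looks like in R.

CONSTRUCTION  <- factor(c("VPO", "VOP", "VOP", "VPO", "VOP", "VPO"))
DIRECTIONALPP <- factor(c("ABSENCE", "PRESENCE", "PRESENCE", "ABSENCE", "ABSENCE", "ABSENCE"))
table(CONSTRUCTION, DIRECTIONALPP)   # the observed frequencies the hypotheses are about
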
Let us now turn to an example involving statistical hypotheses based on
means: if the direct object of a transitive phrasal verb is long, then native
speakers will produce the constituent order VPO more often than when the
direct object is short. One probably obvious way to proceed is to measure
the average lengths of direct objects in CONSTRUCTION: VPO and
CONSTRUCTION: VOP and then compare these average lengths to each oth-
er. You could therefore write:

H1 directional: mean Length of the direct object in CONSTRUCTION: VPO > mean Length of the direct object in CONSTRUCTION: VOP
H1 non-directional: mean Length of the direct object in CONSTRUCTION: VPO ≠ mean Length of the direct object in CONSTRUCTION: VOP
H0: mean Length of the direct object in CONSTRUCTION: VPO = mean Length of the direct object in CONSTRUCTION: VOP
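
Descriptively, the quantities that these hypotheses compare are just two means. With a handful of invented data points of my own, they could be computed in R as follows.

CONSTRUCTION <- factor(c("VPO", "VPO", "VPO", "VOP", "VOP", "VOP"))
LENGTH       <- c(5, 7, 6, 2, 3, 1)   # lengths of the direct objects in syllables
tapply(LENGTH, CONSTRUCTION, mean)    # one mean length per construction
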

With similarly obvious operationalizations, the other text hypotheses
from above can be transformed into analogous statistical hypotheses. Now,
and only now, we finally know what needs to be observed in order for us to
reject the null hypothesis. (We will look at hypotheses involving correla-
tions, dispersion, and distributions later.)
All hypotheses discussed so far were concerned with the simple case
where a sample of verb-particle constructions was investigated regarding
whether the two constructions differ with regard to one independent varia-
ble (e.g., DIRECTIONALPP: PRESENT vs. ABSENT). The statistical methods to


handle such cases are the subject of Chapter 4. However, things are often
not that simple: most phenomena are multifactorial in nature, which means
dependent variables are usually influenced by, or at least related to, more
than one independent variable. There are basically two different ways in
which several independent and dependent variables may be related, which
we will explore on the basis of the example involving constituent lengths.
Let us assume you wished to study whether the lengths of constituents –
captured in the dependent variable LENGTH – are dependent on two inde-
pendent variables, the variable GRMRELATION (with the two levels
SUBJECT and OBJECT) and the variable CLAUSE TYPE (with the two levels
MAIN and SUBORDINATE). Let us further assume you did a small pilot
study in which you investigated 120 constituents that are distributed as
shown in Table 5. Let us finally assume you determined the syllabic
lengths of all 120 constituents to compute the means for the four variable
level combinations – subjects in main clauses, subjects in subordinate
clauses, objects in main clauses, and objects in subordinate clauses – and
obtained the following results:

− on average, subjects are shorter than direct objects;


− on average, the subjects and objects in main clauses are shorter than the
subjects and objects in subordinate clauses.

Table 5. A fictitious data set of subjects and objects


                        GRMRELATION: SUBJECT    GRMRELATION: OBJECT    Totals
CLAUSETYPE: MAIN                 30                       30              60
CLAUSETYPE: SUBORD               30                       30              60
Totals                           60                       60             120

The interesting thing is that these results can come in different forms.
On the one hand, the effects of the two independent variables can be addi-
tive. That means the combination of the two variables has the effect that
you would expect on the basis of each variable’s individual effect. Since
subjects are short, as are constituents in main clauses, according to an addi-
tive effect subjects in main clauses should be the shortest constituents, and
objects in subordinate clauses should be longest. This result, which is the
one that a null hypothesis would predict, is represented in Figure 3.

Figure 3. Interaction plot for GRMRELATION × CLAUSE TYPE 1

However, it is also possible that the two independent variables interact.


Two or more variables interact if their joint effect on the dependent varia-
ble is not predictable from their individual effects on the same dependent
variable. In our case: when objects are longer than subjects and constituents
in subordinate clauses longer than constituents in main clauses, but at the
same time we find that objects in subordinate clauses are fairly short, then
we have an interaction. This scenario is represented in Figure 4.
As before, subjects are on average shorter than objects. As before, main
clause constituents are shorter on average than subordinate clause constitu-
ents. But the combination of these variable levels does not have the ex-
pected additive effect: objects in general are longer than subjects – but
objects in subordinate clauses are not. This kind of interaction can often be
recognized easily in a plot because the lines connecting means intersect or
at least have slopes with different signs.
Yet another kind of interaction is shown in Figure 5. At first sight, this
does not look like an interaction: the means of all individual variable levels
are as before and even the null hypothesis expectation that objects in sub-
ordinate clauses are the longest constituents is borne out. Why is this still
an interaction?

Figure 4. Interaction plot for GRMRELATION × CLAUSE TYPE 2

THINK
BREAK

This is an interaction because even though the lines do not intersect and
both have a positive slope, the slope of the line for objects is still much
higher than that for subjects. Put differently, while the difference between
main clause subjects and main clause objects is only about one syllable,
that between subordinate clause subjects and subordinate clause objects is
approximately four syllables. This unexpected difference is the reason why
this scenario is also considered an interaction.
Thus, if you have more than one independent variable, you often need
to consider interactions for both the formulation of hypotheses and the
subsequent evaluation. Such issues are often the most interesting but also
the most complex. We will look at some such methods in Chapter 5.
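
Although R will only be introduced in the next chapter, a pattern like the one in
Figure 5 can already be sketched with a few lines of code. The mean lengths used
below are invented purely for illustration; they are merely chosen to mimic the
pattern just described (a difference of about one syllable in main clauses, but of
about four syllables in subordinate clauses):

> grmrelation <- factor(rep(c("subject", "object"), each=2))
> clausetype <- factor(rep(c("main", "subordinate"), 2))
> mean.length <- c(3, 4.5, 4, 8.5)   # hypothetical mean lengths in syllables
> interaction.plot(clausetype, grmrelation, mean.length)

The two lines in the resulting plot do not intersect, but their clearly different
slopes reveal the interaction.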
One important recommendation following from this is that, when you
read published results, you should always consider whether other indepen-
dent variables may have contributed to the results. To appreciate how im-
portant this kind of thinking can be, let us look at a non-linguistic example.

Figure 5. Interaction plot for GRMRELATION × CLAUSE TYPE 3

The 11/29/2007 issue of the Santa Barbara Independent discussed a


study according to which illegal Mexican immigrants use Californian
emergency rooms only half as much per capita as legal Mexican immi-
grants. This finding was used to support the position that illegal Mexican
immigrants do not put a strain on the US American health system. In re-
sponse to this article, a letter to the editor of 12/6/2007, however, points out
that said illegal Mexican immigrants are on average much younger than
legal Mexican immigrants: of the illegal Mexican immigrants, only 5% are
older than 50, but of the legal Mexican immigrants 37% are. Since younger
people are in general less likely to need medical assistance, the letter writer
argues that the fact that illegal Mexican immigrants require less medical
assistance does not prove that they do not overuse the medical system –
they are just too young to require as much medical help. You should recog-
nize what happened here. Technically speaking, the letter writer introduced
an additional independent variable, AGE, to the analysis of the dependent
variable NUMBER OF ER PATIENTS in terms of the independent variable
IMMIGRATION STATUS: LEGAL vs. ILLEGAL. In this particular case, it is ob-
vious that younger people need less medical care than older people – so
AGE appears to play some role. Note, however, that contrary to what the
letter to the editor appears to assume, it is still far from obvious that that is
also relevant for a discussion of ER patients since one might just as well
argue that the larger amount of medical assistance required by older people
is unlikely to consist mainly of ER care, but that is a different story.
While we are not going to solve the issue here, it should have become

obvious that considering more than one variable or more variables than are
mentioned in some context can be interesting and revealing. However, this
does not mean that you should always try to add as many additional va-
riables as possible. An important principle that limits the number of addi-
tional variables to be included is called Occam’s razor. This rule (“entia
non sunt multiplicanda praeter necessitatem”) states that additional va-
riables should only be included when it’s worth it, i.e., when they increase
the explanatory and predictive power of the analysis substantially. How
exactly that decision is made will be explained especially in Chapter 5.

Recommendation(s) for further study


− Good and Hardin (2006: 154f.) for another example to ‘think multifac-
torially’
− <http://en.wikipedia.org/wiki/Occam’s_Razor> and Crawley (2007:
325) on Occam’s razor

3.3. Data collection and storage

Only after all variables have been operationalized and all hypotheses have
been formulated do you actually collect your data. For example, you run an
experiment or do a corpus study or … However, you will hardly ever study
the whole population of events but a sample so it is important that you
choose your sample such that it is representative and balanced with respect
to the population to which you wish to generalize. Here, I call a sample
representative when the different parts of the population are reflected in the
sample, and I call a sample balanced when the sizes of the parts in the pop-
ulation are reflected in the sample. Imagine, for example, you want to study
the frequencies and the uses of the discourse marker like in the speech of
Californian adolescents. To that end, you want to compile a corpus of Cali-
fornian adolescents’ speech by asking some Californian adolescents to
record their conversations. In order to obtain a sample that is representative
and balanced for the population of all the conversations of Californian ado-
lescents, the proportions of the different kinds of conversations in which
the subjects engage would ideally be approximately reflected in the sample.
For example, a good sample would not just include the conversations of the
subjects with members of their peer group(s), but also conversations with
their parents, teachers, etc., and if possible, the proportions that all these
different kinds of conversations make up in the sample would correspond
to their proportions in real life, i.e. the population.

While it is important you try to stick to these rules as much as possible,


why are they often more of a theoretical ideal?

THINK
BREAK

This is often just a theoretical ideal because we don’t know all parts and
their proportions in the population. Who would dare say how much of an
average Californian adolescent’s discourse – and what is an average Cali-
fornian adolescent? – takes place within his peer group, with his parents,
with his teachers etc.? And how would we measure the proportion – in
words? sentences? minutes? Still, even though these considerations will
often only result in estimates, you must think about the composition of your
sample(s) just as much as you think about the exact operationalization of
your variables. If you do not do that, then the whole study may well fail
because you may be unable to generalize from whatever you find in your
sample to the population. One important rule in this connection is to choose
the elements that enter into your sample randomly, to randomize. For ex-
ample, if the adolescents who participate in your study receive a small re-
cording device with a lamp and are instructed to always record their con-
versations when the lamp lights up, then you could perhaps send a signal to
the device at random time intervals (as determined by a computer). This
would make it more likely that you get a less biased sample of many differ-
ent kinds of conversational interaction, which would then reflect the popu-
lation better.
Let us briefly look at a similar example from the domain of first lan-
guage acquisition. It was found that the number of questions in recordings
of caretaker-child interactions was surprisingly high. Some researchers
suspected that the reason for that was parents’ (conscious or unconscious)
desire to present their child as very intelligent so that they asked the child
“And what is that?” questions all the time so that the child could show how
many different words he knows. Some researchers then changed their sam-
pling method such that the recording device was always in the room, but
the parents did not know exactly when it would record caretaker-child inte-
raction. The results showed that the proportion of questions decreased con-
siderably …
In corpus-based studies you will often find a different kind of randomi-
zation. For example, you will find that a researcher first retrieved all in-

stances of the word he is interested in and then sorted all instances accord-
ing to random numbers. When the researcher then investigates the first
20% of the list, he has a random sample. However you do it, randomization
is one of the most important principles of data collection.
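
Even though R will only be introduced in the next chapter, this kind of
randomization is easy to sketch already; the object names below are made up
for illustration, and the only function that matters is sample, which returns its
input in a random order:

> corpus.hits <- paste("concordance line", 1:500)   # a stand-in for 500 retrieved matches
> randomized.hits <- sample(corpus.hits)            # sort the matches according to random numbers
> random.sample <- randomized.hits[1:100]           # the first 20% of 500 = 100 matches: a random sample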
Once you have collected your data, you have to store them in a format
that makes them easy to annotate, manipulate, and evaluate. I often see
people – students as well as seasoned researchers – print out long lists of
data points, which are then annotated by hand, or people annotate concor-
dance lines from a corpus in a text processing software. This may seem
reasonable for small data sets, but it doesn’t work or is extremely inconve-
nient for larger ones, and the generally better way of handling the data is in
a spreadsheet software (e.g., OpenOffice.org Calc) or a database, or in R.
However, there is a set of ground rules that needs to be borne in mind.
First, the first row contains the names of all variables. Second, each of
the other rows represents one and only one data point. Third, the first col-
umn just numbers all n cases from 1 to n so that every row can be uniquely
identified and so that you can always restore one particular ordering (e.g., the
original one). Fourth, each of the remaining columns represents one and
only one variable with respect to which every data point gets annotated. In
a spreadsheet for a corpus study, for example, one additional column may
contain the name of the corpus file in which the word in question is found;
another column may provide the line of the file in which the word was
found. In a spreadsheet for an experimental study, one column should con-
tain the name of the subject or at least a unique number identifying the
subject; other columns may contain the age of the subject, the sex of the
subject, the exact stimulus or some index representing the stimulus the
subject was presented with, the order index of a stimulus presented to a
subject (so that you can test whether a subject’s performance changes sys-
tematically in the course of the experiment), …
To make sure these points are perfectly clear, let us look at two exam-
ples. Let’s assume for your study of particle placement you had looked at a
few sentences and counted the number of syllables of the direct objects.
First, a question: in this design, what is the dependent variable and what is
the independent variable?

THINK
BREAK

The independent variable is the ratio variable LENGTH (in syllables),


which can take on all sorts of positive integer values. The dependent varia-
ble is the nominal variable CONSTRUCTION, which can be either VPO or
VOP. When all hypotheses were formulated and, subsequently, data were
collected and coded, then I sometimes see a format such as the one
represented in Table 6.

Table 6. A not-so-good table 1


                      LENGTH: 2    LENGTH: 3    LENGTH: 5    LENGTH: 6
CONSTRUCTION: VPO         ||           ||           |||          ||
CONSTRUCTION: VOP        ||||          |||           ||           |

As a second example, let’s look at the hypothesis that subjects and di-
rect objects are differently long (in words). Again the question: what is the
dependent variable and what is the independent variable?

THINK
BREAK

The independent variable is the nominal variable RELATION, which can


be either SUBJECT or OBJECT. The dependent variable is LENGTH, which
can take on positive integer values. If you formulated all four hypotheses
(alternative hypothesis: text form and statistical form; null hypothesis: text
form and statistical form) and then looked at the small corpus in (3),

(3) a. The younger bachelors ate the nice little cat.


b. He was locking the door.
c. The quick brown fox hit the lazy dog.

then your spreadsheet should not look like Table 7.

Table 7. A not-so-good table 2


Sentence Subj Obj
The younger bachelors ate the nice little cat. 3 4
He was locking the door. 1 2
The quick brown fox hit the lazy dog. 4 3

Both Table 6 and Table 7 violate all of the above rules. In Table 7, for
example, every row represents two data points, not just one, namely one
data point representing some subject’s length and one representing the
length of the object from the same sentence. Also, not every variable is
represented by one and only one column – rather, Table 7 has two columns
with data points, each of which represents one level of an independent vari-
able, not one variable. Before you read on, how would you have to reorgan-
ize Table 7 to make it compatible with the above rules?

THINK
BREAK

This is a much better way to store the data.

Table 8. A much better coding of the data in Table 7


CASE SENT# SENTENCE RELATION LENGTH
1 1 The younger bachelors ate the nice subj 3
little cat.
2 1 The younger bachelors ate the nice obj 4
little cat.
3 2 He was locking the door. subj 1
4 2 He was locking the door. obj 2
5 3 The quick brown fox hit the lazy subj 4
dog.
6 3 The quick brown fox hit the lazy obj 3
dog.

In this version, every data point has its own row and is characterized ac-
cording to the two variables in their respective columns. An even more
comprehensive version may now even include one column containing just
the subjects and objects so that particular cases can be found more easily.
In the first row of such a column, you would find The younger bachelors, in
the second row of the same column, you would find the nice little cat etc.
The same logic applies to the improved version of Table 6, which should
look like Table 9.
With very few exceptions, this is the format in which you should always
save your data.6 Ideally, you enter the data in this format into a spreadsheet

6. There are some more complex statistical techniques which can require different formats,
but in the vast majority of cases, the standard format discussed above is the one that you
will need and that will allow you to easily switch to another format. Also, for reasons
that will only become obvious much later in Chapter 5, I myself always use capital let-
ters for variables and small letters for their levels.

software and save the data (i) in the native file format of that application (to
preserve colors and other formattings you may have added) and (ii) into a
tab-delimited text file, which is often smaller and easier to import into other
applications (such as R).
One important aspect to note is that data sets are often not complete.
Sometimes, you can’t annotate a particular corpus line, or a subject does
not provide a response to a stimulus. Such ‘data points’ are not simply
omitted, but are entered into the spreadsheet as so-called missing data with
the code “NA” in order to (i) preserve the formal integrity of the data set
(i.e., have all rows and columns contain the same number of elements) and
(ii) be able to do follow-up studies on the missing data to see whether there
is a pattern in the data points which needs to be accounted for.

Table 9. A much better coding of the data in Table 6


CASE CONSTRUCTION LENGTH
1 VPO 2
2 VPO 2
3 VOP 2
4 VOP 2
5 VOP 2
6 VOP 2
7 VPO 3
8 VPO 3
9 VOP 3
10 VOP 3
11 VOP 3
… … …
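
If you already want to see what such a table looks like in R (which will only be
introduced in the next chapter), the following lines sketch how the first six rows
of Table 9 could be entered as a data frame and saved as a tab-delimited text file;
the file name is of course made up, and a data point that could not be coded
would simply be entered as NA in its LENGTH cell:

> x <- data.frame(CASE=1:6, CONSTRUCTION=c("VPO", "VPO", "VOP", "VOP", "VOP", "VOP"), LENGTH=c(2, 2, 2, 2, 2, 2))
> write.table(x, file="particle_placement.txt", sep="\t", quote=FALSE, row.names=FALSE)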

All these steps having to do with the data collection must be described
in the methods part of your written version: what is the population to which
you wanted to generalize, how did you draw your (ideally) representative
and balanced sample, which variables did you collect data for, etc.

3.4. The decision

When the data have been stored in a format that corresponds to that of Ta-
ble 8/Table 9, you can finally do what you wanted to do all along: evaluate


the data. As a result of that evaluation you will obtain frequencies, means,
or correlation coefficients. However, one central aspect of this evaluation is
that you actually do not simply try to show that your alternative hypothesis
is correct – contrary to what you might expect you try to show that the
statistical version of the null hypothesis is wrong, and since the null hypo-
thesis is the logical counterpart to the alternative hypothesis, this supports
your own alternative hypothesis. The obvious question now is, why this
‘detour’? The answer to this question can be approached by considering the
following two questions:

− how many subjects and objects do you maximally have to study to show
that the alternative hypothesis “subjects are shorter than direct objects”
is correct?
− how many subjects and objects do you minimally have to study to show
that the null hypothesis “subjects are as long as direct objects” is incor-
rect?

THINK
BREAK

You probably figured out quickly that the answer to the first question is
“infinitely many.” Strictly speaking, you can only be sure that the alterna-
tive hypothesis is correct if you have studied all subjects and direct objects
and found not a single counterexample. The answer to the second question
is “one each” because if the first subject is longer or shorter than the first
object, we know that, strictly speaking, the null hypothesis is not correct.
However, especially in the humanities and in the social sciences you do not
usually reject a hypothesis on the basis of just one counterexample. Rather,
you evaluate the data in your sample and then determine whether your null
hypothesis H0 and the empirical result are sufficiently incompatible to re-
ject H0 and thus accept H1. More specifically, you assume the null hypothe-
sis is true and compute the probability p that you would get your observed
result or all other results that deviate from the null hypothesis even more
strongly. When that probability p is equal to or larger than 5%, then you
stick to the null hypothesis because, so to speak, the result is still too com-
patible with the null hypothesis to reject it and accept the alternative hypo-
thesis. If, however, that probability p is smaller than 5%, then you can re-
ject the null hypothesis and adopt the alternative hypothesis.

For example, if in your sample subjects and direct objects are on aver-
age 4.2 and 5.6 syllables long, then you compute the probability p to find
this difference of 1.4 syllables or an even larger difference when you in fact
don’t expect any such difference (because that is what the null hypothesis
predicts). Then, there are two possibilities:

− if this probability p is smaller than 5% (or 0.05, as this is usually writ-


ten), then you can reject the null hypothesis that there is no difference
between subjects and direct objects in the population. In the results sec-
tion of your paper, you can then write that you found a significant dif-
ference between the means in your sample, and in the discussion section
of your paper you would discuss what kinds of implications this has,
etc.
− if this probability p is equal to or larger than 5%, then you cannot reject
the null hypothesis that there is no difference between subjects and di-
rect objects in the population. In the results section of your paper, you
must then admit that you have not found a significant difference be-
tween the means in your sample. In the discussion part of your paper,
you should then discuss the implications of this finding as well as spe-
culate or reason about why there was no significant difference – there
may have been outliers in the corpus data or in the experiment (because
subjects reacted strangely to particular stimuli, coding errors, etc.). (Out-
liers are values in the sample that are rather untypical given the rest of
the sample.)

Two aspects of this logic are very important: First, the fact that an effect
is significant does not necessarily mean that it is an important effect despite
what the everyday meaning of significant might suggest. The word signifi-
cant is used in a technical sense here, meaning the effect is large enough
for us to assume that, given the size of the sample(s), it is probably not
random. Second, just because you accept an alternative hypothesis given a
significant result, that does not mean that you have proven the alternative
hypothesis. This is because there is still the probability p that the observed
result has come about although the null hypothesis is correct – p is small
enough to warrant accepting the alternative hypothesis, but not to prove it.
This line of reasoning may appear a bit confusing at first especially
since we suddenly talk about two different probabilities. One is the proba-
bility of 5% (to which the other probability is compared); that other proba-
bility is the probability to obtain the observed result when the null hypothe-
sis is correct.

Warning/advice
You must never change your hypotheses after you have obtained your re-
sults and then sell your study as successful support of the ‘new’ alternative
hypothesis. Also, you must never explore a data set – the nicer way to say
‘fish for something useable’ – and, when you then find something signifi-
cant, sell this result as a successful test of a previously formulated alterna-
tive hypothesis. You may of course explore a data set in search of patterns
and hypotheses, but if a data set generates a hypothesis, you must test that
hypothesis on the basis of different data.

The former is the so-called level of significance, which defines a thre-


shold level. If that threshold is exceeded, you must stick to the null hypo-
thesis. This value is defined before data are obtained (and usually set to
5%). The latter probability is the so-called p-value or probability of error
and is computed on the basis of the data. Why is this probability called
probability of error? It is because it is the probability to err when you ac-
cept the alternative hypothesis given the observed data. Sometimes, you
will find that people use different wordings for different p-values:

− p < 0.001 is sometimes referred to as highly significant and indicated


with ***;
− 0.001 ≤ p < 0.01 is sometimes referred to as very significant and indi-
cated with **;
− 0.01 ≤ p < 0.05 is sometimes referred to as significant and indicated
with *;
− 0.05 ≤ p < 0.1 is sometimes referred to as marginally significant and
indicated with ms or a period but since such p-values are larger than the
usual standard of 5%, calling such results marginally significant
amounts, polemically speaking at least, to saying “Look, I didn’t really
get the significant results I was hoping for, but they are still pretty nice,
don’t you agree?”, which is why I typically discourage the use of this
expression.

But while we have seen above how this comparison of the two probabil-
ities contributes to the decision in favor of or against the alternative hypo-
thesis, it is still unclear how this p-value is computed.

3.4.1. Overview: discrete probability distributions

Let’s assume you and I decided to toss a coin 100 times. If we get heads, I
get one dollar from you – if we get tails, you get one dollar from me. Be-
fore this game, you formulate the following hypotheses:

Text H0: Stefan does not cheat: the probability for heads and tails is
50% vs. 50%.
Text H1: Stefan cheats: the probability for heads is larger than 50%.

This scenario can be easily operationalized using frequencies:

Statistical H0: Stefan will win just as often as I will, namely 50 times.
Statistical H1: Stefan will win more often than I, namely more than 50
times.

Now my question: when we play the game and toss the coin 100 times,
after which result will you suspect that I cheated?

− when you lost 51 times (probably not …)?


− when you lost 55 times? when you lost 60 times? (maybe …)?
− when you lost 80 times or even more often? (most likely …)?

THINK
BREAK

Maybe without realizing it, you are currently doing some kind of signi-
ficance test. Let’s assume you lost 60 times. Since the expectation from the
null hypothesis was that you would lose only 50 times, you lost more often
than you thought you would. Let’s finally assume that, given the above
explanation, you decide to only accuse me of cheating when the probability
p to lose 60 or even more times in 100 tosses is smaller than 5%. Why “or
even more times”? Well, above we said

you expect the null hypothesis to be true and compute the


probability p to get the observed result or all other results
that deviate from the null hypothesis even more strongly.

Thus, you must ask yourself how and how much does the observed re-
sult deviate from the result expected from the null hypothesis. Obviously,
the number of losses is larger: 60 > 50. Thus, the results that deviate from
the null hypothesis that much or even more in the predicted direction are
those where you lose 60 times or more often: 60 times, 61 times, 62 times,
…, 99 times, and 100 times. In a more technical parlance, you set the signi-
ficance level to 5% (0.05) and ask yourself “how likely is it that Stefan did
not cheat but still won 60 times although he should only have won 50
times?” This is exactly the logic of significance testing.
It is possible to show that the probability p to lose 60 times or more just
by chance – i.e., without me cheating – is 0.02844397, i.e., 2.8%. Since this
p-value is smaller than 0.05 (or 5%), you can now accuse me of cheating. If
we had been good friends, however, so that you would not have wanted to
risk our friendship by accusing me of cheating prematurely and had set the
significance level to 1%, then you would not be able to accuse me of cheat-
ing, since 0.02844397 > 0.01.
This example has hopefully clarified the overall logic even further, but
what is probably still unclear is how this p-value is computed. To illustrate
that, let us reduce the example from 100 coin tosses to the more managea-
ble amount of three coin tosses. In Table 10, you find all possible results of
three coin tosses and their probabilities provided that the null hypothesis is
correct and the chance for heads/tails on every toss is 50%.

Table 10. All possible results of three coin tosses and their probabilities (when H0
is correct)

Toss 1 Toss 2 Toss 3 # heads # tails p(result)


heads heads heads 3 0 0.125
heads heads tails 2 1 0.125
heads tails heads 2 1 0.125
heads tails tails 1 2 0.125
tails heads heads 2 1 0.125
tails heads tails 1 2 0.125
tails tails heads 1 2 0.125
tails tails tails 0 3 0.125

More specifically, the three left columns represent the possible results,
column 4 and column 5 show how many heads and tails are obtained in
each of the eight possible results, and the rightmost column lists the proba-
bility of each possible result. As you can see, these are all the same, 0.125.
Why is that so?

Two easy ways to explain this are conceivable, and both of them require
you to understand the crucial concept of independence. The first one in-
volves understanding that the probability of heads and tails is the same on
every trial and that all trials are independent of each other. This notion of
independence is a very important one: trials are independent of each other
when the outcome of one trial (here, one toss) does not influence the out-
come of any other trial (i.e., any other toss). Similarly, samples are inde-
pendent of each other when there is no meaningful way in which you can
match values from one sample onto values from another sample. For ex-
ample, if you randomly sample 100 transitive clauses out of a corpus and
count their subjects’ lengths in syllables, and then you randomly sample
100 different transitive clauses from the same corpus and count their direct
objects’ lengths in syllables, then the two samples – the 100 subject lengths
and the 100 object lengths – are independent. If, on the other hand, you
randomly sample 100 transitive clauses out of a corpus and count the
lengths of the subjects and the objects in syllables, then the two samples –
the 100 subject lengths and the 100 object lengths – are dependent because
you can match up the 100 subject lengths onto the 100 object lengths per-
fectly by aligning each subject with the object from the very same clause.
Similarly, if you perform an experiment twice with the same subjects, then
the two samples made up by the first and the second experimental results
are dependent, because you match up each subject’s data point in the first
experiment with the same subject’s data point in the second experiment.
This distinction will become very important later on.
Returning to the three coin tosses: since there are eight different out-
comes of three tosses that are all independent of each other and, thus,
equally probable, the probability of each of the eight outcomes is 1/8 =
0.125.
The second way to understand Table 10 involves computing the proba-
bility of each of the eight events separately. For the first row that means the
following: the probability to get heads in the first toss, in the second, in the
third toss is always 0.5. Since the tosses are independent of each other, you
obtain the probability to get heads three times in a row by multiplying the
individual events’ probabilities: 0.5·0.5·0.5 = 0.125 (the multiplication rule
in probability theory). Analogous computations for every row show that the
probability of each result is 0.125. According to the same logic, we can
show the null hypothesis predicts that each of us should win 1.5 times,
which is a number that has only academic value since you cannot win half
a time.
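
Once R is installed (it is only introduced in the next chapter), you can verify
Table 10 yourself in a few lines; expand.grid simply lists all combinations of its
arguments (in a different row order than Table 10), and the two other lines apply
the logic just described:

> tosses <- expand.grid(TOSS1=c("heads", "tails"), TOSS2=c("heads", "tails"), TOSS3=c("heads", "tails"))
> tosses$HEADS <- rowSums(tosses=="heads")   # the number of heads in each of the eight results
> tosses$PROB <- 0.5*0.5*0.5                 # the multiplication rule: 0.125 for every result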

Now imagine you lost two out of three times. If you had again set the
level of significance to 5%, could you accuse me of cheating?

THINK
BREAK

No way. Let me first ask again which events need to be considered. You
need to consider the observed result – that you lost two times – and you
need to consider the result(s) that deviate(s) even more from the null hypo-
thesis and the observed result in the predicted direction. This is easy here:
the only such result is that you lose all three times. Let us compute the sum
of the probabilities of these events.
As you can see in column 4, there are three results in which you lose
two times in three tosses: H H T (row 2), H T H (row 3), and T H H (row
5). Thus, the probability to lose exactly two times is 0.125+0.125+0.125 =
0.375, and that is already much much more than your level of significance
0.05 allows. However, to that you still have to add the probability of the
event that deviates even more from the null hypothesis, which is another
0.125. If you add this all up, the probability p to lose two or more times in
three tosses when the null hypothesis is true is 0.5. This is ten times as
much as the level of significance so there is no way that you can accuse me
of cheating.
This logic can also be represented graphically very well. In Figure 6, the
summed probabilities for all possible numbers of heads are represented as
bars. The bars for two heads – the observed result – and for three heads –
the even more extreme deviation from the null hypothesis in this direction
– are shown in black, and their lengths indicate the probabilities of these
outcomes. Visually speaking, you move from the expectation of the null
hypothesis away to the observed result (at x = 2) and add the length of that
bar to the lengths of all bars you encounter if you move on from there in
the same direction, which here is only one bar at x = 3.
This actually also shows that you, given your level of significance, can-
not even accuse me of cheating when you lose all three times because this
result already comes about with a probability of p = 0.125, as you can see
from the length of the rightmost bar.

Figure 6. All possible results of three coin tosses and their probabilities (when H0
is correct, one-tailed)

The final important aspect that needs to be mentioned now involves the
kind of alternative hypothesis. So far, we have always been concerned with
directional alternative hypotheses: the alternative hypothesis was “Stefan
cheats: the probability for heads is larger than 50% [and not just different
from 50%].” The kind of significance test we discussed is correspondingly
called a one-tailed test because we are only interested in one direction in
which the observed result deviates from the expected result. Again visually
speaking, when you summed up the bar lengths in Figure 6 you only
moved from the null hypothesis expectation in one direction. This is impor-
tant because the decision for or against the alternative hypothesis is based
on the cumulative lengths of the bars of the observed result and the more
extreme ones in that direction.
However, often you only have a non-directional alternative hypothesis.
In such cases, you have to look at both ways in which results may deviate
from the expected result. Let us return to the scenario where you and I toss
a coin three times, but this time we also have an impartial observer who has
no reason to suspect cheating on either part. He therefore formulates the
following hypotheses (with a significance level of 0.05):

Statistical H0: Stefan will win just as often as the other player (or “Both
players will win equally often”).
Statistical H1: Stefan will win more or less often than the other player (or
“The players will not win equally often”).

Imagine now you lost three times. The observer now asks himself
whether one of us should be accused of cheating. As before, he needs to
determine which events to consider. First, he has to consider the observed
result that you lost three times, which arises with a probability of 0.125.
But then he also has to consider the probabilities of other events that de-
viate from the null hypothesis just as much or even more. With a direction-
al alternative hypothesis, you moved from the null hypothesis only in one
direction – but this time there is no directional hypothesis so the observer
must also look for deviations just as large or even larger in the other direc-
tion of the null hypothesis expectation. For that reason – both tails of the
distribution in Figure 6 must be observed – such tests are considered two-
tailed tests. As you can see in Table 10 or Figure 6, there is another devia-
tion from the null hypothesis that is just as extreme, namely that I lose three
times. Since the observer only has a non-directional hypothesis, he has to
include the probability of that event, too, arriving at a cumulative probabili-
ty of 0.125+0.125 = 0.25. This logic is graphically represented in Figure 7.

Figure 7. All possible results of three coin tosses and their probabilities (when H0
is correct, two-tailed)

Note that when you tested your directional alternative hypothesis, you
looked at the result ‘you lost three times’, but when the impartial observer
tested his non-directional alternative hypothesis, he looked at the result
‘somebody lost three times.’ This has one very important consequence:
when you have prior knowledge about a phenomenon that allows you to
formulate a directional, and not just a non-directional, alternative hypothe-
sis, then the result you need for a significant finding can be less extreme

than if you only have a non-directional alternative hypothesis. In most cas-


es, it will be like here: the p-value you get for a result with a directional
alternative hypothesis is half of the p-value you get for a result with a non-
directional alternative hypothesis. Prior knowledge is rewarded, which will
be illustrated once more now.
With statistical software such as R these kinds of computations can easi-
ly be done for more than three tosses, which is why we can now return to
the example game involving 100 tosses. Again, we look at the situation
through your eyes (directional alternative hypothesis) and through those of
an impartial observer (non-directional alternative hypothesis), but this time
you and the observer try to determine before the game which results are so
extreme that one will be allowed to adopt the alternative hypothesis. We
begin with your perspective: In Figure 8, you find the by now familiar
graph for 100 tosses with the expected frequency for heads of 50. (The
meaning of the black lines will be explained presently.)

Figure 8. All possible results of 100 coin tosses and their probabilities (when H0
is correct, one-tailed H1)

Above, we had an empirical result whose p-value we were interested in,


and in order to get that p-value, we moved from the expected null hypothe-
sis results to the extreme values. Now we want to determine, but not ex-
ceed, a p-value before we have results and have to proceed the other way
round: from an extreme point to the null hypothesis expectation. For exam-
ple, to determine how many times you can lose without getting a cumula-
tive probability exceeding 0.05, you begin at the most extreme result on the
right – that you lose 100 times – and begin to add the lengths of the bars.

(Of course, you would compute that and not literally measure lengths.) The
probability that you lose all 100 tosses is 7.8886·10⁻³¹. To that you add the
probability that you lose 99 out of 100 times, the probability that you lose
98 out of 100 times, etc. When you have added all probabilities until 59
times heads, then the sum of all these probabilities reaches 0.0443; all these
are represented in black in Figure 8. Since the probability to get 58 heads
out of 100 tosses amounts to 0.0223, you cannot add this event’s probabili-
ty to the others anymore without exceeding the level of significance value
of 0.05. Put differently, if you don’t want to cut off more than 5% of the
summed bar lengths, then you must stop adding probabilities at x = 59. You
conclude: if Stefan wins 59 times or more often, then I will accuse him of
cheating, because the probability of that happening is the largest one that is
still smaller than 0.05.
Now consider the perspective of the observer in Figure 9, which is very
similar, but not completely identical to Figure 8. The observer also begins
with the most extreme result, that I get heads every time: p(100 heads) ≈
7.8886·10⁻³¹. But since the observer only has a non-directional alternative
hypothesis, he must also include the probability of the opposite, equally
extreme result that you get tails all the time. For each additional number of
heads – 99, 98, etc. – the observer must now also add the corresponding
opposite results – 1, 2, etc. Once the observer has added the probabilities
61 times heads / 39 times tails and 39 times heads / 61 times tails, then the
cumulative sum of the probabilities reaches 0.0352 (cf. the black bars in
Figure 9). Since the joint probability for the next two events – 60 heads / 40
tails and 40 heads / 60 tails – is 0.0217, the observer cannot add any further
results without exceeding the level of significance of 0.05. Put differently,
if the observer doesn’t want to cut off more than 5% of the summed bar
lengths on both sides, then he must stop adding probabilities by going from
right to the left at x = 61 and stop going from the left to right at x = 39. He
concludes: if Stefan or his opponent wins 61 times or more often, then
someone is cheating (most likely the person who wins more often).
Again, observe that in the same situation the person with the directional
alternative hypothesis needs a less extreme result to be able to accept it
than the person with a non-directional alternative hypothesis: with the same
level of significance, you can already accuse me of cheating when you lose
59 times (only 9 times more often than the expected result) – the impartial
observer needs to see someone lose 61 times (11 times more often than the
expected result) before he can start accusing someone. Put differently, if
you lose 60 times, you can accuse me of cheating, but the observer cannot.
This difference is very important and we will use it often.

Figure 9. All possible results of 100 coin tosses and their probabilities (when H0
is correct, two-tailed H1)

While reading the last few pages, you probably sometimes wondered
where the probabilities of events come from: How do we know that the
probability to get heads 100 times in 100 tosses is 7.8886·10⁻³¹? These val-
ues were computed with R on the basis of the so-called binomial distribu-
tion. You can easily compute the probability that one out of two possible
events occurs x out of s times when the event’s probability is p in R with
the function dbinom. The arguments of this function we deal with here are:

− x: the frequency of the event (e.g., three times heads);


− s: the number of trials in which the event could occur (e.g., three
tosses);
− p: the probability of the event in each trial (e.g., 50%).

You know that the probability to get three heads in three tosses when
the probability of head is 50% is 12.5%. In R:7

> dbinom(3,—3,—0.5)¶
[1]—0.125

As a matter of fact, you can compute the probabilities of all four possi-

7. I will explain how to install R etc. in the next chapter. It doesn’t really matter if you
haven’t installed R and/or can’t enter or understand the above input yet. We’ll come
back to this …

ble numbers of heads – 0, 1, 2, and 3 – in one line (because, as we will see


below, sequences of integers can be defined with a colon):

> dbinom(0:3,—3,—0.5)¶
[1]—0.125—0.375—0.375—0.125

In a similar fashion, you can also compute the probability that heads
will occur two or three times by summing up the relevant probabilities:
> sum(dbinom(2:3,—3,—0.5))¶
[1]—0.5

Now you do the same for the probability to get 100 heads in 100 tosses,

> dbinom(100,—100,—0.5)¶
[1]—7.888609e-31

the probability to get heads 58 or more times in 100 tosses (which is larger
than 5% and does not allow you to accept a one-tailed directional alterna-
tive hypothesis),

> sum(dbinom(58:100,—100,—0.5))¶
[1]—0.06660531

the probability to get heads 59 or more times in 100 tosses (which is small-
er than 5% and does allow you to accept a one-tailed directional alternative
hypothesis):

> sum(dbinom(59:100,—100,—0.5))¶
[1]—0.04431304
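
Two brief additions: the value of 0.02844397 from the coin-game example above
is obtained in exactly the same way, and the function pbinom, the summing
counterpart of dbinom, returns such cumulative probabilities directly (with
lower.tail=FALSE, pbinom returns the probability of more than x occurrences):

> sum(dbinom(60:100, 100, 0.5))   # the probability to lose 60 or more times out of 100
[1] 0.02844397
> pbinom(58, 100, 0.5, lower.tail=FALSE)   # the probability of more than 58, i.e. 59 or more, heads
[1] 0.04431304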

For two-tailed tests, you can do the same, e.g., compute the probability
to get heads 39 times or less often, or 61 times and more often (which is
smaller than 5% and allows you to accept a two-tailed non-directional al-
ternative hypothesis):

> sum(dbinom(c(0:39,—61:100),—100,—0.5))¶
[1]—0.0352002
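
As a quick check of the observer's reasoning from above, you can also verify
that including the next two results (60 heads / 40 tails and 40 heads / 60 tails)
would push the cumulative probability beyond the 5% level:

> sum(dbinom(c(0:40, 60:100), 100, 0.5))   # approximately 0.0569, i.e. larger than 0.05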

If we want to proceed the other way round in, say, the one-tailed case,
then we can use the function qbinom to determine, for a given probability q,
the number of occurrences of an event (or successes) in s trials, when the

probability of the event in each trial is p. The arguments of this function


are:

− p: the probability for which we want the frequency of the event (e.g.,
12.51%);
− size: the number of trials in which the event could occur (e.g., three
tosses);
− prob: the probability of the event in each trial (e.g., 50%);
− lower.tail=TRUE, if we consider the probabilities from 0 to p (i.e.,
probabilities from the lower/left tail of the distribution), or
lower.tail=FALSE if we consider the probabilities from p to 1 (i.e., proba-
bilities from the upper/right tail of the distribution); note, you can ab-
breviate TRUE and FALSE as T and F respectively (but cf. below).

Compare Figure 10 and the following code to the results discussed in


the text above. On the x-axis, you find the numbers of heads (for three and
100 tosses in the left and the right panel respectively) and on the y-axis the
cumulative probability of all possible numbers of heads included on the x-
axis (i.e., 0.125, 0.5, 0.875, and 1 in the left panel).

> qbinom(0.1251,—3,—0.5,—lower.tail=FALSE)¶
[1]—2
> qbinom(0.1249,—3,—0.5,—lower.tail=FALSE)¶
[1]—3
> qbinom(0.05,—100,—0.5,—lower.tail=FALSE)¶
[1]—58
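
For the two-tailed perspective of the impartial observer, the same logic can be
applied to each tail separately (2.5% on the left and 2.5% on the right); the
results match the critical values discussed above:

> qbinom(0.025, 100, 0.5, lower.tail=FALSE)   # more than 60, i.e. 61 or more, heads cut off at most 2.5% on the right
[1] 60
> qbinom(0.025, 100, 0.5, lower.tail=TRUE)    # 40 is the first number of heads no longer in the lower 2.5% region, i.e. 39 or fewer cut off at most 2.5% on the left
[1] 40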

Figure 10. The probabilities for all possible results of three tosses (left panel) or
100 tosses (right panel); the dotted line is at 0.05

3.4.2. Extension: continuous probability distributions

In the above examples, we always had only one variable with two levels.
Unfortunately, life is usually not that easy. On the one hand, we have seen
above that our categorical variables will often involve more than two le-
vels. On the other hand, if the variable in question is ratio-scaled, then the
computation of the probabilities of all possible states or levels is not possi-
ble. For example, you cannot compute the probabilities of all possible reac-
tion times to a stimulus. For this reason, many statistical techniques do not
compute an exact p-value as we did, but are based on the fact that, as the
sample size increases, the probability distributions of events begin to ap-
proximate those of mathematical distributions whose functions/equations
and properties are very well known. For example, the curve in Figure 9 for
the 100 coin tosses is very close to that of a bell-shaped normal distribu-
tion. In other words, in such cases the p-values are estimated on the basis of
these equations, and such tests are called parametric tests. Four such distri-
butions will be important for the discussion of tests in Chapters 4 and 5:

− the standard normal distribution with z-scores (norm);


− the t-distribution (t);
− the F-distribution (f);
− the chi-square- / χ2-distribution (chisq).

For each of these distributions, there is a function whose name begins


with q and ends with the above function name (i.e. qnorm, qt, qf, qchisq),
and a function whose name begins with p and ends with the above function
name (i.e. pnorm, pt, pf, pchisq). By analogy to our adding up the lengths
of the bars that make up the curve of the binomial distribution, the func-
tions beginning with q compute how much in percent of the area under the
curve (the whole of which is defined as 1) a particular z-/t-/F-/χ2-value
defines. For example, in the very similar-looking binomial distribution in
Figure 8 above we added probabilities from the right hand side moving to
the left in order to test how often you can lose without being able to adopt
the alternative hypothesis, using the function qbinom. The continuous prob-
ability functions can be used rather similarly. If we want to perform a one-
tailed test on the basis of the standard normal distribution – the bell-shaped
normal distribution with a mean of 0 and a standard deviation of 1; I will
explain these concepts in detail below – then we can determine which value
on the x-axis cuts off 5% of the left or the right part of the curve:

> qnorm(0.05,—lower.tail=TRUE)—#—one-tailed—test,—left—panel¶
[1]—-1.644854
> qnorm(0.95,—lower.tail=TRUE)—#—one-tailed—test,—right—panel¶
[1]—1.644854

That means, the grey area under the curve in the left panel of Figure 11
in the range -∞ ≤ x ≤ -1.644854 corresponds to 5% of the total area under
the curve. Since the standard normal distribution is symmetric, the same is
true of the grey area under the curve in the right panel in the range
1.644854 ≤ x ≤ ∞.

Figure 11. Density function of the standard normal distribution for p(one-tailed) = 0.05

This corresponds to a one-tailed test since you only look at one side of
the curve, and if you were to get a value of -1.7 for such a one-tailed test,
then that would be a significant result. For a corresponding two-tailed test
at the same significance level, you would have to consider both areas under
the curve (as in Figure 9) and consider 2.5% on each edge to arrive at 5%
altogether. To get the x-axis values that jointly cut off 5% under the curve,
this is what you could enter into R; code lines two and three are different
ways to compute the same thing (cf. Figure 12):

> qnorm(0.025,—lower.tail=TRUE)—#—two-tailed—test,—left—shaded—area:—-∞≤x≤-1.96¶
[1]—-1.959964
> qnorm(0.975,—lower.tail=TRUE)—#—two-tailed—test,—right—shaded—area:—1.96≤x≤∞¶
[1]—1.959964
> qnorm(0.025,—lower.tail=FALSE)—#—two-tailed—test,—right—shaded—area:—1.96≤x≤∞¶
[1]—1.959964

Figure 12. Density function of the standard normal distribution for p(two-tailed) = 0.05

Again, you see that with non-directional two-tailed tests you need a
more extreme result for a significant outcome: -1.7 would not be enough.
In sum, with the q-functions we determine the minimum one- or two-tailed
statistic that we must get to obtain a particular p-value. For one-tailed tests,
you typically use p = 0.05; for two-tailed tests you typically use p = 0.05/2 =
0.025 on each side.
The functions whose names start with p do the opposite of those begin-
ning with q: with them, you can determine which p-value our statistic cor-
responds to. The following two rows show how you get p-values for one-
tailed tests (cf. Figure 11):

> pnorm(-1.644854,—lower.tail=TRUE)—#—one-tailed—test,—left—panel¶
[1]—0.04999996
> pnorm(1.644854,—lower.tail=TRUE)—#—one-tailed—test,—right—panel¶
[1]—0.95

For the two-tailed test, you of course must multiply the probability by
two because whatever area under the curve you get, you must consider it on
both sides of the curve. For example (cf. again Figure 12):

> 2*pnorm(-1.959964,—lower.tail=TRUE)—#—two-tailed—test¶
[1]—0.05

The following confirms what we said above about the value of -1.7: that
value is significant in a one-tailed test, but not in a two-tailed test:

> pnorm(-1.7,—lower.tail=TRUE)—#—significant,—since—<0.05¶
[1]—0.04456546
> 2*pnorm(-1.7,—lower.tail=TRUE)—#—not—significant,—since—>0.05¶
[1]—0.08913093

The other p/q-functions work in the same way, but will require some
additional information, namely so-called degrees of freedom. I will not
explain this notion here in any detail but instead cite Crawley’s (2002: 94)
rule of thumb: “[d]egrees of freedom [df] is the sample size, n, minus the
number of parameters, p, estimated from the data.” For example, if you
compute the mean of four values, then df = 3 because when you want to
make sure you get a particular mean out of four values, then you can
choose three values freely, but the fourth one is then set. If you want to get
a mean of 8, then the first three values can vary freely and be 1, 1, and 1,
but then the last one must be 29. Degrees of freedom are the way in which
sample sizes and the amount of information you squeeze out of a sample
are integrated into the significance test.
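
A quick arithmetic check of this example once you have R available:

> 4*8 - sum(c(1, 1, 1))   # with a required mean of 8, the fourth value is forced
[1] 29
> mean(c(1, 1, 1, 29))    # and the mean is indeed 8
[1] 8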
The parametric tests that are based on the above distributions are usual-
ly a little easier to compute (although this is usually not an important point
anymore, given the computing power of current desktop computers) and
more powerful, but they have one potential problem. Since they are only
estimates of the real p-value based on the equations defining z-/t-/F-/χ2-
values, their accuracy is dependent on how well these equations reflect the
distribution of the data. In the above example, the binomial distribution in
Figure 9 and the normal distribution in Figure 12 are extremely similar, but
this may be very different on other occasions. Thus, parametric tests make
distributional assumptions – the most common one is in fact that of a nor-
mal distribution – and you can use such tests only if the data you have meet
these assumptions. If they don’t, then you must use a so-called non-
parametric test or use a permutation test or other resampling methods. For
nearly all tests introduced in Chapters 4 and 5 below, I will list the assump-
tions which you have to test before you can apply the test, explain the test
itself with the computation of a p-value, and illustrate how you would
summarize the result in the third (results) part of the written version of your
study. I can already tell you that you should always provide the sample
sizes, the obtained effect (such as the mean, the percentage, the difference
between means, etc.), the name of the test you used, its statistical parame-
ters, the p-value, and your decision (in favor of or against the alternative
hypothesis). The interpretation of these findings will then be discussed in
the fourth and final section of your study.
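To illustrate the approximation issue numerically, here is a minimal sketch
in R (not part of the book’s code files; the numbers 14 and 16 are just a
hypothetical result of 14 ‘successes’ in 16 binary trials): the exact two-
tailed binomial probability and its normal approximation are of the same
order of magnitude, but not identical, which is exactly why the quality of
the approximation matters.

> sum(dbinom(14:16,—16,—0.5))*2—#—exact—two-tailed—binomial—probability¶
[1]—0.004180908
> 2*pnorm((14-8)/sqrt(16*0.5*0.5),—lower.tail=FALSE)—#—normal—approximation¶
[1]—0.002699796
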

Recommendation(s) for further study


Good and Hardin (2006: Ch. 1 and 2) for many interesting and practically
relevant tips as well as Good and Hardin (2006: Ch. 8) on information you
should provide in your methods and results sections

Warning/advice
Do not give in to the temptation to use a parametric test when its assump-
tions are not met. What have you gained when you do the wrong test and
maybe publish wrong results and get cited because of the methodological
problems of your study?

4. The design of an experiment: introduction

In this section, we will deal with a few fundamental rules for the design of
experiments.8 Probably the most central notion in this section is the token
set (cf. Cowart 1997). I will distinguish two kinds of token sets, schematic
token sets and concrete token sets. A schematic token set is typically a ta-
bular representation of all experimental conditions. To explain this more
clearly, let us return to the above example of particle placement.
Let us assume you do want to investigate particle placement not only on
the basis of corpus data, but also on the basis of experimental data. For
instance, you might want to determine how native speakers of English rate
the acceptability of sentences (the dependent variable ACCEPTABILITY) that
differ with regard to the constructional choice (the first independent varia-
ble CONSTRUCTION: VPO vs. VOP) and the part of speech of the head of the
direct object (the second independent variable OBJPOS: PRONOMINAL vs.
LEXICAL).9 Since each of the two independent variables has two levels,
there are 2·2 = 4 experimental conditions. This set of experimental
conditions is the schematic token set, which is represented in two different
forms in Table 11 and Table 12. The participants/subjects of course never
get to see the schematic token set. For the actual experiment, you must
develop concrete stimuli – a concrete token set that realizes the variable
level combinations of the schematic token set.

8. I will only consider the simplest and most conservative kind of experimental design,
factorial designs, where every variable level is combined with every other variable lev-
el.
9. For expository reasons, I only assume two levels of OBJPOS.

Table 11. Schematic token set for CONSTRUCTION × OBJPOS 1


OBJPOS: PRONOMINAL OBJPOS: LEXICAL
CONSTRUCTION: VPO V Part pron. NPdir. obj. V Part lexical NPdir. obj.
CONSTRUCTION: VOP V pron. NPdir. obj. Part V lexical NPdir. obj. Part

Table 12. Schematic token set for CONSTRUCTION × OBJPOS 2


Experimental condition CONSTRUCTION OBJPOS
1 VPO PRONOMINAL
2 VPO LEXICAL
3 VOP PRONOMINAL
4 VOP LEXICAL
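As an aside, such a schematic token set can also be generated in R with the
function expand.grid, which returns all combinations of the variable levels
you provide; this is only a minimal sketch and not part of the book’s code
files:

> expand.grid(CONSTRUCTION=c("VPO",—"VOP"),—
OBJPOS=c("PRONOMINAL",—"LEXICAL"))—#—returns—the—four—conditions¶
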

However, both the construction of such concrete token sets and the ac-
tual presentations of the concrete stimuli are governed by a variety of rules
that aim at minimizing undesired sources of noise in the data. Three such
sources are particularly important:

− knowledge of what the experiment is about: you must make sure that the
participants in the experiment do not know what is being investigated
before or while they participate (after the experiment you should of
course tell them). This is important because otherwise the participants
might make their responses socially more desirable or change the res-
ponses to ‘help’ the experimenter.
− undesirable experimental effects: you must make sure that the responses
of the subjects are not influenced by, say, habituation to particular vari-
able level combinations. This is important because in the domain of,
say, acceptability judgments, Nagata (1987, 1989) showed that such
judgments can change because of repeated exposure to stimuli and this
may not be what you’re interested in.
− evaluation of the results: you must make sure that the responses of the
subjects can be interpreted unambiguously. Even a large number of
willing and competent subjects is useless if your design does not allow
for an appropriate evaluation of the data.

In order to address all these issues, you have to take the rules in (4) to
(12) into consideration. Here’s the first one in (4):

(4) The stimuli of each individual concrete token set differ with regard
to the variable level combinations under investigation (and ideally
only with regard to these and nothing else).

Consider Table 13 for an example. In Table 13, the stimuli differ only
with respect to the two independent variables. If this was not the case (for
example, because the left column contained the stimuli John picked up it
and John brought it back) and you found a difference of acceptability be-
tween them, then you would not know what to attribute this difference to –
the different construction (which would be what this experiment is all
about), the different phrasal verb (that might be interesting, but is not what
is studied here), to an interaction of the two … The rule in (4) is therefore
concerned with the factor ‘evaluation of the results’.

Table 13. A concrete token set for CONSTRUCTION × OBJPOS 1


OBJPOS: PRONOMINAL OBJPOS: LEXICAL
CONSTRUCTION: VPO John picked up it. John picked up the keys.
CONSTRUCTION: VOP John picked it up. John picked the keys up.

When creating the concrete token sets, it is also important to control for
variables which you are not interested in and which make it difficult to
interpret the results with regard to the variables that you are interested in.
In the present case, for example, the choice of the verbs and the direct ob-
jects may be important. For instance, it is well known that particle place-
ment is also correlated with the concreteness of the referent of the direct
object. There are different ways to take such variables, or sources of varia-
tion, into account. One is to make sure that 50% of the objects are abstract
and 50% are concrete for each experimental condition in the schematic
token set (as if you introduced an additional independent variable). Another
one is to use only abstract or only concrete objects, which would of course
entail that whatever you find in your experiment, you could strictly speak-
ing only generalize to that class of objects.

Recommendation(s) for further study


Good and Hardin (2006: 38f.) on handling extraneous variables

(5) You must use more than one concrete token set, ideally as many
concrete token sets as there are variable level combinations (or a
multiple thereof).

One reason for the rule in (5) is that, if you only used the concrete token
set in Table 13, then a conservative point of view would be that you could
only generalize to other sentences with the transitive phrasal verb pick up
and the objects it and the keys, which would probably not be the most in-
teresting study ever. Thus, the first reason for (5) is again concerned with
the factor ‘evaluation of results’, and the remedy is to create different con-
crete token sets with different verbs and different objects such as those
shown in Table 14 and Table 15, which also must conform to the rule in
(4). For your experiment, you would now just need one more.
A second reason for the rule in (5) is that if you only used the concrete
token set in Table 13, then subjects would probably be able to guess the pur-
pose of the experiment right away: since our token set had to conform to
the rule in (4), the subject can identify the relevant variable level combina-
tions quickly because those are the only things according to which the sen-
tences differ. This immediately brings us to the next rule:

(6) Every subject receives maximally one item out of a concrete token
set.

Table 14. A concrete token set for CONSTRUCTION × OBJPOS 2


OBJPOS: PRONOMINAL OBJPOS: LEXICAL
CONSTRUCTION: VPO Mary brought back him. Mary brought back his dad.
CONSTRUCTION: VOP Mary brought him back. Mary brought his dad back.

Table 15. A concrete token set for CONSTRUCTION × OBJPOS 3


OBJPOS: PRONOMINAL OBJPOS: LEXICAL
CONSTRUCTION: VPO I eked out it. I eked out my living.
CONSTRUCTION: VOP I eked it out. I eked my living out.

As I just mentioned, if you do not follow the rule in (6), the subjects
might guess from the minimal variations within one concrete token set
what the whole experiment is about: the only difference between John
picked up it and John picked it up is the choice of construction. Thus, when
subject X gets to see the variable level combination (CONSTRUCTION: VPO
× OBJPOS: PRONOMINAL) in the form of John picked up it, then the other
experimental items of Table 13 must be given to other subjects. In that
regard, the rules in both (5) and (6) are (also) concerned with the factor
‘knowledge of what the experiment is about’.

(7) Every subject is presented every variable level combination.

The motivation for the rule in (7) lies in the factors ‘undesirable experi-
mental effects’ and ‘evaluation of the results’. First, if several experimental
items you present to a subject only instantiate one variable level combina-
tion, then habituation effects may distort the results. Second, if you present
one variable level combination to a subject very frequently and another one
only rarely, then whatever difference you find between these variable level
combinations may theoretically just be due to the different frequencies of
exposure and not due to the effects of the variable level combinations under
investigation.

(8) Every subject gets to see every variable level combination more
than once and equally frequently.
(9) Every experimental item is presented to more than one subject and
to equally many subjects.

These rules are motivated by the factor ‘evaluation of the results’. You
can see what their purpose is if you think about what happens when you try
to interpret a very unusual reaction by a subject to a stimulus. On the one
hand, that reaction could mean that the item itself is unusual in some re-
spect in the sense that every subject would react unusually – but you can’t
test that if that item is not also given to other subjects, and this is the reason
for the rule in (9). On the other hand, the unusual reaction could mean that
only this particular subject reacts unusually to that variable level combina-
tion in the sense that the same subject would react more ‘normally’ to other
items instantiating the same variable level combination – but you can’t test
that if that subject does not see other items with the same variable level
combination, and this is the reason for the rule in (8).

(10) The experimental items are interspersed with distractors / filler


items; there are minimally as many filler items as real experimental
items per subject, but ideally two or three times as many filler
items as real experimental items per subject.

The reason for this rule is obviously ‘knowledge of what the experiment
is about’: you do not want the subjects to be able to guess the purpose of
the experiment (or have them think they know the purpose of the experi-
ment) so that they cannot distort the results.10


An additional well-known factor that can distort results is the order in
which items and distractors are presented. To minimize such effects, you
must take into consideration the final two rules:

(11) The order of experimental and filler items is pseudorandomized.


(12) The order of experimental and filler items is pseudorandomized
differently for every subject.

The rule in (11) requires that the order of experimental items and filler
items is randomized using a random number generator, but it is not com-
pletely random – hence pseudorandomized – because the ordering resulting
from the randomization must usually be ‘corrected’ such that

− the first stimulus (e.g., the first question on a questionnaire) is not an


experimental item but a distractor;
− experimental items do not follow each other directly;
− experimental items exhibiting the same variable level combinations do
not follow each other, which means that, after John picked it up, the
next experimental item must not be Mary brought him back even if the
two are interrupted by distractors.

The rule in (12) means that the order of stimuli must vary pseudoran-
domly across subjects so that whatever you find cannot be attributed to
systematic order effects: every subject is exposed to a different order of
experimental items and distractors. Hence, both (11) and (12) are con-
cerned with ‘undesirable experimental effects’ and ‘evaluation of the re-
sults’.
Only after all these steps have been completed properly can you begin
to print out the questionnaires and have subjects participate in an experi-
ment. It probably goes without saying that you must carefully describe how
you set up your experimental design in the methods section of your study.
Since this is a rather complex procedure, we will go over it again in the
following section.

10. In many psychological studies, not even the person actually conducting the experiment
(in the sense of administering the treatment, handing out the questionnaires, …) knows
the purpose of the experiment. This is to make sure that the experimenter cannot provide
unconscious clues to desired or undesired responses. An alternative way to conduct such
so-called double-blind experiments is to use standardized instructions in the forms of
videotapes or have a computer program provide the instructions.

Warning/advice
You must be prepared for the fact that usually not all subjects answer all
questions, give all the acceptability judgments you ask for, show up for
both the first and the second test, etc. Thus, you should plan conservatively
and try to get more subjects than you thought you would need in the first
place. As mentioned above, you should still include these data in your table
and mark them with “NA”. Also, it is often very useful to carefully ex-
amine the missing data for whether their patterning reveals something of
interest (it would be very important if, say, 90% of the missing data exhi-
bited only one variable level combination or if 90% of the missing data
were contributed by only two out of, say, 60 subjects).
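If your data are already in R, such patterns are easy to inspect. A minimal
sketch (not part of the book’s code files; SUBJ, CONDITION, and JUDGMENT are
hypothetical vectors/factors from your raw data table):

> table(SUBJ[is.na(JUDGMENT)])—#—how—many—NAs—per—subject?¶
> table(CONDITION[is.na(JUDGMENT)])—#—how—many—NAs—per—condition?¶
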

One final remark about this before we look at another example. I know
from experience that the previous section can have a somewhat discourag-
ing effect. Especially beginners read this and think “how am I ever going to
be able to set up an experiment for my project if I have to do all this? (I
don’t even know my spreadsheet well enough yet …)” And it is true: I
myself still need a long time before a spreadsheet for an experiment of
mine looks the way it is supposed to. But if you do not go through what at
first sight looks like a terrible ordeal, your results might well be, well, let’s
face it, crap! Ask yourself what is more discouraging: spending maybe
several days on getting the spreadsheet right, or spending maybe several
weeks doing a simpler experiment and then having unusable results …

5. The design of an experiment: another example

Let us assume you want to investigate which variables determine how


many elements a quantifier such as some refers to; consider (13):

(13) a. [NP some balls [PP in front of [NP the cat]]]
b. [NP some balls [PP in front of [NP the table]]]
c. [NP some cars [PP in front of [NP the building]]]

Thus, the question is: are some balls in front of the cat as many balls as
some balls in front of the table? Or: does some balls in front of the table
mean as many balls as some cars in front of the building means cars? What
– or more precisely, how many – does some mean? Your study of the litera-
ture may have shown that at least the following two variables influence the
quantities that some denotes:

− OBJECT: the size of the object referred to by the first noun: SMALL (e.g.
cat) vs. LARGE (e.g. car);
− REFPOINT: the size of the object introduced as a reference in the PP:
SMALL (e.g. cat) vs. LARGE (e.g. building).11

Obviously, a study of some with these two variables results in a sche-


matic token set with four variable level combinations, as represented in
Table 16.
The (non-directional) hypotheses for this study are:

H0: The average estimate of how many some denotes is independent of


the sizes of the objects (OBJECT: SMALL vs. LARGE) and the sizes of
the reference points (REFPOINT: SMALL vs. LARGE) in the utterances
for which subjects provide estimates: meanSMALL+SMALL = meanSMALL+LARGE =
meanLARGE+SMALL = meanLARGE+LARGE.
H1: The average estimate of how many some denotes is dependent on
the sizes of the objects (OBJECT: SMALL vs. LARGE) and/or the sizes
of the reference points (REFPOINT: SMALL vs. LARGE): there is at
least one ≠ in the above equation (“and/or” because of the possi-
bility of an interaction; cf. above Section 1.3.2.3).

Table 16. Token sets (schematic + concrete) for OBJECT × REFPOINT


                 REFPOINT: SMALL              REFPOINT: LARGE
OBJECT: SMALL    SMALL + SMALL:               SMALL + LARGE:
                 some dogs next to a cat      some dogs next to a car
OBJECT: LARGE    LARGE + SMALL:               LARGE + LARGE:
                 some cars next to a cat      some cars next to a fence

Let us now also assume you want to test these hypotheses with a ques-
tionnaire: subjects will be shown phrases such as those in Table 16 and
then asked to provide estimates of how many elements a speaker of such a
phrase would probably intend to convey – how many dogs were next to a
cat etc. Since you have four variable level combinations, you need at least
four concrete token sets (the rule in (5)), which are created according to the
rule in (4). According to the rules in (6) and (7) this also means you need at
least four subjects: you cannot have fewer because then some subject

11 I will not discuss here how to decide what is ‘small’ and what is ‘large’. In the study
from which this example is taken, the sizes of the objects were determined on the basis
of a pilot study prior to the real experiment.

would see more than one stimulus from one concrete token set. You can
then assign experimental stimuli to the subjects in a rotating fashion. The
result of this is shown in the sheet <Phase 1> of the file <C:/_sflwr/_input
files/01-5_ExperimentalDesign.ods> (just like all files, this one too can be
found on the companion website at <http://groups.google.com/
group/statforling-with-r/web/statistics-for-linguists-with-r> or its mirror).
The actual experimental stimuli are represented only schematically as a
unique identifying combination of the number of the token set and the vari-
able levels of the two independent variables (in column F).
As you can easily see in the table on the right, the rotation ensures that
every subject sees each variable level combination just once and each of
these from a different concrete token set. However, we know you have to
do more than that because in <Phase 1> every subject sees every variable
level combination just once (which violates the rule in (8)) and every expe-
rimental item is seen by only one subject (which violates the rule in (9)).
Therefore, you first re-use the experimental items in <Phase 1>, but put
them in a different order so that the experimental items do not occur to-
gether with the very same experimental items (you can do that by rotating
the subjects differently). One possible result of this is shown in the sheet
<Phase 2>.
The setup in <Phase 2> does not yet conform to the rule in (8), though.
For that, you have to do a little more. You must present more experimental
items to, say, subject 1, but you cannot use the existing experimental items
anymore without violating the rule in (6). Thus, you need four more con-
crete token sets, which are created and distributed across subjects as before.
The result is shown in <Phase 3>. As you can see in the table on the right,
every experimental item is now seen by two subjects (cf. the row totals),
and in the columns you can see that each subject sees each variable level
combination in two different stimuli.
Now that every subject receives eight experimental items, you must
create enough distractors. In this example, let’s use a ratio of experimental
items to distractors of 1:2. Of course, 16 distractors are enough, which are
presented to all subjects – there is no reason to create 8·16 = 128 distrac-
tors. Consider <Phase 4>, where the filler items have been added to the
bottom of the table.
Now you must order all the stimuli – experimental items and distractors
– for every subject. To that end, you can add a column called “RND”,
which contains random numbers ranging between 0 and 1 (you can get
those from R or by writing “=RAND()” (without double quotes, of course)
into a cell in OpenOffice.org Calc). If you now sort the whole spreadsheet
(i) according to the column “SUBJ” and then (ii) according to the column
“RND”, then all items of one subject are grouped together, and within
each subject the order of items is random. This is required by the rule in
(12) and represented in <Phase 5>.
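The same step can also be done in R. Here is a minimal sketch (not part of
the book’s code files; the object names are only illustrative) that attaches
a random number to every row and then orders the rows by subject and by
these random numbers; the manual ‘corrections’ described next still apply:

> subjects<-rep(1:8,—each=24)—#—8—subjects—times—24—stimuli—each¶
> items<-rep(1:24,—8)—#—8—experimental—items—+—16—distractors—per—subject¶
> rnd<-runif(length(items))—#—one—random—number—between—0—and—1—per—row¶
> ordering<-order(subjects,—rnd)—#—sort—by—subject,—then—by—the—random—numbers¶
> items[ordering][1:24]—#—the—randomized—stimulus—order—for—subject—1¶
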
When you look at <Phase 5>, you also see that the order of some ele-
ments must still be changed: red arrows in column H indicate problematic
sequences of experimental items. To take care of these cases, you can arbi-
trarily pick one distractor out of a series of distractors and exchange their
positions. The result is shown in <Phase 6>, where the green arrows point
to corrections. If we had used actual stimuli, you could now create a cover
sheet with instructions for the subjects and a few examples (which in the
case of, say, judgments would ideally cover the extremes of the possible
judgments!), paste the experimental stimuli onto the following page(s), and
hand out the questionnaires. To evaluate this experiment, you would then
have to compute a variety of means:

− the means for the two levels of OBJECT (i.e., meanOBJECT: SMALL and
meanOBJECT: LARGE);

− the means for the two levels of REFPOINT (i.e., meanREFPOINT: SMALL and
meanREFPOINT: LARGE);
− the four means for the interaction of OBJECT and REFPOINT.
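
Just to preview how such means can be computed in R once the estimates have
been collected, here is a minimal sketch (not part of the book’s code files;
ESTIMATE, OBJECT, and REFPOINT are assumed to be vectors/factors holding the
results):

> tapply(ESTIMATE,—OBJECT,—mean)—#—means—for—the—two—levels—of—OBJECT¶
> tapply(ESTIMATE,—REFPOINT,—mean)—#—means—for—the—two—levels—of—REFPOINT¶
> tapply(ESTIMATE,—list(OBJECT,—REFPOINT),—mean)—#—the—four—interaction—means¶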

We will discuss the method that is used to test these means for signifi-
cant differences – analysis of variance or ANOVA – in Section 5.3.

Recommendation(s) for further study


Good and Hardin (2006: Ch. 13) and, when you have more experience with
R, the website <http://cran.r-project.org/src/contrib/Views/ExperimentalDesign.html>

Now you should do the exercises for Chapter 1 (which you can find on
the website) …
Chapter 2
Fundamentals of R

1. Introduction and installation

In this chapter, you will learn about the basics of R that enable you to load,
process, and store data as well as perform some simple data processing
operations. Thus, this chapter prepares you for the applications in the fol-
lowing chapters. Let us first take the most important step: the installation of
R (first largely for Windows).

1. The main R website is <http://www.r-project.org/>. From there you


can go to the CRAN website at <http://cran.r-project.org/
mirrors.html>. Click on the mirror Austria, then on the link(s) for
your operating system;
2. for Windows you will then click on “base”, and then on the link to
the setup program to download the relevant setup program; for
Mac OS X, you immediately get to a page with a link to a .dmg
file; for Linux, you choose your distribution and maybe your dis-
tribution version and then the relevant file;12
3. choose “complete installation” into the suggested standard directo-
ry;
4. start R by double-clicking on the icon on the desktop, the icon in
the start menu, or the icon in the quick launch tool bar.

That’s it. You can now start and use R. However, R has more to offer.
Since R is open-source software, there is a lively community of people
who have written so-called packages for R. These packages are small addi-
tions to R that you can load into R to obtain commands (or functions, as we
will later call them) that are not part of the default configuration.

5. In R, enter the following at the console install.packages()¶ and


then choose a mirror; I recommend always using Austria;
6. Choose all packages you think you will need; if you have a broad-
band connection, you could choose all of them, but I minimally

12 Depending on your Linux distribution, you may also be able to install R and many
frequently-used packages using a distribution-internal package manager.

recommend amap, car, Design, Hmisc, lattice, MASS, qcc, rpart,


and vcd. (You can also enter, say, install.packages("car")¶ at
the console to install said package and ideally do that with adminis-
trator/root rights; that is, in Ubuntu for example start R with sudo
R¶. On Linux systems, you will sometimes also need additional
files such as gfortran, which you may need to install separately.)

As a next step, you should download the files with example files, all the
code, exercises, and answer keys onto your hard drive. Create a folder such
as <C:/_sflwr/> on your hard drive (for statistics for linguists with R). Then
download all files from website of the Google group “StatForLing with R”
hosting the companion website of this book (<http://groups.google.com/
group/statforling-with-r/web/statistics-for-linguists-with-r> or the mirror at
<http://www.linguistics.ucsb.edu/faculty/stgries/research/sflwr/sflwr.html>)
and save them into the right folders:

− <C:/_sflwr/_inputfiles>: this folder will contain all input files: text files
with data for later statistical analysis, spreadsheets providing all files in
a compact format, input files for exercises etc.; to unzip these files, you
will need the password “hamste_R”;
− <C:/_sflwr/_outputfiles>: this folder will contain output files from
Chapters 2 and 5; to unzip these files, you will need the password
“squi_Rrel”;
− <C:/_sflwr/_scripts>: this folder will contain all files with code from
this book as well as the files with exercises and their answer keys; to
unzip these files, you will need the password “otte_R”.

(By the way, I am using regular slashes here because you can use those
in R, too, and more easily so than backslashes.) On Linux, you may want to
use your main user directory as in <home/user/sflwr>.
website will also provide a file with errata. Lastly, I would recommend that
you also get a text editor that has syntax highlighting for R. As a Windows
user, you can use Tinn-R or Notepad++; the former already has syntax
highlighting for R; the latter can be easily configured for the same functio-
nality. As a Mac user, you can use R.app. As a Linux user, I use gedit (with
the Rgedit plugin) or actually configure Notepad++ with Wine.
After all this, you can view all scripts in <C:/_sflwr/_scripts> with syn-
tax-highlighting which will make it easier for you to understand them. I
strongly recommend writing all R scripts that are longer than, say, 2-3
lines in these editors and then paste them into R because the syntax high-

lighting will help you avoid mistakes and you can more easily keep track of
all the things you have entered into R.
R is not just a statistics program – it is also a programming language
and environment which has at least some superficial similarity to Perl or
Python. The range of applications is breathtakingly large as R offers the
functionality of spreadsheet software, statistics programs, a programming
language, database functions etc. This introduction to statistics, however, is
largely concerned with

− functions to generate and process simple data structures in R, and


− functions for statistical tests, graphical evaluation, and probability dis-
tributions.

We will therefore unfortunately not be able to deal with more complex


data structures and many aspects of R as a programming language, however
interesting these may be. Also, I will not always use the simplest or most
elegant way to perform a particular task but the way that is most useful
from a pedagogical and methodological perspective (e.g., to highlight
commonalities between different functions and approaches). Thus, this
book is not a general introduction to R, and I refer you to the recommenda-
tions for further study and reference section for introductory books to R.
Now we have to address some typographical and other conventions. As
already above, websites, folders, and files will be delimited by “<“ and “>“
as in, say, <C:/_sflwr/_inputfiles/04-1-1-1_tense-aspect.txt>, where the
numbering before the underscore refers to the section in which this file is
used. Text you are supposed to enter into R is formatted like this
mean(c(1,—2,—3))¶ and will usually be given in grey blocks of several
lines. This character “—” represents a space (which are nearly always op-
tional) and this character “¶” instructs you to hit ENTER (I show these
characters here because they can be important to show the exact structure
of a line and because whitespace makes a big difference in character
strings; the code files of course do not include those visibly unless you set
your text editor to displaying them):

> a<-c(1,—2,—3)¶
> mean(a)¶
[1]—2

This also means for you: do not enter the two characters “>—”. They are
only provided for you to distinguish your input from R’s output more easi-
ly. You will also occasionally see lines that begin with “+”. These plus
signs, which you are not supposed to enter either, begin lines where R is
still expecting further input before it begins to execute the function. For
example, when you enter 2-¶, then this is what your R interface will look
like:

> 2-¶
+

R is waiting for you to complete the subtraction. When you enter the
number you wish to subtract and press ENTER, then the function will be
executed properly.

> 2-¶
+ 3¶
[1]—-1

Another example: if you wish to load the package corpora into R to


access some of the functions that the computational linguists Marco Baroni
and Stefan Evert contributed to the community, you can load this package
by entering library(corpora)¶. (Note: this only works if you installed the
package before as explained above.) However, if you forget the closing
bracket, R will wait for you to complete the input:

> library(corpora¶
+—)¶
>

Unfortunately, R will not always be this forgiving. By the way, if you


make a mistake in R, you often need to change only one thing in a line.
Thus, rather than typing the whole line again, press the cursor-up key to get
back to that line you wish to change or execute again; also, you need not
move the cursor to the end of the line before pressing ENTER.
Corpus files or tables / data frames will be represented as in Figure 13,
where “→” and “¶” again denote tab stops and line breaks respectively. Me-
nus, submenus, and commands in submenus in applications are given in
italics in double quotes, and hierarchical levels within application menus
are indicated with colons. So, if you open a document in, say, OpenOf-
fice.org Writer, you do that with what is given here as File: Open ….

PartOfSp → TokenFreq → TypeFreq → Class¶


ADJ → 421 → 271 → open¶
ADV → 337 → 103 → open¶
N → 1411 → 735 → open¶
CONJ → 458 → 18 → closed¶
PREP → 455 → 37 → closed¶

Figure 13. Representational format of corpus files and data frames

2. Functions and arguments

As you may remember from school, one often does not use numbers, but
rather letters to represent variables that ‘contain’ numbers. In algebra class,
for example, you had to find out from two equations such as the following
which values a and b represent (here a = 23/7 and b = 20/7):

a+2b = 9 and
3a-b = 7
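
(Just as a hedged aside that is not part of the book’s code files: R can
solve such a small system of equations directly with the built-in function
solve; the object names below are only illustrative.)

> lhs<-matrix(c(1,—3,—2,—-1),—ncol=2)—#—coefficients—of—a—and—b—(column-wise)¶
> rhs<-c(9,—7)—#—right-hand—sides—of—the—two—equations¶
> solve(lhs,—rhs)—#—returns—a—and—b,—i.e.—23/7—and—20/7¶
[1]—3.285714—2.857143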

In R, you can solve such problems, too, but R is much more powerful,
so variable names such as a and b can represent huge multidimensional
elements or, as we will call them here, data structures. In this chapter, we
will deal with the data structures that are most important for statistical ana-
lyses. Such data structures can either be entered into R at the console or
read from files. I will present both means of data entry, but most of the
examples below presuppose that the data are available in the form of a tab-
delimited text file that has the structure discussed in the previous chapter
and was created in a text editor or a spreadsheet software such as OpenOf-
fice.org Calc. In the following sections, I will explain

− how to create data structures in R;


− how to load data structures into R and save them from R;
− how to edit data structures in R.

One of the most central things to understand about R is how you tell it
to do something other than the simple calculations from above. A com-
mand in R virtually always consists of two elements: a function and, in
parentheses, arguments. Arguments can be null, in which case the function
name is just followed by opening and closing parentheses. The function is
an instruction to do something, and the arguments to a function represent

(i) what the instruction is to be applied to and (ii) how the instruction is to
be applied to it. Let us look at two simple arithmetic functions you know
from school. If you want to compute the square root of 5 with R – without
simply entering the instruction 5^0.5¶, that is – you need to know the
name of the function as well as how many and which arguments it takes.
Well, the name of the function is sqrt, and it takes just one argument,
namely the figure of which you want the square root. Thus:

> sqrt(5)¶
[1]—2.236068

Note that R just outputs the result, but does not store it. If you want to
store a result into a data structure, you must use the assignment operator <-
(an arrow consisting of a less-than sign and a minus). The simplest way in
the present example is to assign a name to the result of sqrt(5). Note: R’s
handling of names, functions, and arguments is case-sensitive, and you can
use letters, numbers, periods, and underscores in names as long as the name
begins with a letter or a period (e.g., my.result or my_result or …):

> a<-sqrt(5)¶

R does not return anything, but the result of sqrt(5) has now been as-
signed to a data structure that is called a vector, which is called a. You can
test whether the assignment was successful by looking at the content of a.
One function to do that is print, and its minimally required argument is
the data structure whose content you want to see. Thus,

> print(a)¶
[1]—2.236068

Most of the time, it is enough to simply enter the name of the relevant
data structure:

> a¶
[1]—2.236068

Three final comments before we discuss various data structures in more


detail. First, R ignores everything in a line after a pound/number sign or
hash, which you can use to put comments into your lines. Second, the as-
signment operator can also be used to assign a new value to an existing
data structure. For example,

> a<-sqrt(9)—#—assign—the—value—of—'sqrt(9)'—to—a¶
> a—#—print—a¶
[1]—3
> a<-a+2—#—assign—the—value—of—'a+2'—to—a¶
> a—#—print—a¶
[1]—5

If you want to delete or clear a data structure, you can use the function
rm (for remove). You can remove just a single data structure by using its
name as an argument to rm, or you can remove all data structures at once.

> rm(a)—#—remove/clear—a¶
> rm(list=ls(all=TRUE))—#—clear—memory—of—all—data—
structures¶

Third, it will be very important later on to know that many functions


have default settings for their arguments. This means that if you use a func-
tion without specifying all the necessary arguments, then R will use the
default settings of that function. Let us explore this on the basis of the very
useful function sample. This function generates random or pseudorandom
samples of elements and can take up to four arguments:

− x: a data structure – typically a vector – containing the elements of


which you want a sample;
− size: a positive integer giving the size of the sample;
− the assignment replace=TRUE (if the elements of the vector can be
sampled multiple times, sampling with replacement) or replace=FALSE
(if each element of the vector can only be sampled once, the default set-
ting);
− prob: a vector with the probabilities of each element to be sampled; the
default setting is NULL, which means that all elements are equally likely
to be sampled.

Let us look at a few examples, which will make successively more use
of default settings. First, you generate a vector with the numbers from 1 to
10 using the function c (for concatenate); the colon here generates a se-
quence of integers between the two numbers:

> some.data<-c(1:10)—#—or—just—some.data<-1:10¶

If you want to sample 5 elements from this vector equiprobably and



with replacement, you can enter the following:13

> sample(x=some.data,—size=5,—replace=TRUE,—prob=NULL)¶
[1]—5—9—9—9—2

If you list the arguments of a function in their standard order (as we do


here), then you can leave out their names:

> sample(some.data,—5,—TRUE,—NULL)¶
[1]—3—8—4—1—7

Also, prob=NULL is the default, so you can leave that out:

> sample(some.data,—5,—TRUE)¶
[1]——2——1——9——9—10

With the following line, you sample 5 elements equiprobably without


replacement:

> sample(some.data,—5,—FALSE)¶
[1]——1—10——6——3——8

But since replace=FALSE is the default, you can leave that out, too:

> sample(some.data,—5)¶
[1]—10——5——9——3——6

Sometimes, you can even leave out the size argument. If you do that, R
assumes you want all elements of the given vector in a random order:

> some.data¶
[1]——1——2——3——4——5——6——7——8——9—10
> sample(some.data)¶
[1]——2——4——3—10——9——8——1——6——5——7

And if you only want the numbers from 1 to 10 in a random order,


you can even do away with the vector some.data:

> sample(10)¶
[1]——5—10——2——6——1——3——4——9——7——8

13. Your results will be different, after all this is random sampling.

In extreme cases, the property of default settings may result in function


calls without any arguments. Consider the function q (for quit). This func-
tion shuts R down and usually requires three arguments:

− save: a character string indicating whether the R workspace should be


saved or not or whether the user should be prompted to make that deci-
sion (the default);
− status: the (numerical) error status to be returned to the operating sys-
tem, where relevant; the default is 0, indicating ‘successful completion’;
− runLast: a logical value (TRUE or FALSE), stating whether a function
called Last should be executed before quitting R; the default is TRUE.

Thus, if you want to quit R with these settings, you just enter:

> q()¶

R will then ask you whether you wish to save the R workspace or not
and, once you have answered that question, executes the function Last (only if
you defined one), shuts down R and sends “0” to your operating system.
As you can see, defaults can be a very useful way of minimizing typing
effort. However, especially at the beginning, it is probably wise to try to
strike a balance between minimizing typing on the one hand and maximiz-
ing code transparency on the other hand. While this may ultimately boil
down to a matter of personal preferences, I recommend using more explicit
code at the beginning in order to be maximally aware of the options your R
code uses; you can then shorten your code as you become more proficient.

Recommendation(s) for further study


the functions ? or help, which provide the help file for a function (try
?sample¶ or help(sample)¶), and the function formals, which provides
the arguments a function needs, their default settings, and their default or-
der (try formals(sample)¶)

3. Vectors

3.1. Generating vectors in R

The most basic data structure in R is a vector. Vectors are one-dimensional,


sequentially ordered sequences of elements (such as numbers or character

strings (such as words)). While it may not be completely obvious why vec-
tors are important here, we must deal with them in some detail since nearly
all other data structures in R can ultimately be understood in terms of vec-
tors. As a matter of fact, we have already used vectors when we computed
the square root of 5:

> sqrt(5)¶
[1]—2.236068

The “[1]” before the result indicates that the result of sqrt(5) is a vec-
tor that is one element long and whose first (and only) element is 2.236068.
You can test this with R: first, you assign the result of sqrt(5) to a data
structure called a.

> a<-sqrt(5)¶

The function is.vector tests whether its argument is a vector or not


and returns the result of its test, R’s version of “yes”:

> is.vector(a)¶
[1]—TRUE

And the function length determines and returns the number of elements
of the data structure provided as an argument:

> length(a)¶
[1]—1

Of course, you can also create vectors that contain character strings –
the only difference is that the character strings are put into double quotes:

> a.name<-"John";—a.name—#—several—functions—in—one—line—
are—separated—by—semicolons¶
[1]—"John"

(Actually, there are six different vector types, but we only deal with log-
ical vectors as well as vectors of numbers or character strings). Vectors
usually only become interesting when they contain more than one element.
You already know the function to create such vectors, c, and the arguments
it takes are just the elements to be concatenated in the vector, separated by
commas. For example:

> numbers<-c(1,—2,—3);—numbers¶
[1]—1—2—3

or

> some.names<-c("al",—"bill",—"chris");—some.names¶
[1]—"al"————"bill"——"chris"

Note that, since individual numbers or character strings are also vectors
(of length 1), the function c can not only combine individual numbers or
character strings but also vectors with 2+ elements:

> numbers1<-c(1,—2,—3);—numbers2<-c(4,—5,—6)—#—generate—two—vectors¶
> numbers1.and.numbers2<-c(numbers1,—numbers2)—#—combine—vectors¶
> numbers1.and.numbers2¶
[1]—1—2—3—4—5—6

A similar function is append. This function takes at least two and max-
imally three arguments (and as usual the different arguments are separated
by commas):

− x: a vector to which something should be appended;


− values: the vector to be appended;
− after: the position in the data structure of the first argument where the
elements of the second argument are to be appended; the default setting
is at the end.

Thus, with append, the above example would look like this:

> numbers1.and.numbers2<-append(numbers1,—numbers2)—#—combine—vectors¶
> numbers1.and.numbers2¶
[1]—1—2—3—4—5—6

An example of how append is more typically used is the following,


where an existing vector is modified:

> evenmore<-c(7,—8)¶
> numbers<-append(numbers1.and.numbers2,—evenmore)¶
> numbers¶
[1]—1—2—3—4—5—6—7—8

It is important to note that – unlike arrays in Perl – vectors can only


store elements of one data type. For example, a vector can contain numbers
or character strings, but not really both: if you try to force character strings
into a vector together with numbers, R will change the data type of one
kind of element to homogenize the kinds of vector elements, and since you
can interpret numbers as characters but not vice versa, R changes the num-
bers into character strings and then concatenates them into a vector of cha-
racter strings:

> mixture<-c("al",—2,—"chris");—mixture¶
[1]—"al"————"2"—————"caesar"

and

> numbers<-c(1,—2,—3);—names.of.numbers<-c("four",—"five",—
"six")—#—generate—two—vectors¶
> numbers.and.names.of.numbers<-c(numbers,—names.of.numbers)—#—
combine—vectors¶
> numbers.and.names.of.numbers¶
[1]—"1"————"2"————"3"————"four"—"five"—"six"

The double quotes around 1, 2, and 3 indicate that these are understood
as character strings, which means that you cannot use them for calculations
anymore (unless you change their data type back). We can identify the type
of a vector (or the data types of other data structures) with str (for “struc-
ture”) which takes as an argument the name of a data structure:

> str(numbers)¶
—num—[1:3]—1—2—3
> str(numbers.and.names.of.numbers)¶
—chr—[1:6]—"1"—"2"—"3"—"four"—"five"—"six"

The first vector consists of three numerical elements, namely 1, 2, and


3. The second vector consists of the six character strings (chr is short for
character) that are printed.
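If you need to compute with such elements again, you can change their data
type back; a brief sketch (cf. also the recommendation for further study at
the end of this section):

> as.numeric(c("1",—"2",—"3"))—#—turn—the—character—strings—back—into—numbers¶
[1]—1—2—3
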
As you will see later, it is often necessary to create quite long vectors in
which elements or sequences of elements are repeated. Instead of typing
those into R, you can use two very useful functions, rep and seq. In a sim-
ple form, the function rep (for repetition) takes two arguments: the ele-
ment(s) to be repeated, and the number of repetitions. To create, say, a
vector x in which the number sequence from 1 to 3 is repeated four times,
you enter:

> numbers<-c(1,—2,—3)¶
> x<-rep(numbers,—4)¶

or

> x<-rep(c(1,—2,—3),—4);—x¶
—[1]—1—2—3—1—2—3—1—2—3—1—2—3

To create a vector in which the numbers from 1 to 3 are individually re-


peated four times – not in sequence – you use the argument each:

> x<-rep(c(1,—2,—3),—each=4);—x¶
—[1]—1—1—1—1—2—2—2—2—3—3—3—3

(The same would be true of vectors of character strings.) With whole


numbers, you can also often use the : as a range operator:

> x<-rep(c(1:3),—4)¶

The function seq (for sequence) is used a little differently. In one form,
seq takes three arguments:

− from: the starting point of the sequence;


− to: the end point of the sequence;
− by: the increment of the sequence.

Thus, instead of entering numbers<-c(1:3)¶, you can also write:

> numbers<-seq(1,—3,—1)¶

Since 1 is the default increment, the following would suffice:

> numbers<-seq(1,—3)¶

If the numbers in the vector to be created do not increment by 1, you


can set the increment to whatever value you need. The following lines gen-
erate a vector x in which the even numbers between 1 and 10 are repeated
six times in sequence. Try it out:

> numbers<-seq(2,—10,—2)¶
> x<-rep(numbers,—6)¶

or

> x<-rep(seq(2,—10,—2),—6)¶

With c, append, rep, and seq, even long and complex vectors can often
be created fairly easily. Another useful feature is that you can not only
name vectors, but also elements of vectors:

> numbers<-c(1,—2,—3);—names(numbers)<-
c("one",—"two",—"three")¶
> numbers¶
——one———two—three
————1—————2—————3

Before we turn to loading and saving vectors, let me briefly mention an


interactive way to enter vectors into R. If you assign to a data structure just
scan()¶ (for a vector of numbers) or scan(what=character(0))¶ (for
vectors of character strings), then you can enter the numbers or character
strings separated by ENTER until you complete the data entry by pressing
ENTER twice:

> x<-scan()¶
1:—1¶
2:—2¶
3:—3¶
4:¶¶
Read—3—items
> x¶
[1]—1—2—3

Recommendation(s) for further study


the functions as.numeric and as.character to change the type of vectors

3.2. Loading and saving vectors in R

Since data for statistical analysis will usually not be entered into R manual-
ly, we now turn to reading vectors from files. First a general remark: R can
read data of different formats, but we only discuss data saved as text files,
i.e., files that often have the extension: <.txt>. Thus, if the data file has not
been created with a text editor but a spreadsheet software such as OpenOf-
fice.org Calc, then you must first export these data into a text file (with
File: Save As … and Save as type: Text CSV (.csv)).

A very powerful function to load vector data into R is the function scan,
which we already used to enter data manually. This function can take many
different arguments so you should list arguments with their names. The
most important arguments of scan for our purposes together with their
default settings are as follows:

− file="": the path of the file you want to load;


− what="": the kind of input scan is supposed to read. The most impor-
tant settings are what=double(0) (for numbers, the omissible default)
and what=character(0);
− sep="": the character that separates individual entries in the file. The
default setting, sep="", means that any whitespace character will sepa-
rate entries, i.e. spaces, tabs (represented as "\t"), and newlines
(represented as "\n"). Thus, if you want to read in a text file into a vec-
tor such that each line is one element of the vector, you write sep="\n";
− dec="": the decimal point character; dec="." is the default; if you want
to use a comma instead of the default period, just enter that here as
dec=",".

To read the file <C:/_sflwr/_inputfiles/02-3-2_vector1.txt>, which con-


tains what is shown in Figure 14, into a vector x, you enter this.





1¶
2¶
3¶
4¶
5¶

Figure 14. An example file

> x<-scan(file="C:/_sflwr/_inputfiles/02-3-2_vector1.txt",—
sep="\n")¶
Read—5—items

Then you can print out the contents of x:

> x¶
[1]—1—2—3—4—5

Reading in a file with character strings (like the one in Figure 15) is just
as easy:

alpha·bravo·charly·delta·echo¶

Figure 15. Another example file

> x<-scan(file="C:/_sflwr/_inputfiles/02-3-2_vector2.txt",—
what=character(0),—sep="—",—quiet=TRUE)¶

The argument quiet=TRUE suppresses the output of how many elements


were read. You get:

> x¶
[1]—"alpha"——"bravo"——"charly"—"delta"——"echo"

On a Windows-based PC, you can also use the function


choose.files() without arguments; R will prompt you to choose one or
more files in a file explorer window:

> x<-scan(file=choose.files(),—what=character(0),—sep="—",—
quiet=TRUE)—#—and—then—choose—<C:/_sflwr/_inputfiles/
02-3-2_vector2.txt>¶
> x¶
[1]—"alpha"——"bravo"——"charly"—"delta"——"echo"

If you use R on another operating system, you can either use the func-
tion file.choose(), which only allows you to choose one file, or you can
proceed as follows: After you entered the following line into R,

> choice<-select.list(dir(scan(nmax=1,—what=character(0)),—
full.names=TRUE),—multiple=TRUE)¶

you first enter the path to the directory in which the relevant file is located,
for example “C:/_sflwr/_inputfiles”. Then R will show you all the files
in that directory and you can choose one (or more) of these and then load it
with the following line:14

14. Below and in the scripts I will mostly use choose.files with the argument de-
fault="…"; that argument provides the path to the required file. On PCs running Micro-
soft Windows – for some reason certainly still the most widely used operating system –
this is more convenient than file.choose() and allows you to access the relevant file
immediately just by pressing ENTER (if you have stored the files in the recommended
directories, that is). As a Mac- or Linux User you (i) change the file=choose.files(…)
argument into file=file.choose() and then enter a path when prompted to read in, or
write into, an already existing file.

> x<-scan(choice,—what=character(0),—sep="—")¶

Now, how do you save vectors into files? The required function – basi-
cally the reverse of scan – is cat and it takes very similar arguments:

− the vector(s) to be saved;


− file="": the file into which the vector is to saved (or, on Windows
PCs, choose.files());
− sep="": the character that separates the elements of the vector from
each other: sep="" or sep="—" for spaces (the default), sep="\t" for
tabs, sep="\n" for newlines;
− append=TRUE or append=FALSE (the default): if the output file already
exists and you set append=TRUE, then the output will be appended to the
output file, otherwise the output will overwrite the existing file.

Thus, to append two names to the vector x and then save it under some
other name, you can enter the following:

> x<-append(x,—c("foxtrot",—"golf"))¶
> cat(x,—file=choose.files())—#—and—then—choose—<C:/_sflwr/
_outputfiles/02-3-2_vector3.txt>¶

Recommendation(s) for further study


the function write as a wrapper for cat to save vectors

3.3. Editing vectors in R

Now that you can generate, load, and save vectors, we must deal with how
you can edit them. The functions we will be concerned with allow you to
access particular parts of vectors to output them, to use them in other func-
tions, or to change them. First, a few functions to edit numerical vectors.
One such function is round. Its first argument is the vector with numbers to
be rounded, its second the desired number of decimal places. (Note, R
rounds according to an IEEE standard: 3.55 does not become 3.6, but 3.5.)

> a<-seq(3.4,—3.6,—0.05);—a¶
[1]—3.40—3.45—3.50—3.55—3.60
> round(a,—1)¶
[1]—3.4—3.4—3.5—3.5—3.6

The function floor returns the largest integers not greater than the cor-
responding elements of the vector provided as its argument, ceiling re-
turns the smallest integers not less than the corresponding elements of the
vector provided as an argument, and trunc simply truncates the elements
toward 0:

> floor(c(-1.8,—1.8))¶
[1]—-2——1
> ceiling(c(-1.8,—1.8))¶
[1]—-1——2
> trunc(c(-1.8,—1.8))¶
[1]—-1——1

That also means you can round in the ‘traditional’ way by using floor
as follows:

> floor(a+0.5)¶
[1]—3—3—4—4—4

Or, in a more abstract, but also more versatile, way:

> digits<-0¶
> floor(a*10^digits+0.5)/10^digits¶
[1]—3—3—4—4—4
> digits<-1¶
> floor(a*10^digits+0.5)/10^digits¶
[1]—3.4—3.5—3.5—3.6—3.6

The probably most important way to access parts of a vector (or other
data structures) in R involves subsetting with square brackets. In the sim-
plest possible form, this is how you access an individual vector element:

> x<-c("a",—"b",—"c",—"d",—"e")¶
> x[3]—#—access—the—3.—element—of—x¶
[1]—"c"

Since you already know how flexible R is with vectors, the following
uses of square brackets should not come as big surprises:

> y<-3¶
> x[y]—#—access—the—3.—element—of—x¶
[1]—"c"

and

> z<-c(1,—3);—x[z]—#—access—elements—1—and—3—of—x¶
[1]—"a"—"c"

and

> z<-c(1:3)¶
> x[z]—#—access—elements—1—to—3—of—x¶
[1]—"a"—"b"—"c"

With negative numbers, you can leave out elements:

> x[-2]—#—access—x—without—the—2.—element¶
[1]—"a"—"c"—"d"—"e"

However, there are many more powerful ways to access parts of vec-
tors. For example, you can let R determine which elements of a vector ful-
fill a certain condition. One way is to present R with a logical expression:

> x=="d"¶
[1]—FALSE—FALSE—FALSE——TRUE—FALSE

This means, R checks for each element of x whether it is “d” or not and
returns its findings. The only thing requiring a little attention here is that
the logical expression uses two equal signs, which distinguishes logical
expressions from assignments such as file="". Other logical operators are:

&      and                            |      or
>      greater than                   <      less than
>=     greater than or equal to       <=     less than or equal to
!      not                            !=     not equal to

Here are some examples:

> x<-c(10:1)—#—generate—vector—with—the—numbers—from—10—to—1¶
> x¶
—[1]—10——9——8——7——6——5——4——3——2——1
> x==4—#—which—elements—of—x—are—4?¶
—[1]—FALSE—FALSE—FALSE—FALSE—FALSE—FALSE——TRUE—FALSE—FALSE—
FALSE
> x<=7—#—which—elements—of—x—are—<=—7?¶
—[1]—FALSE—FALSE—FALSE——TRUE——TRUE——TRUE——TRUE——TRUE——TRUE——
TRUE
> x!=8—#—which—elements—of—x—are—not—8?¶
—[1]——TRUE——TRUE—FALSE——TRUE——TRUE——TRUE——TRUE——TRUE——TRUE——
TRUE

> (x>8—|—x<3)—#—which—elements—of—x—are—>—8—or—<—3?¶
—[1]——TRUE——TRUE—FALSE—FALSE—FALSE—FALSE—FALSE—FALSE——TRUE——
TRUE

Since TRUE and FALSE in R correspond to 1 and 0, you can easily deter-
mine how often a particular logical expression is true in a vector:

> sum(x==4)¶
[1]—1
> sum(x>8—|—x<3)—#—an—example—using—or¶
[1]—4

The very useful function table counts how often vector elements (or
combinations of vector elements) occur. For example, with table we can
immediately determine how many elements of x are greater than 8 or less
than 3. (Note: table ignores missing data – if you want to count those, too,
you must write table(…,—exclude=NULL).)

> table(x>8—|—x<3)¶
FALSE——TRUE
————6—————4

It is, however, obvious that the above examples are not particularly ele-
gant ways to identify the position(s) of elements. However many elements
of x fulfill a logical condition, you always get 10 logical values and must
locate the TRUEs by hand – what do you do when a vector contains 10,000
elements? Another function can do that for you, though. This function is
which, and its argument is the kind of logical expression discussed above:

> which(x==4)—#—which—elements—of—x—are—4?¶
[1]—7

As you can see, this function looks nearly like English: you ask R
“which element of x is 4?”, and you get the response that the seventh ele-
ment of x is a 4. The following examples are similar to the ones above but
now use which:

> which(x<=7)—which—elements—of—x—are—<=—7?¶
[1]——4——5——6——7——8——9—10
> which(x!=8)—#—which—elements—of—x—are—not—8?¶
[1]——1——2——4——5——6——7——8——9—10
> which(x>8—|—x<3)—#—which—elements—of—x—are—>—8—or—<—3?¶
[1]——1——2——9—10

It should go without saying that you can assign such results to data
structures, i.e. vectors:

> y<-which(x>8—|—x<3)¶
> y¶
[1]——1——2——9—10

Note: do not confuse the position of an element in a vector with the ele-
ment of the vector. The function which(x==4)¶ does not return the element
4, but the position of the element 4 in x, which is 7; and the same is true for
the other examples. You can probably guess how you can now get the ele-
ments themselves and not just their positions. You only need to remember
that R uses vectors. The data structure you just called y is also a vector:

> is.vector(y)¶
[1]—TRUE

Above, you saw that you can use vectors in square brackets to access
parts of a vector. Thus, when you have a vector x and do not just want to
know where to find numbers which are larger than 8 or smaller than 3, but
also which numbers these are, you first use which and then square brackets:

> y<-which(x>8—|—x<3)¶
> x[y]¶
[1]—10——9——2——1

Or you immediately combine these two steps:

> x[which(x>8—|—x<3)]¶
[1]—10——9——2——1

or even

> x[x>8—|—x<3]¶
[1]—10——9——2——1

You use a similar approach to see how often a logical expression is true:

> length(which(x>8—|—x<3))—#—or—sum(x>8—|—x<3)—as—above¶
[1]—4

Sometimes you may want to test for several elements at once, which the
function which cannot do; for this, you can use the very useful operator %in%:

> c(1,—6,—11)—%in%—x¶
[1]——TRUE——TRUE—FALSE

The output of %in% is a logical vector which says for each element of
the vector before %in% whether it occurs in the vector after %in%. If you
also would like to know the exact position of the first (!) occurrence of
each of the elements of the first vector, you can use match:

> match(c(1,—6,—11),—x)¶
[1]—10——5—NA

That is to say, the first element of the first vector – the 1 – occurs the
first (and only) time at the tenth position of x; the second element of the
first vector – the 6 – occurs the first (and only) time at the fifth position of
x; the last element of the first vector – the 11 – does not occur in x.
I hope it becomes more and more obvious that the fact that much of what R
does happens in terms of vectors is a big strength of R. Since nearly every-
thing we have done so far is based on vectors (often of length 1), you can
use functions flexibly and even embed them into each other freely. For
example, now that you have seen how to access parts of vectors, you can
also change those. Maybe you would like to change the values of x that are
greater than 8 into 12:

> x—#—show—x—again¶
—[1]—10——9——8——7——6——5——4——3——2——1
> y<-which(x>8)—#—store—the—positions—of—the—elements—
of—x—that—are—larger—than—8—into—a—vector—y¶
> x[y]<-12—#—replace—these—elements—of—x—with—12¶
> x¶
—[1]—12—12——8——7——6——5——4——3——2——1

As you can see, since you want to replace more than one element in x
but provide only one replacement (12), R recycles the replacement as often
as needed (cf. below for more on that feature). This is a shorter way to do
the same thing:

> x<-10:1—#—restore—x¶
> x[which(x>8)]<-12¶
> x¶
—[1]—12—12——8——7——6——5——4——3——2——1

And this one is even shorter:



> x<-10:1—#—restore—x¶
> x[x>8]<-12¶
> x¶
—[1]—12—12——8——7——6——5——4——3——2——1

R also offers several set-theoretical functions – setdiff, intersect, and
union – which take two vectors as arguments. The function setdiff
returns the elements of the first vector that are not in the second vector:

> x<-c(10:1);—y<-c(2,—5,—9)—#—restore—x—and—y¶
> setdiff(x,—y)¶
[1]—10——8——7——6——4——3——1
> setdiff(y,—x)¶
numeric(0)

The function intersect returns the elements of the first vector that are
also in the second vector.

> intersect(x,—y)¶
[1]—9—5—2
> intersect(y,—x)¶
[1]—2—5—9

The function union returns all elements that occur in at least one of the
two vectors.

> union(x,—y)¶
[1]—10——9——8——7——6——5——4——3——2——1
> union(y,—x)¶
[1]——2——5——9—10——8——7——6——4——3——1

Another useful function is unique, which can be explained particularly
easily to linguists: unique goes through all the elements of a vector (tokens)
and returns all elements that occur at least once (types).

> x<-c(1,—2,—3,—2,—3,—4,—3,—4,—5)¶
> unique(x)¶
[1]—1—2—3—4—5

In R you can also very easily apply a mathematical function or operation
to a set of elements of a numerical vector. Mathematical operations
that are applied to a vector are applied to all elements of the vector:

> x<-c(10:1)—#—restore—x¶
> x¶
—[1]—10——9——8——7——6——5——4——3——2——1
> y<-x+2¶
> y¶
—[1]—12—11—10——9——8——7——6——5——4——3

If you add two vectors (or multiply them with each other, or …), three
different things can happen. First, if the vectors are equally long, the opera-
tion is applied to all pairs of corresponding vector elements:

> x<-c(2,—3,—4);—y<-c(5,—6,—7)¶
> x*y¶
[1]—10—18—28

Second, the vectors are not equally long, but the length of the longer
vector can be divided by the length of the shorter vector without a remaind-
er. Then, the shorter vector will again be recycled as often as is needed to
perform the operation in a pairwise fashion; as you saw above, often the
length of the shorter vector is 1.

> x<-c(2,—3,—4,—5,—6,—7);—y<-c(8,—9)¶
> x*y¶
[1]—16—27—32—45—48—63

Third, the vectors are not equally long and the length of the longer vec-
tor cannot be divided by the length of the shorter vector without a remaind-
er. In such cases, R will recycle the shorter vectors as often as possible, but
will also return a warning:

> x<-c(2,—3,—4,—5,—6);—y<-c(8,—9)¶
> x*y¶
[1]—16—27—32—45—48
Warning—message:
longer—object—length
————————is—not—a—multiple—of—shorter—object—length—in:—x—*—y

Finally, let us look at two functions that change the ordering of the elements
of vectors. The first of these functions is called sort, and its most important argument
is of course the vector whose elements are to be sorted; another important
argument defines the sorting style: decreasing=FALSE (the default) or
decreasing=TRUE.

> x<-c(1,—3,—5,—7,—9,—2,—4,—6,—8,—10)—#—generate—a—vector—x¶
> y<-sort(x)—#—sort—x—in—ascending—order¶
> z<-sort(x,—decreasing=TRUE)—#—sort—x—in—descending—order¶
> y;—z¶
—[1]——1——2——3——4——5——6——7——8——9—10
—[1]—10——9——8——7——6——5——4——3——2——1

The second function is order. It takes one or more vectors as arguments
as well as the argument decreasing=… – but it returns something that may
not be immediately obvious. Can you see what order does?

> z<-c(3,—5,—10,—1,—6,—7,—8,—2,—4,—9)¶
> order(z,—decreasing=FALSE)¶
[1]——4——8——1——9——2——5——6——7—10——3

THINK
BREAK

The output of order when applied to a vector z is a vector which provides
the order of the elements of z when they are sorted as specified. Let
us clarify this rather opaque characterization by means of this example. If
you wanted to sort the values of z in increasing order, you would first have
to take z’s fourth value (which is the smallest value, 1). Thus, the first val-
ue of order(z,—decreasing=FALSE)¶ is 4. The next value you would have
to take is the eighth value of z, which is 2. The next value you would take
is the first value of z, namely 3, etc. The last value of z you would take is
its third one, 10, the maximum value of z. (If you provide order with more
than one vector, additional vectors are used to break ties.) As we will see be-
low, this function will turn out to be very handy when applied to data
frames.
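
A quick way to convince yourself of what order returns (a little check of my own, not part of the original example): if you use the output of order for subsetting, you get the sorted vector back:

> z[order(z, decreasing=FALSE)] # same result as sort(z): 1 2 3 4 5 6 7 8 9 10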

Recommendations for further study


− the functions any and all to test whether any or all elements of a
vector fulfill a particular condition
− the function rev to reverse the elements of a vector
− the function abs to obtain the absolute values of a numerical vector
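
To give you a first idea of what these functions do, here is a minimal illustration of my own (the vector v is made up for this purpose):

> v<-c(-2, 5, -7)
> any(v>0) # is at least one element of v greater than 0?
[1] TRUE
> all(v>0) # are all elements of v greater than 0?
[1] FALSE
> rev(v) # the elements of v in reverse order
[1] -7  5 -2
> abs(v) # the absolute values of the elements of v
[1] 2 5 7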

4. Factors

At a superficial glance at least, factors are similar to vectors of character
strings. Apart from the few brief remarks in this section, they will mainly
be useful when we read in tables and want R to recognize that some of the
columns in tables are nominal or categorical variables.

4.1. Generating factors in R

As I just mentioned, factors are mainly used to code nominal or categorical
variables, i.e. in situations where a variable has two or more (but usually
not very many) qualitatively different levels. The simplest way to create a
factor is to first generate a vector and then change that vector into a factor
using the function factor.

> rm(list=ls(all=T))—#—clear—memory;—recall:—T—/—F—=—
TRUE—/—FALSE¶

(While using T and F for TRUE and FALSE is a convenient shortcut, be
aware that you can also define T and F yourself with anything you want.
Thus, if you use T and F for TRUE and FALSE, make sure you never over-
write them with something else.)

> x<-c(rep("male",—5),—rep("female",—5))¶
> y<-factor(x);—y¶
—[1]—male———male———male———male———male———female—female—female—
female—female
Levels:—female—male
> is.factor(y)¶
[1]—TRUE

The function factor usually takes one or two arguments. The first is
typically the vector you wish to change into a factor. The second argument is
levels=… and will be explained in Section 2.4.3 below.
When you output a factor, you can see one difference between factors
and vectors because the output includes a list of all factor levels that occur
at least once. It is not a perfect analogy, but you can look at it this way:
levels(FACTOR)¶ generates something similar to unique(VECTOR)¶.
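
You can check this with the factor y and the vector x from above (a quick comparison of mine, not part of the original text):

> levels(y) # the levels of the factor, sorted alphabetically
[1] "female" "male"
> unique(x) # the types in the vector, in order of occurrence
[1] "male"   "female"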

4.2. Loading and saving factors in R

We do not really need to discuss how you load factors – you do it in the
same way as you load vectors, and then you convert the loaded vector into
a factor as illustrated above. Saving a factor, however, is a little different.

Imagine you have the following factor a.

> a<-factor(c("alpha",—"charly",—"bravo"));—a¶
[1]—alpha——charly—bravo
Levels:—alpha—bravo—charly

If you now try to save this factor into a file as you would do with a vec-
tor, your output file will look like this:

> cat(a,—sep="\n",—file="C:/_sflwr/_outputfiles/02-4-2_factor1.txt")¶



Figure 16. Another example file

This is because R represents factors internally in the form of numbers
(which represent the factor levels), and therefore R also only outputs these
numbers into a file. Since you want the words, however, you should simply
force R to treat the factor as a vector for the output, which will produce the
desired result.

> cat(as.vector(a),—sep="\n",—file="C:/_sflwr/_outputfiles/
02-4-2_factor2.txt")¶

4.3. Editing factors in R

Editing factors is similar to editing vectors, but sometimes a small additional
difficulty arises:

> x<-c(rep("long",—3),—rep("short",—3))¶
> x<-factor(x);—x¶
—[1]—long——long——long——short—short—short
Levels:—long—short
> x[2]<-"short"¶
> x¶
—[1]—long——short—long——short—short—short
Levels:—long—short

Thus, if your change only consists of assigning a level that already ex-
ists in the factor to another position in the factor, then you can treat vectors
and factors alike. The difficulty arises when you assign a new level:

> x[2]<-"intermediate"¶
Warning—message:
In—`[<-.factor`(`*tmp*`,—2,—value—=—"intermediate")—:
——invalid—factor—level,—NAs—generated
> x¶
—[1]—long——<NA>——long—short—short—short
Levels:—long—short

Thus, if you want to assign a new level, you first must tell R that. You
can do that with factor, but now you also must use the argument levels:

> x<-c(rep("long",—3),—rep("short",—3))—#—as—above¶
> x<-factor(x,—levels=c("long",—"short",—"intermediate"))—#—
introducing—the—new—level¶
> x—#—x—has—not—changed—apart—from—the—levels¶
—[1]—long——long——long——short—short—short
Levels:—long—short—intermediate
> x[2]<-"intermediate"—#—now—you—can—use—the—new—level¶
> x¶
—[1]—long—————————intermediate—long—————————short———————short
———————short
Levels:—long—short—intermediate

Recommendation(s) for further study


− the function gl to create factors
− the functions reorder and relevel to reorder the levels of a factor
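
For instance, gl generates factor levels in regular patterns; the following line (a toy example of mine) creates the same kind of factor as above without first creating a vector:

> gl(2, 3, labels=c("long", "short"))
[1] long  long  long  short short short
Levels: long short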

5. Data frames

The data structure that is most relevant to nearly all statistical methods in
this book is the data frame. The data frame, basically a table, is actually
only a specific type of another data structure, the list, but since data frames
are the single most frequent input format for statistical analyses (within R,
but also for other statistical programs and of course spreadsheet software),
we will concentrate only on data frames per se and disregard lists.

5.1. Generating data frames in R

Given the centrality of vectors in R, you can generate data frames easily
from vectors (and factors). Imagine you collected three different kinds of
information for five parts of speech:

− the variable TOKENFREQUENCY, i.e. the frequency of words of a particular
part of speech in a corpus X;
− the variable TYPEFREQUENCY, i.e. the number of different words of a
particular part of speech in the corpus X;
− the variable CLASS, which represents whether the part of speech is from
the group of open-class words or closed-class words.

Imagine also the data frame or table you wanted to generate is the one in
Figure 17. Step 1: you generate four vectors, one for each column of the
table:

PartOfSp → TokenFreq → TypeFreq → Class¶
ADJ → 421 → 271 → open¶
ADV → 337 → 103 → open¶
N → 1411 → 735 → open¶
CONJ → 458 → 18 → closed¶
PREP → 455 → 37 → closed¶

Figure 17. An example data frame

> rm(list=ls(all=T))—#—clear—memory¶
> PartOfSp<-c("ADJ",—"ADV",—"N",—"CONJ",—"PREP")¶
> TokenFrequency<-c(421,—337,—1411,—458,—455)¶
> TypeFrequency<-c(271,—103,—735,—18,—37)¶
> Class<-c("open",—"open",—"open",—"closed",—"closed")¶

Step 2: The first row in the desired table does not contain data points but
the header with the column names. You must now decide whether the first
column contains data points or also ‘just’ the names of the rows. In the first
case, you can just create your data frame with the function data.frame,
which takes as arguments the relevant vectors:

> x<-data.frame(PartOfSp,—TokenFrequency,—TypeFrequency,—
Class)¶

(The order of vectors is not really important, but determines the order of
columns.) Now you can look at the data frame’s characteristics:

> x¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open
4—————CONJ————————————458————————————18—closed
5—————PREP————————————455————————————37—closed
> str(x)¶
'data.frame':———5—obs.—of——4—variables:
—$—PartOfSp——————:—Factor—w/—5—levels—"ADJ","ADV","CONJ",..:—
1—2—4—3—5
—$—TokenFrequency:—num———421——337—1411——458——455
—$—TypeFrequency—:—num——271—103—735—18—37
—$—Class—————————:—Factor—w/—2—levels—"closed","open":—2—2—2—
1—1

Within the data frame, R has changed the vectors of character strings into
factors and represents them with numbers internally (e.g., “closed” is 1 and
“open” is 2). It is very important in this connection that R only changes
variables into factors when they contain character strings (and not just
numbers). If you have a data frame in which nominal or categorical va-
riables are coded with numbers, then R will not know or guess that these
are in fact factors, and will treat the variables as numeric and thus as inter-
val/ratio variables in statistical analyses. Thus, you should either use mea-
ningful character strings as factor levels in the first place or characterize
the relevant variable(s) as factors at the time you create the data
frame: factor(vectorname). Also, you did not define row names, so R
automatically numbers the rows. If you want to use the parts of speech as
row names, you need to say so explicitly:

> x<-data.frame(TokenFrequency,—TypeFrequency,—Class,—
row.names=PartOfSp)¶
> x¶
—————TokenFrequency—TypeFrequency——Class
ADJ—————————————421———————————271———open
ADV—————————————337———————————103———open
N——————————————1411———————————735———open
CONJ————————————458————————————18—closed
PREP————————————455————————————37—closed
> str(x)¶
'data.frame':———5—obs.—of——3—variables:
—$—TokenFrequency:—num———421——337—1411——458——455
—$—TypeFrequency—:—num——271—103—735—18—37
—$—Class—————————:—Factor—w/—2—levels—"closed","open":—2—2—2—
1—1

As you can see, there are now only three variables left because
PartOfSp now functions as row names. Note that this is only possible when
the column with the row names contains no element twice.

5.2. Loading and saving data frames in R

While you can generate data frames as shown above, this is certainly not
the usual way in which data frames are entered into R. Typically, you will
read in files that were created with spreadsheet software. If you create a
table in, say, OpenOffice.org Calc and want to work on it within R, then you
should first save it as a comma-separated text file. There are two ways to
do this. Either you copy the whole file into the clipboard, paste it into a text
editor (e.g., Tinn-R or Notepad++), and then save it as a tab-delimited text
file, or you save it directly out of the spreadsheet software as a CSV file (as
mentioned above with File: Save As … and Save as type: Text CSV (.csv);
then you choose tabs as field delimiter and no text delimiter.15 To load this
file into R, you use the function read.table and some of its arguments:

− file="…": the path to the text file with the table (on Windows PCs you
can use choose.files() here, too; if the file is still in the clipboard,
you can also write file="clipboard");
− header=T: an indicator of whether the first row of the file contains col-
umn headers (which it should always have) or header=F (the default);
− sep="": between the double quotes you put the single character that
delimits columns; the default sep="" means space or tab, but usually
you should set sep="\t" so that you can use spaces in cells of the table;
− dec="." or dec=",": the decimal separator;
− row.names=…, where … is the number of the column containing the row
names;
− quote=…: the default is that quotes are marked with single or double
quotes, but you should nearly always set quote="";
− comment.char=…: the default is that comments are separated by “#”, but
we will always set comment.char="".

Thus, if you want to read in the above table from the file
<C:/_sflwr/_inputfiles/02-5-2_dataframe1.txt> – once without row names
and once with row names – then this is what you enter on a Windows PC:

> a<-read.table(choose.files(),—header=T,—sep="\t",—quote=""
,—comment.char="")—#—no—row—numbers:—R—numbers—rows¶

15. To do the same in Microsoft Excel, you save the file as a tab-delimited text file.

or

> a<-read.table(choose.files(),—header=T,—sep="\t",—quote=""
,—comment.char="",—row.names=1)—#—with—row—numbers:—
R—does—not—number—rows¶

By entering a¶ or str(a)¶, you can check whether the data frame has
been loaded correctly. If you want to save a data frame from R, then you
use write.table. Its most important arguments are:

− x: the data frame you want to save;


− file: the path to the file into which you wish to save the data frame; on
Windows PCs you can again use choose.files();
− append=F (the default) or append=T: the former generates or overwrites
the defined file, the latter appends the data frame to that file;
− quote=T (the default) or quote=F: the former prints factor levels with
double quotes; the latter prints them without quotes;
− sep="": between the double quotes you put the single character that
delimits columns; the default "—" means a space, but what you should use is
"\t", i.e. tabs;
− eol="\n": between the double quotes you put the single character that
separates lines from each other (eol for end of line); the default "\n"
means newline;
− dec="." (the default): the decimal separator;
− row.names=T (the default) or row.names=F: whether you want row
names or not;
− col.names=T (the default) or col.names=F: whether you want column
names or not.

Given the above default settings and under the assumption that your op-
erating system uses an English locale, these are the two most common ways to
save such data frames: if you have a data frame without row names (i.e., the
first version of a we looked at), you enter the following line:
write.table(a,—choose.files(),—quote=F,—sep="\t")¶. If you have
a data frame with row names (the second version of a we looked at), you
add col.names=NA so that the column names stay in place:

> write.table(x,—file.choose(default="C:/_sflwr/_outputfiles/
02-5-2_dataframe2.txt"),—quote=F,—sep="\t",—
col.names=NA)¶

(Cf. <C:/_sflwr/_outputfiles/02-5-2_dataframe2.txt> for the output).


The default setting for the decimal separator can be changed in OpenOf-
fice.org Calc (Tools: Options: Language Settings: Languages) or in Micro-
soft Excel (Tools: Options: International) or – for the locale – in Windows
(Control panel: Regional and language options).

5.3. Editing data frames in R

In this section, we will discuss how you can access parts of data frames and
then how you can edit and change data frames.
Further below, we will discuss many examples in which you have to
access individual columns or variables of data frames. You can do this in
several ways. The first of these you may have already guessed from look-
ing at how a data frame is shown in R. If you load a data frame with col-
umn names and use str to look at the structure of the data frame, then you
see that the column names are preceded by a “$”. You can use this syntax
to access columns of data frames, as in this example using the file
<C:/_sflwr/_inputfiles/02-5-3_dataframe.txt>.

> rm(list=ls(all=T))—#—clear—memory¶
> a<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> a¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open
4—————CONJ————————————458————————————18—closed
5—————PREP————————————455————————————37—closed
> a$TokenFrequency¶
[1]——421——337—1411——458——455
> a$Class¶
[1]—open———open———open———closed—closed
Levels:—closed—open

You can now use these just like any other vector or factor. For example,
the following line computes token/type ratios of the parts of speech:

> ratio<-a$TokenFrequency/a$TypeFrequency;—ratio¶
[1]——1.553506——3.271845——1.919728—25.444444—12.297297

You can also use indices in square brackets for subsetting. Above, we
discussed how you can access parts of vectors or factors by putting the
position of an element into square brackets. Vectors and factors are one-
dimensional structures, but R allows you to specify arbitrarily complex data
structures. With two-dimensional data structures, you can also use square
brackets, but now you must of course provide values for both dimensions to
identify one or several data points – just like in a two-dimensional coordi-
nate system. This is very simple and the only thing you need to memorize
is the order of the values – rows, then columns – and that the two values are
separated by a comma. Here are some examples:

> a[2,3]—#—the—value—in—row—2—and—column—3¶
[1]—103
> a[2,]—#—the—values—in—row—2,—since—no—column—is—defined¶
——PartOfSp—TokenFrequency—TypeFrequency—Class
2——————ADV————————————337———————————103——open
> a[,3]—#—the—values—in—column—3,—since—no—row—is—defined¶
[1]—271—103—735——18——37
> a[2:3,4]—#—values—2—and—3—of—column—4¶
[1]—open—open
Levels:—closed—open
> a[2:3,3:4]—#—values—2—and—3—of—column—3—and—4¶
——TypeFrequency—Class
2———————————103——open
3———————————735——open

Note that row and column names are not counted. Also note that all
functions applied to vectors above can be used with what you extract out of
a column of a data frame:

> which(a[,2]>450)¶
[1]—3—4—5
> a[,3][which(a[,3]>100)]¶
[1]—271—103—735

The most practical way to access individual columns, however, involves
the function attach (and gets undone with detach). You get no output, but
you can now access any column with its name:

> attach(a)¶
> Class¶
[1]—open———open———open———closed—closed
Levels:—closed—open

Note, however, that you now use ‘copies’ of these variables. You can
change those, but these changes do not affect the data frame a they come
from.

> Class[4]<-NA;—Class¶
[1]—open———open———open———<NA>———closed
Levels:—closed—open
> a¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open
4—————CONJ————————————458————————————18—closed
5—————PREP————————————455————————————37—closed

Let’s change Class back to its original state:

> Class[4]<-"closed"¶

If you want to change the data frame a, then you must make your
changes in a directly, for example with a$TokenFrequency[2]<-338¶ or
a$Class[4]<-NA¶. Given what you have seen in Section 2.4.3, however,
this is only easy with vectors or with factors to which you do not assign a new level –
if you want to add a new factor level, you must define that level first.
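
For example, if you wanted to assign a (hypothetical) new level "mixed" to a$Class, you could proceed as in Section 2.4.3; the following two lines are a sketch of mine, not part of the original example:

> a$Class<-factor(a$Class, levels=c(levels(a$Class), "mixed")) # define the new level first
> a$Class[4]<-"mixed" # now the assignment works without a warning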
Sometimes you will need to investigate only a part of a data frame –
maybe a set of rows, or a set of columns, or a matrix within a data frame.
Also, a data frame may be so huge that you only want to keep one part of it
in memory. As usual, there are several ways to achieve that. One uses in-
dices in square brackets with logical conditions or which. Either you have
already used attach and can use the column names directly

> b<-a[Class=="open",];—b¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open

or not:

> b<-a[a[,4]=="open",];—b¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open

(Of course you can also write b<-a[a$Class=="open",]¶.) That is, you
determine all elements of the column called “Class” / the fourth column
that are “open”, and then you use that information to access the desired
rows (hence the comma before the closing square bracket). There is a more
elegant way to do this, though, the function subset. This function takes
two arguments: the data frame of which you want a subset and the logical
condition(s) describing which subset you want. Thus, the following line
creates the same structure b as above:

> b<-subset(a,—Class=="open")¶

The formulation “condition(s)” already indicates that you can of course
use several conditions at the same time.

> b<-subset(a,—Class=="open"—&—TokenFrequency<1000);—b¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open
> b<-subset(a,—PartOfSp—%in%—c("ADJ",—"ADV"));—b¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open

As I mentioned above, you will usually edit data frames in a spreadsheet
software or, because the spreadsheet software does not allow for as many
rows as you need, in a text editor. For the sake of completeness, let me
mention that R of course also allows you to edit data frames in a spread-
sheet-like format. The function fix takes as argument a data frame and
opens a spreadsheet editor in which you can edit the data frame; you can
even introduce new factor levels without having to define them first. When
you close the editor, R will do that for you.
Finally, let us look at ways in which you can sort data frames. Recall
that the function order creates a vector of positions and that vectors can be
used for sorting. Imagine you wanted to sort the data frame a according
to the column Class (in alphabetically ascending order), and within Class
according to TokenFrequency (in descending order). How can you do that?

THINK
BREAK

One problem here is that the two sorting styles are different: one is
decreasing=F, the other is decreasing=T. What you can do is this:

> order.index<-order(Class,—-TokenFrequency);—order.index¶
[1]—4—5—3—1—2

That is, you do not apply order to TokenFrequency, but to the negative
values of TokenFrequency. Once that is done, you can use the vector
order.index to sort the data frame:

> a[order.index,]¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
4—————CONJ————————————458————————————18—closed
5—————PREP————————————455————————————37—closed
3————————N———————————1411———————————735———open
1——————ADJ————————————421———————————271———open
2——————ADV————————————337———————————103———open

Of course you can do that in just one line:16

> a[order(Class,—-TokenFrequency),]¶

You can now also use the function sample to sort the rows of a data
frame randomly (for example, to randomize tables with experimental items;
cf. above). You first determine the number of rows to be randomized (with
dim) and then combine sample with order:

> no.rows<-dim(a)[1]—#—or—e.g.:—no.rows<-length(a$Class)¶
> order.index<-sample(no.rows);—order.index¶
[1]—1—4—2—3—5
> a[order.index,]¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
1——————ADJ————————————421———————————271———open
4—————CONJ————————————458————————————18—closed
2——————ADV————————————337———————————103———open
3————————N———————————1411———————————735———open
5—————PREP————————————455————————————37—closed

Your data frame will probably be different because we used random
sampling. Once more in just one line:

> a[sample(dim(a)[1]),]¶

But what do you do when you need to sort a data frame according to
several factors – some in ascending and some in descending order? You
can of course not use negative values of factor levels – what would -“open”
be? Thus, you use the function rank, which first rank-orders factor levels,
and then you can use negative values of these ranks:

16. Note that R is superior to many other programs here because the number of sorting
parameters is in principle unlimited.

> order.index<-order(-rank(Class),—-rank(PartOfSp))¶
> a[order.index,]¶
——PartOfSp—TokenFrequency—TypeFrequency——Class
3————————N———————————1411———————————735———open
2——————ADV————————————337———————————103———open
1——————ADJ————————————421———————————271———open
5—————PREP————————————455————————————37—closed
4—————CONJ————————————458————————————18—closed

Now you should do the exercise(s) for Chapter 2 …

Recommendation(s) for further study


− the functions nrow and ncol for the number of rows and columns of a
data frame (as a more direct way than dim(x)[1] and dim(x)[2])
− the argument colClasses of the function read.table to tell R which
columns should be converted to vectors and which to factors
− the functions read.csv and read.csv2 to read in simple Microsoft
Excel files
− the function read.delim to load text files with useful default settings
− the function read.spss to load simple SPSS files
− the functions cbind and rbind to combine vectors and factors in a co-
lumnwise or rowwise way
− the function merge to combine different data frames
− the function with to access columns of a data frame without attach
− the functions NA, is.na, NaN, is.nan, na.action, na.omit, and
na.fail on how to handle missing data
− the function complete.cases to test which rows of a data frame contain
missing data / NA
− the function save to save data structures in a compressed binary format
− Ligges (2005), Crawley (2007), Braun and Murdoch (2008), Spector
(2008), Gentleman (2009), and Gries (2009) for more information on R:
Ligges (2005), Braun and Murdoch (2008), and Gentleman (2009) on R
as a (statistical) programming language, Crawley as a very comprehen-
sive overview, Spector (2008) on data manipulation in R, and Gries
(2009) on corpus-linguistic methods with R
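
As a small taster of some of these functions (nrow, ncol, and with), here is a quick illustration of my own, using the data frame a from above:

> nrow(a); ncol(a) # the numbers of rows and columns of a
[1] 5
[1] 4
> with(a, mean(TokenFrequency)) # access a column without attach
[1] 616.4
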
Chapter 3
Descriptive statistics

In this chapter, I will explain how you describe the results of your study. In
section 3.1, I will discuss univariate statistics, i.e. statistics that summarize
the distribution of one variable, of one vector, of one factor. Section 3.2
then is concerned with bivariate statistics, statistics that characterize the
relation of two variables, two vectors, two factors to each other. Both sec-
tions also introduce ways of representing the data graphically; additional
graphs will be illustrated in Chapter 4.

1. Univariate statistics

1.1. Frequency data

Probably the simplest way to describe the distribution of data points is a
frequency table, i.e. a list that states how often each individual outcome was
observed. In R, generating a frequency table is extremely easy. Let us look
at a psycholinguistic example. Imagine you extracted all occurrences of the
disfluencies uh, uhm, and ‘silence’ and noted for each disfluency whether it
was produced by a male or a female speaker, whether it was produced in a
monolog or in a dialog, and how long in milliseconds the disfluency lasted.
First, we load these data from the file <C:/_sflwr/_inputfiles/03-
1_uh(m).txt>.

> UHM<-read.table(file.choose(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(UHM)¶
> str(UHM)—#—inspect—the—structure—of—the—data—frame¶
'data.frame':———1000—obs.—of——5—variables:
—$—CASE——:—int——1—2—3—4—5—6—7—8—9—10—...
—$—SEX———:—Factor—w/—2—levels—"female","male":—2—1—1—1—2——...
—$—FILLER:—Factor—w/—3—levels—"silence","uh",..:—3—1—1—3——...
—$—GENRE—:—Factor—w/—2—levels—"dialog","monolog":—2—2—1—1—...
—$—LENGTH:—int——1014—1188—889—265—465—1278—671—1079—643—...

To see which disfluency or filler occurs how often, you use the function
table, which creates a frequency list of the elements of a vector or factor:

> table(FILLER)¶
FILLER
silence——————uh—————uhm
————332—————394—————274

If you also want to know the percentages of each disfluency, then you
can either do this rather manually

> table(FILLER)/length(FILLER)¶
FILLER
silence——————uh—————uhm
——0.332———0.394———0.274

… or you use the function prop.table, whose argument is a table generated
with table and which returns the percentages of the frequencies in that
table. (More arguments to prop.table will be discussed below.)

> prop.table(table(FILLER))¶
FILLER
silence——————uh—————uhm
——0.332———0.394———0.274

Often, it is also useful to generate a cumulative frequency table of the
observed values or of the percentages. R has a function cumsum, which
successively adds the values of a vector and returns all sums, which is ex-
emplified in the following two lines:

> 1:5—#—the—values—from—1—to—5¶
[1]—1—2—3—4—5
> cumsum(1:5)—#—cumulative—sums—of—the—values—from—1—to—5¶
[1]——1——3——6—10—15

And of course you can apply cumsum to our tables:

> cumsum(table(FILLER))¶
silence——————uh—————uhm
————332—————726————1000
> cumsum(prop.table(table(FILLER)))¶
silence——————uh—————uhm
——0.332———0.726———1.000

Usually, it is instructive to represent the observed distribution graphically
and the sections below introduce a few graphical formats. For reasons of
space, I can only discuss some ways to tweak graphs, but you can turn to
the documentation of these functions and Murrell (2005) for more info.

1.1.1. Scatterplots and line plots

Before we begin to summarize vectors and factors graphically in groups of
elements, we discuss how the data points of a vector are plotted individually.
The simplest approach just requires the function plot. This is a very
versatile function, which, depending on the arguments you use, creates
many different graphs. (This may be a little confusing at first, but allows
for an economical style of working, as you will see later.) If you provide
just one numerical vector as an argument, then R plots a scatterplot, i.e., a
two-dimensional coordinate system in which the values of the vector are
interpreted as coordinates of the y-axis, and the order in which they appear
in the vector are the coordinates of the x-axis. Here’s an example:

> a<-c(1,—3,—5,—2,—4);—b<-1:5¶
> plot(a)—#—left—panel—of—Figure—18¶

Figure 18. Simple scatterplots

But if you give two vectors as arguments, then the values of the first and
the second are interpreted as coordinates of the x-axis and the y-axis re-
spectively (and the names of the vectors will be used as axis labels):

> plot(a,—b)—#—right—panel—of—Figure—18¶

With the argument type=…, you can specify the kind of graph you want.
The default, which was used because you did not specify anything else, is
type="p" (for points). If you use type="b" (for both), you get points and
lines connecting the points; if you use type="l" (for lines), you get a line
plot; cf. Figure 19. (With type="n", nothing gets plotted into the main
plotting area, but the coordinate system is set up.)

> plot(b,—a,—type="b")—#—left—panel—of—Figure—19¶
> plot(b,—a,—type="l")—#—right—panel—of—Figure—19¶

Figure 19. Simple line plots

Other simple but useful ways to tweak graphs involve defining labels
for the axes (xlab="…" and ylab="…"), a bold heading for the whole graph
(main="…"), the ranges of values of the axes (xlim=… and ylim=…), and the
addition of a grid (grid()¶). With col="…", you can also set the color of
the plotted element, as you will see more often below.

> plot(b,—a,—xlab="A—vector—b",—ylab="A—vector—a",—
xlim=c(0,—8),—ylim=c(0,—8),—type="b")—#—Figure—20¶
> grid()¶

Figure 20. A scatterplot exemplifying a few simple plot settings

An important rule of thumb is that the ranges of the axes must be chosen
in such a way that the distribution of the data is represented most meaning-
fully. It is often useful to include the point (0, 0) within the ranges of the
axes and to make sure that graphs that are to be compared have the same
axis ranges. For example, if you want to compare the ranges of values of
two vectors x and y in two graphs, then you usually may not want to let R
decide on the ranges of axes. Consider the upper panel of Figure 21.

Figure 21. Scatterplots and the importance of properly-defined ranges of axes

The clouds of points look very similar and you only notice the distribu-
tional difference between x and y when you specifically look at the range
of values on the y-axis. The values in the upper left panel range from 0 to 2
but those in the upper right panel range from 0 to 6. This difference be-
tween the two vectors is immediately obvious, however, when you use
ylim=… to manually set the ranges of the y-axes to the same range of val-
ues, as I did for the lower panel of Figure 21.
Note: whenever you use plot, by default a new graph is created and the
old graph is lost. If you want to plot two lines into a graph, you first gener-
ate the first with plot and then add the second one with points (or lines;
sometimes you can also use the argument add=T). That also means that you
must define the ranges of the axes in the first plot in such a way that the
values of the second graph can also be plotted into it. An example will
clarify that point. If you want to plot the points of the vectors m and n, and
then want to add into the same plot the points of the vectors x and y, then
this does not work, as you can see in the left panel of Figure 22.

> m<-1:5;—n<-5:1¶
> x<-6:10;—y<-6:10¶
> plot(m,—n,—type="b");—points(x,—y,—type="b");—grid()¶

Figure 22. Scatterplots and the importance of properly-defined ranges of axes

The left panel of Figure 22 shows the points defined by m and n, but not
those of x and y because the ranges of the axes that R used to plot m and n
are too small for x and y, which is why you must define those manually
while creating the first coordinate system. One way to do this is to use the
function max, which returns the maximum value of a vector (and min re-
turns the minimum). The right panel of Figure 22 shows that this does the
trick. (In this line, the minimum is set to 0 manually – of course, you could
also use min(m,—x) and min(n,—y) for that, but I wanted to include (0, 0)
in the graph.)

> plot(m,—n,—type="b",—xlim=c(0,—max(m,—x)),—ylim=
c(0,—max(n,—y)),—xlab="Vectors—m—and—x",—
ylab="Vectors—n—and—y");—grid()¶
> points(x,—y,—type="b")¶

Recommendation(s) for further study


the functions pmin and pmax to determine the minima and maxima at each
position of different vectors (try pmin(c(1,—5,—3),—c(2,—4,—6))¶)
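
In case you want to check your result (my addition): pmin returns the smaller and pmax the larger value at each position:

> pmin(c(1, 5, 3), c(2, 4, 6))
[1] 1 4 3
> pmax(c(1, 5, 3), c(2, 4, 6))
[1] 2 5 6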

1.1.2. Pie charts

The function to generate a pie chart is pie. Its most important argument is a
table generated with table. You can either just leave it at that or, for ex-
ample, change category names with labels=… or use different colors with
col=… etc.:

> pie(table(FILLER),—col=c("grey20",—"grey50",—"grey80"))¶

Figure 23. A pie chart with the frequencies of disfluencies

1.1.3. Bar plots

To create a bar plot, you can use the function barplot. Again, its most
important argument is a table generated with table and again you can
create either a standard version or more customized ones. If you want to
define your own category names, you unfortunately must use
names.arg=…, not labels=… (cf. Figure 24 below).
An interesting way to configure bar plots is to use space=0 to have the
bars be immediately next to each other. That is of course not exactly mind-
blowing in itself, but it is one way to make it easier to add further data to
the plot. For example, you can then easily plot the observed frequencies
into the middle of each bar using the function text. The first argument of
text is a vector with the x-axis coordinates of the text to be printed (0.5 for
the first bar, 1.5 for the second bar, and 2.5 for the third bar), the second
argument is a vector with the y-axis coordinates of that text (half of each
observed frequency), and labels=… provides the text to be printed.

> barplot(table(FILLER))—#—left—panel—of—Figure—24¶
> barplot(table(FILLER),—col=c("grey20",—"grey40",—
"grey60"),—names.arg=c("Silence",—"Uh",—"Uhm"))—#—
right—panel—of—Figure—24¶

Figure 24. Bar plots with the frequencies of disfluencies

> barplot(table(FILLER),—col=c("grey40",—"grey60",—"grey80")
,—names.arg=c("Silence",—"Uh",—"Uhm"),—space=0)¶
> text(c(0.5,—1.5,—2.5),—table(FILLER)/2,—labels=
table(FILLER))¶

Figure 25. Bar plot with the frequencies of disfluencies

The functions plot and text allow for another interesting and easy-to-
understand graph: first, you generate a plot that contains nothing but the
axes and their labels (with type="n", cf. above), and then with text you
plot not points, but words or numbers. Try this:

> plot(c(394,—274,—332),—type="n",—xlab="Disfluencies",—ylab=
"Observed—frequencies",—xlim=c(0,—4),—ylim=c(0,—500))¶

> text(1:3,—c(394,—274,—332),—labels=c("uh",—"uhm",—
"silence"))¶

This probably already illustrates the potential of this kind of graph, of
which you will see further examples below.

Recommendation(s) for further study


the parameter settings cex, srt, col, and font to tweak the size, rotation,
color, and font type of text you plot: ?par¶. Also, the function segments
can be very useful to plot lines
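
A minimal sketch of my own showing how these settings and segments can be combined (the coordinates are arbitrary):

> plot(1:10, 1:10, type="n", xlab="", ylab="") # an empty coordinate system
> text(5, 5, labels="some rotated grey text", srt=45, cex=1.5, col="grey40", font=2)
> segments(2, 2, 8, 2) # a line from (2, 2) to (8, 2)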

1.1.4. Pareto-charts

A related way to represent the frequencies of the disfluencies is a pareto-
chart. In pareto-charts, the frequencies of the observed categories are
represented as in a bar plot, but they are first sorted in descending order of
frequency and then overlaid by a line plot of cumulative percentages that
indicates what percent of all data one category and all other categories to
the left of that category account for. The function pareto.chart comes
with the library qcc that you must (install and/or) load first; cf. Figure 26.

Figure 26. Pareto-chart with the frequencies of disfluencies



> library(qcc)¶
> pareto.chart(table(FILLER))¶
Pareto—chart—analysis—for—table(FILLER)
——————————Frequency—Cum.Freq.—Percentage—Cum.Percent.
——uh————————————394———————394———————39.4—————————39.4
——silence———————332———————726———————33.2—————————72.6
——uhm———————————274——————1000———————27.4————————100.0

1.1.5. Histograms

While pie charts and bar plots are probably the most frequent forms of
representing the frequencies of nominal/categorical variables (many people
advise against pie charts, though, because humans are notoriously bad at
inferring proportions from angles in charts), histograms are most wide-
spread for the frequencies of interval/ratio variables. In R, you can use the
function hist, which just requires the relevant vector as its argument.

> hist(LENGTH)—#—standard—graph¶

For some ways to make the graph nicer, cf. Figure 27, whose left panel
contains a histogram of the variable LENGTH with axis labels and grey bars:

Figure 27. Histograms for the frequencies of lengths of disfluencies

> hist(LENGTH,—main="",—xlab="Length—in—ms",—ylab=
"Frequency",—xlim=c(0,—2000),—ylim=c(0,—100),—
col="grey80")¶

The right panel of Figure 27 contains a histogram of the probability
densities (generated by freq=F) with a corresponding curve:

> hist(LENGTH,—main="",—xlab="Length—in—ms",—ylab="Density",—
freq=F,—xlim=c(0,—2000),—col="grey50")¶
> lines(density(LENGTH))¶

With the argument breaks=…, you can instruct R to try to use a particu-
lar number of bins (or bars). You either provide one integer – then R tries
to create a histogram with as many groups – or you provide a vector with
the boundaries of the bins. This raises the question of how many bins
should or may be chosen. In general, you should not have more than 20
bins, and as a rule of thumb for the number of bins to choose you can use
the formula in (14). The most important aspect, though, is that the bins you
choose do not misrepresent the actual distribution.

(14) Number of bins for a histogram of n data points = 1+3.32·log10 n
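
For the 1,000 lengths in LENGTH, this rule of thumb suggests about eleven bins (a quick check of mine):

> 1+3.32*log10(length(LENGTH)) # n = 1000
[1] 10.96
> hist(LENGTH, breaks=11) # ask R for roughly that many bins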

Recommendation(s) for further study


− the functions dotchart and stripchart (with method="jitter") to
represent the distribution of individual data points
− the function scatterplot (from the library(car)) for more sophisti-
cated scatterplots
− the function scatterplot3d (from the library(scatterplot3d)) for
3-dimensional scatterplots
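
For example, the following line (my own illustration) plots every single length as one jittered point:

> stripchart(LENGTH, method="jitter", xlab="Length in ms")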

1.2. Measures of central tendency

Measures of central tendency are probably the most frequently used statis-
tics. They provide a value that attempts to summarize the behavior of a
variable. Put differently, they answer the question, if I wanted to summar-
ize this variable and were allowed to use only one number to do that, which
number would that be? Crucially, the choice of a particular measure of
central tendency depends on the variable’s level of measurement. For no-
minal/categorical variables, you should use the mode (if you do not simply
list all frequencies anyway), for ordinal variables you should use the me-
dian, for interval/ratio variables you can usually use the arithmetic mean.

1.2.1. The mode

The mode of a variable or distribution is the value that is most often observed.
As far as I know, there is no function for the mode in R, but you
can find the mode easily. For example, the mode of FILLER is uh:

> which.max(table(FILLER))¶
uh
—2
> max(table(FILLER))¶
[1]—394
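
If you want only the name of the mode rather than its position or frequency, you can wrap this into names (my addition):

> names(which.max(table(FILLER)))
[1] "uh"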

1.2.2. The median

The measure of central tendency for ordinal data is the median, the value
you obtain when you sort all values of a distribution according to their size
and then pick the middle one. The median of the numbers from 1 to 5 is 3,
and if you have an even number of values, the median is the average of the
two middle values.

> median(LENGTH)¶
[1]—897

1.2.3. The arithmetic mean

The best-known measure of central tendency is the arithmetic mean for
interval/ratio variables. You compute it by adding up all values of a distri-
bution or a vector and dividing that sum by the number of values, but of
course there is also a function for this:

> sum(LENGTH)/length(LENGTH)¶
[1]—915.043
> mean(LENGTH)¶
[1]—915.043

One weakness of the arithmetic mean is its sensitivity to outliers:

> a<-1:10;—a¶
[1]——1——2——3——4——5——6——7——8——9—10
> b<-c(1:9,—1000);—b¶
[1]————1————2————3————4————5————6————7————8————9—1000
> mean(a)¶
[1]—5.5
> mean(b)¶
[1]—104.5

Although the vectors a and b differ with regard to only a single value,
the mean of b is much larger than that of a because of that one outlier, in
fact so much larger that b’s mean of 104.5 neither summarizes the values
from 1 to 9 nor the value 1000 very well. There are two ways of handling
such problems. First, you can add the argument trim=… to mean: the per-
centage of elements from the top and the bottom of the distribution that are
discarded before the mean is computed. The following lines compute the
means of a and b after the highest and the lowest value have been dis-
carded:

> mean(a,—trim=0.1)¶
[1]—5.5
> mean(b,—trim=0.1)¶
[1]—5.5

Second, you can just use the median:

> median(a)¶
[1]—5.5
> median(b)¶
[1]—5.5

Using the median is also a good idea if the data whose central tendency
you want to report are not normally distributed.

Warning/advice
Just because R or your spreadsheet software can return many decimals does
not mean you have to report them all. Use a number of decimals that makes
sense given the statistic that you report.

Recommendation(s) for further study


Good and Hardin (2006: 49f.) for additional measures of central tendency
for asymmetric distributions

1.2.4. The geometric mean

The geometric mean is used to compute averages of factors or ratios (whereas
the arithmetic mean is computed to get the average of sums). Let’s
assume you have six recordings of a child at the ages 2;1 (two years and
one month), 2;2, 2;3, 2;4, 2;5, and 2;6. Let us also assume you had a vector
lexicon that contains the cumulative numbers of different words (types!)
that the child produced at each age:

> lexicon<-c(132,—158,—169,—188,—221,—240)¶
> names(lexicon)<-
c("2;1",—"2;2",—"2;3",—"2;4",—"2;5",—"2;6")¶

You now want to know the average rate at which the lexicon increased.
First, you compute the successive increases:

> increases<-lexicon[2:6]/lexicon[1:5];—increases¶
—————2;2——————2;3——————2;4——————2;5——————2;6
1.196970—1.069620—1.112426—1.175532—1.085973

That is, by age 2;2, the child produced 19.697% more types than by age
2;1, until age 2;3, the child produced 6.962% more types than by age 2;2,
etc. Now, you must not think that the average rate of increase of the lexicon
is the arithmetic mean of these increases:

> mean(increases)—#—wrong!¶
[1]—1.128104

You can easily test that this is not the correct result. If this number was
the true average rate of increase, then the product of 132 (the first lexicon
size) and this rate of 1.128104 to the power of 5 (the number of times the
supposed ‘average rate’ applies) should be the final value of 240. This is
not the case:

> 132*mean(increases)^5¶
[1]—241.1681

Instead, you must compute the geometric mean. The geometric mean of
a vector x with n elements is computed according to formula (15):

(15) meangeom = (x1 · x2 · … · xn-1 · xn)^(1/n)

That is:

> rate.increase<-prod(increases)^(1/length(increases));—
rate.increase¶
[1]—1.127009

If you use this value as the average rate of increase, you get the desired
result:

> 132*rate.increase^5¶
[1]—240

True, the difference between 240 – the correct value – and 241.1681 –
the incorrect value – may seem negligible, but 241.1681 is still wrong and
the difference is not always that small, as an example from Wikipedia (s.v.
geometric mean) illustrates: If you do an experiment and get an increase
rate of 10,000 and then you do a second experiment and get an increase rate
of 0.0001 (i.e., a decrease), then the average rate of increase is not approx-
imately 5,000 – the arithmetic mean of the two rates – but 1 – their geome-
tric mean.17
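
You can verify this quickly in R (my own check): the geometric mean of the two rates is the square root of their product, whereas their arithmetic mean is about 5,000:

> sqrt(10000*0.0001) # the geometric mean of the two rates
[1] 1
> (10000+0.0001)/2 # the arithmetic mean, approximately 5,000
[1] 5000
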
Finally, let me again point out how useful it can be to plot words or
numbers instead of points, triangles, … Try to generate Figure 28, in which
the position of each word on the y-axis corresponds to the average length of
the disfluency (e.g., 928.4 for women, 901.6 for men, etc.). (The horizontal
line is the overall average length – you may not know yet how to plot that
one.) Many tendencies are immediately obvious: men are below the aver-
age, women are above, silent disfluencies are of about average length, etc.

Figure 28. Mean lengths of disfluencies

17. Alternatively, you can compute the geometric mean of increases as follows:
exp(mean(log(increases)))¶.

1.3. Measures of dispersion

Most people know what measures of central tendency are. What many
people do not know is that you should never – NEVER! – report a measure
of central tendency without a corresponding measure of dispersion. The
reason for this rule is that without such a measure of dispersion you never
know how good the measure of central tendency is at summarizing the
data. Let us look at a non-linguistic example, the monthly temperatures of
two towns and their averages:

> town1<-c(-5,—-12,—5,—12,—15,—18,—22,—23,—20,—16,—8,—1)¶
> town2<-c(6,—7,—8,—9,—10,—12,—16,—15,—11,—9,—8,—7)¶
> mean(town1)¶
[1]—10.25
> mean(town2)¶
[1]—9.833333

On the basis of the means alone, the towns seem to have a very similar
climate, but even a quick glance at Figure 29 shows that that is not true – in
spite of the similar means, I know where I would want to be in February.

Figure 29. Temperature curves of two towns



Obviously, the mean of Town 2 summarizes the central tendency of
Town 2 much better than the mean of Town 1 does for Town 1: the values
of Town 1 vary much more widely around their mean. Thus, always pro-
vide a measure of dispersion for your measure of central tendency: relative
entropy for the mode, the interquartile range or quantiles for the median,
the standard deviation or the variance or quantiles for interval/ratio-scaled
data.

1.3.1. Relative entropy

A simple measure for categorical data is relative entropy Hrel. Hrel is 1 when
the levels of the relevant categorical variable are all equally frequent, and it
is 0 when all data points have the same variable level. For categorical va-
riables with n levels, Hrel is computed as shown in formula (16), in which pi
corresponds to the frequency in percent of the i-th level of the variable:

(16) Hrel = −( ∑i=1…n (pi ⋅ ln pi) ) / ln n

Thus, if you count the articles of 300 noun phrases and find 164 cases
with no determiner, 33 indefinite articles, and 103 definite articles, this is
how you compute Hrel:

> article<-c(164,—33,—103)¶
> perc<-article/sum(article)¶
> hrel<--sum(perc*log(perc))/log(length(perc));—hrel¶
[1]—0.8556091

The distribution of articles is rather heterogeneous. It is worth pointing
out that the above formula does not produce the desired result of 0 when
only no-determiner cases are observed:

> article<-c(300,—0,—0)¶
> perc<-article/sum(article)¶
> hrel<--sum(perc*log(perc))/log(length(perc));—hrel¶
[1]—NaN

This is because the logarithm of 0 is not defined. Usually, this is taken
care of by simply setting the result of log(0) to zero (or sometimes also by
incrementing all values by 1 before logging). While I do not talk at all
about how to write your own functions here, this is a case where this can be
useful: you could simply define your own logarithm function logw0 and
then use that function to get the desired result:

> logw0<-function(x)—ifelse(x>0,—log(x),—0)¶
> hrel<--sum(perc*logw0(perc))/logw0(length(perc));—hrel¶
[1]—0

Distributions of categorical variables will be dealt with in much more
detail below in Section 4.1.1.2.

1.3.2. The range

The simplest measure of dispersion for interval/ratio data is the range, the
difference of the largest and the smallest value. You can either just use the
function range, which requires the vector in question as its only argument,
and then compute the difference from the two values, or you just compute
the range from the minimum and maximum yourself:

> range(LENGTH)¶
[1]——251—1600
> diff(range(LENGTH))—#—diff—computes—pairwise—differences¶
[1]—1349
> max(LENGTH)-min(LENGTH)¶
[1]—1349

This measure is extremely simple to compute but obviously also very
sensitive: one outlier is enough to yield results that are not particularly
meaningful anymore. For this reason, the range is not used very often.

1.3.3. Quantiles and quartiles

Another very simple, but very useful measure of dispersion involves the
quantiles of a distribution. You compute quantiles by sorting the values in
ascending order and then counting which values delimit the lowest x%, y%,
etc. of the data; when these percentages are 25%, 50%, and 75%, then they
are called quartiles. In R you can use the function quantile, and the fol-
lowing example makes all this clear:

> a<-1:100—#—a—vector—with—the—numbers—from—1—to—100¶
> quantile(a,—type=1)¶
——0%——25%——50%——75%—100%
———1———25———50———75——100

If you write the integers from 1 to 100 next to each other, then 25 is the
value that cuts off the lower 25%, etc. The value for 50% corresponds to
the median, and the values for 0% and 100% are the minimum and the
maximum. Let me briefly mention two arguments of this function. First,
the argument probs allows you to specify other percentages:

> quantile(a,—probs=c(0.05,—0.1,—0.5,—0.9,—0.95),—type=1)¶
—5%—10%—50%—90%—95%
——5——10——50——90——95

Second, the argument type=… allows you to choose other ways in which
quantiles are computed. For discrete distributions, type=1 is probably best,
for continuous variables the default setting type=7 is best. The bottom line
of course is that the more the 25% quartile and the 75% quartile differ from
each other, the more heterogeneous the data are, which is confirmed by
looking at the data for the two towns: the so-called interquartile range – the
difference between the 75% quartile and the 25% quartile – is much larger
for Town 1 than for Town 2.

> quantile(town1)¶
———0%———25%———50%———75%——100%
-12.0———4.0——13.5——18.5——23.0
> IQR(town1)—#—the—function—for—the—interquartile—range¶
[1]—14.5
> quantile(town2)¶
———0%———25%———50%———75%——100%
—6.00——7.75——9.00—11.25—16.00
> IQR(town2)¶
[1]—3.5

You can now apply this function to the lengths of the disfluencies:

> quantile(LENGTH,—probs=c(0.2,—0.4,—0.5,—0.6,—0.8,—1),—
type=1)¶
—20%——40%——50%——60%——80%—100%
—519——788——897—1039—1307—1600

That is, the central 20% of all the lengths of disfluencies are between
788 and 1,039, 20% of the lengths are smaller than or equal to 519, 20% of
the values are larger than 1,307, etc.

An interesting application of quantile is to use it to split vectors of
continuous variables up into groups. For example, if you wanted to split the
vector LENGTH into five groups of nearly equal ranges of values, you can
use the function cut, which splits up vectors into groups, and the function
quantile, which tells cut what the groups should look like. That is, there
are 200 values of LENGTH between and including 251 and 521 etc.

> LENGTH.GRP<-cut(LENGTH,—breaks=quantile(LENGTH,—probs=
c(0,—0.2,—0.4,—0.6,—0.8,—1)),—include.lowest=T)¶
> table(LENGTH.GRP)¶
LENGTH.GRP
——————————[251,521]———————————(521,789]——————(789,1.04e+03]
————————————————200—————————————————200—————————————————200
(1.04e+03,1.31e+03]——(1.31e+03,1.6e+03]
————————————————203—————————————————197

1.3.4. The average deviation

Another way to characterize the dispersion of a distribution is the average
deviation. You compute the absolute difference of every data point from
the mean of the distribution (cf. abs), and then you compute the mean of
these absolute differences. For Town 1, the average deviation is 9.04:

> town1¶
—[1]——-5—-12———5——12——15——18——22——23——20——16———8———1
> town1-mean(town1)¶
—[1]—-15.25—-22.25——-5.25———1.75———4.75———7.75——11.75——
12.75——9.75———5.75——-2.25——-9.25
> abs(town1-mean(town1))¶
—[1]—15.25—22.25——5.25——1.75——4.75——7.75—11.75—12.75——
9.75——5.75——2.25——9.25
> mean(abs(town1-mean(town1)))¶
[1]—9.041667
> mean(abs(town2-mean(town2)))¶
[1]—2.472222

For the lengths of the disfluencies, we obtain:

> mean(abs(LENGTH-mean(LENGTH)))¶
[1]—329.2946

Although this is a very intuitively accessible measure, it is hardly used
anymore. Instead, you will most often find the dispersion measure dis-
cussed in the next section, the standard deviation.

1.3.5. The standard deviation

The standard deviation sd of a distribution x with n elements is defined in
(17). This may look difficult at first, but the standard deviation is con-
ceptually similar to the average deviation. For the average deviation, you
compute the difference of each data point to the mean and take its absolute
value, for the standard deviation you compute the difference of each data
point to the mean and square it (and after the division by n-1, you take the
square root).
(17)   $sd = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$

Once we ‘translate’ this into R, it probably becomes much clearer:

> town1¶
—[1]——-5—-12———5——12——15——18——22——23——20——16———8———1
> town1-mean(town1)¶
—[1]—-15.25—-22.25——-5.25———1.75———4.75———7.75——11.75——
12.75———9.75———5.75——-2.25——-9.25
> (town1-mean(town1))^2¶
—[1]—232.5625—495.0625——27.5625———3.0625——22.5625——60.0625—
138.0625—162.5625——95.0625——33.0625———5.0625——85.5625
> sum((town1-mean(town1))^2)—#—the—numerator¶
[1]—1360.25
> sum((town1-mean(town1))^2)/(length(town1)-1)¶
[1]—123.6591
> sqrt(sum((town1-mean(town1))^2)/(length(town1)-1))¶
[1]—11.12021

There is of course an easier way …

> sd(town1)¶
[1]—11.12021
> sd(town2)¶
[1]—3.157483

Note in passing: the standard deviation is the square root of another
measure, the variance, which you can also compute with the function var.
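
For instance, the following sketch shows that squaring the standard deviation returns the variance computed above:

> var(town1)
[1] 123.6591
> sd(town1)^2
[1] 123.6591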

Recommendation(s) for further study


the function mad to compute another very robust measure of dispersion, the
median absolute deviation

1.3.6. The variation coefficient

Even though the standard deviation is probably the most widespread meas-
ure of dispersion, it has one potential weakness: its size is dependent on the
mean of the distribution, as you can immediately recognize in the following
example:

> sd(town1)¶
[1]—11.12021
> sd(town1*10)¶
[1]—111.2021

When the values, and hence the mean, are increased by one order of
magnitude, then so is the standard deviation. You can therefore not com-
pare standard deviations from distributions with different means if you do
not first normalize them. If you divide the standard deviation of a distribu-
tion by its mean, you get the variation coefficient:

> sd(town1)/mean(town1)¶
[1]—1.084899
> sd(town1*10)/mean(town1*10)—#—now—you—get—the—same—value¶
[1]—1.084899
> sd(town2)/mean(town2)¶
[1]—0.3210999

You see that the variation coefficient is not affected by the multiplica-
tion with 10, and Town 1 still has a larger degree of dispersion.

Recommendation(s) for further study


− the functions skewness (from the library(e1071)), skewness (from
the library(fBasics)), basicStats (from the library(fBasics)),
skewness and agostino.test (from the library(moments)) to com-
pute the so-called skewness
− the functions kurtosis (from the library(e1071)) and kurtosis
(from the library(fBasics)) to compute the so-called kurtosis

1.3.7. Summary functions


If you want to obtain several summarizing statistics for a vector (or a fac-
tor), you can also use the function summary, whose output should be self-
explanatory.

> summary(town1)¶
———Min.—1st—Qu.——Median————Mean—3rd—Qu.————Max.
—-12.00————4.00———13.50———10.25———18.50———23.00

Recommendation(s) for further study


the function fivenum for a different kind of summary

An immensely useful graph is the so-called boxplot. In its simplest
form, the function boxplot just requires one vector as an argument, but we
also add notch=T (recall, T is the short form for TRUE), which I will explain
shortly, as well as a line that adds little plus signs for the arithmetic means.

> boxplot(town1,—town2,—notch=T,—names=c("Town—1",—
"Town—2"))¶
> text(1:2,—c(mean(town1),—mean(town2)),—c("+",—"+"))¶

Figure 30. Boxplot of the temperatures of the two towns

This plot contains a lot of valuable information:

− the bold-typed horizontal lines represent the medians of the two vectors;
− the regular horizontal lines that make up the upper and lower boundary
of the boxes represent the hinges (approximately the 75%- and the 25%
quartiles);
− the whiskers – the dashed vertical lines extending from the box until the
upper and lower limit – represent the largest and smallest values that are
not more than 1.5 interquartile ranges away from the box;
− each outlier that would be outside of the range of the whiskers would be
represented with an individual dot;
− the notches on the left and right sides of the boxes extend across the
range ±1.58*IQR/sqrt(n): if the notches of two boxplots overlap, then
these will most likely not be significantly different.

Figure 30 shows that the average temperatures of the two towns are very
similar and not significantly different from each other. Also, you can see
that the dispersion of Town 1 is much larger than that of Town 2. Some-
times, a good boxplot nearly obviates the need for further analysis, which is
why they are extremely useful and will often be used in the chapters to
follow.

Recommendation(s) for further study


the functions hdr.boxplot (from the library(hdrcde)), vioplot (from
the library(vioplot)), and bpplot (from the library(Hmisc)) for inter-
esting alternatives to, or extensions of, boxplots

1.3.8. The standard error

The standard error of an arithmetic mean is defined as the standard devia-
tion of the means of equally large samples drawn randomly from a popula-
tion with replacement. Imagine you took a sample from a population and
computed the arithmetic mean of some variable. This mean will most likely
not correspond exactly to the arithmetic mean of that variable in the popu-
lation, and it will also most likely not correspond exactly to the arithmetic
mean you would get from another equally large sample from the same pop-
ulation. If you took many random and equally large samples from the popu-
lation with replacement and computed the arithmetic mean of each of them,
then the standard deviation of all these means is the standard error. The
standard error of an arithmetic mean is computed according to the formula
in (18), and from (18) you can already see that the larger the standard error
of a mean, the smaller the likelihood that that mean is a good estimate of
the population mean.

(18)   $SE_{mean} = \sqrt{\frac{var}{n}} = \frac{sd}{\sqrt{n}}$

Thus, the standard error of the mean length of disfluencies in our exam-
ple is:

> mean(LENGTH)¶
[1]—915.043
> sqrt(var(LENGTH)/length(LENGTH))—#—or—sd(LENGTH)/
sqrt(length(LENGTH))¶
[1]—12.08127

This also means that the larger the sample size n, the smaller the standard error becomes.
You can also compute standard errors for statistics other than arithmetic means, but the only other example we look at here is the standard error of a relative frequency p, which is computed according to the formula in (19):

(19)   $SE_{percentage} = \sqrt{\frac{p \cdot (1-p)}{n}}$

Thus, the standard error of the percentage of all silent disfluencies out
of all disfluencies (33.2%) is:

> prop.table(table(FILLER))¶
FILLER
silence——————uh—————uhm
——0.332———0.394———0.274
> sqrt(0.332*(1-0.332)/1000)¶
[1]—0.01489215

Standard errors will be much more important in Section 3.1.5 because they are used to compute so-called confidence intervals. In Chapter 5 you will also get to see standard errors of differences of means, which are com-
puted according to the formula in (20).

(20)   $SE_{difference\;between\;means} = \sqrt{SE_{mean\;group1}^{2} + SE_{mean\;group2}^{2}}$
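
Translated into R for the two towns from above, this could look as follows (just a sketch; each vector contains twelve monthly values):

> se1<-sd(town1)/sqrt(length(town1)) # standard error of the mean of town1
> se2<-sd(town2)/sqrt(length(town2)) # standard error of the mean of town2
> sqrt(se1^2+se2^2) # standard error of the difference between the means; approximately 3.34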

Warning/advice
Standard errors are only really useful if the data to which they are applied
are distributed normally or when the sample size n ≥ 30.

1.4. Centering and standardization (z-scores)

Very often it is useful or even necessary to compare values coming from
different scales. An example (from Bortz 2005): if a student X scored 80%
in a course and a student Y scored 60% in another course, can you then say
that student X was better than student Y? On the one hand, sure you can:
80% is better than 60%. On the other hand, the test in which student Y
participated could have been much more difficult than the one in which
student X participated. It can therefore be useful to relativize/normalize the
individual grades of the two students on the basis of the overall perfor-
mance of students in their courses. (You encountered a similar situation
above in Section 3.1.3.6 when you learned that it is not always appropriate
to compare different standard deviations directly.) Let us assume the grades
obtained in the two courses look as follows:

> grades.course.X<-rep((seq(0,—100,—20)),—1:6);—
grades.course.X¶
—[1]———0——20——20——40——40——40——60——60——60——60——80——80——80——80—
—80—100—100—100—100—100—100
> grades.course.Y<-rep((seq(0,—100,—20)),—6:1);—
grades.course.Y¶
—[1]———0———0———0———0———0———0——20——20——20——20——20——40——40——40—
—40——60——60——60——80——80—100

One way to normalize the grades is called centering and simply in-
volves subtracting from each individual value within one course the aver-
age of that course.

> a<-1:5¶
> centered.scores<-a-mean(a);—centered.scores¶
[1]—-2—-1——0——1——2

You can see how these scores relate to the original values in a: since the
mean of a is obviously 3, the first two centered scores are negative (they
are smaller than a’s mean), the third is 0 (it does not deviate from the mean
of a), and the last two centered scores are positive (they are larger than the
mean of a).
Another more sophisticated way involves standardizing, i.e. trans-
forming the values to be compared into so-called z-scores, which indicate
how many standard deviations each value deviates from the mean of the
vector. The z-score of a value from a vector is the difference of that value
from the mean of the vector, divided by the vector’s standard deviation.
You can compute that manually as in this simple example:

> z.scores<-(a-mean(a))/sd(a);—z.scores¶
[1]—-1.2649111—-0.6324555——0.0000000——0.6324555——1.2649111

The relationship between the z-scores and a’s original values is very
similar to that between the centered scores and a’s values: since the mean
of a is obviously 3, the first two z-scores are negative (they are smaller than
a’s mean), the third z-score is 0 (it does not deviate from the mean of a),
and the last two z-scores are positive (they are larger than the mean of a).
Note that the z-scores have a mean of 0 and a standard deviation of 1:

> mean(z.scores)¶
[1]—0
> sd(z.scores)¶
[1]—1

Both normalizations can be performed with the function scale, which
takes three arguments, the vector to be normalized, center=… (the default is
T) and scale=… (the default is T). If you do not provide any arguments oth-
er than the vector to be standardized, then scale’s default setting returns a
matrix that contains the z-scores and the attributes of which correspond to
the mean and the standard deviation of the vector:

> scale(a)¶
———————————[,1]
[1,]—-1.2649111
[2,]—-0.6324555
[3,]——0.0000000
[4,]——0.6324555
[5,]——1.2649111
attr(,"scaled:center")
[1]—3
attr(,"scaled:scale")
[1]—1.581139

If you set scale to F, (or FALSE), then you get centered scores:

> scale(a,—scale=F)¶
—————[,1]
[1,]———-2
[2,]———-1
[3,]————0
[4,]————1
[5,]————2
attr(,"scaled:center")
[1]—3

If we apply both versions to our example with the two courses, then you
see that the 80% scored by student X is only 0.436 standard deviations and
13.33 percent points better than the mean of his course whereas the 60%
scored by student Y is actually 0.873 standard deviations and 26.67 percent
points above the mean of his course. Thus, X’s score is higher than Y’s
score, but if we include the overall results in the two courses, then Y’s per-
formance is better. It is therefore often useful to standardize data in this
way.
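
If you want to verify these numbers yourself, a minimal sketch using the two grade vectors from above could look like this (of course, scale(grades.course.X) etc. would give you all z-scores at once):

> (80-mean(grades.course.X))/sd(grades.course.X) # z-score of student X
[1] 0.4364358
> (60-mean(grades.course.Y))/sd(grades.course.Y) # z-score of student Y
[1] 0.8728716
> 80-mean(grades.course.X) # centered score of student X
[1] 13.33333
> 60-mean(grades.course.Y) # centered score of student Y
[1] 26.66667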

1.5. Confidence intervals

In most cases, you will not be able to investigate the whole population you
are actually interested in, e.g., because that population is too large and in-
vestigating it would be too time-consuming and/or expensive. However,
even though you also know that different samples will yield different statis-
tics, you of course hope that your sample would yield a reliable estimate
that tells you much about the population you are interested in:

− if you find in a sample of 1000 disfluencies that their average length is
approximately 915 ms, then you hope that you can generalize from that
to the population and future investigations;
− if you find in a sample of 1000 disfluencies that 33.2% of these are si-
lences, then you hope that you can generalize from that to the popula-
tion and future investigations.

So far, we have only discussed how you can compute percentages and
means for samples – the question of how valid these are for populations is
the topic of this section. In Section 3.1.5.1, I explain how you can compute
confidence intervals for arithmetic means, and Section 3.1.5.2 explains how
to compute confidence intervals for percentages. The relevance of such
confidence intervals must not be underestimated: without a confidence
interval it is unclear how well you can generalize from a sample to a popu-
lation; apart from the statistics we discuss here, one can also compute con-
fidence intervals for many other statistics.

1.5.1. Confidence intervals of arithmetic means

If you compute a mean on the basis of a sample, you of course hope that it
represents that of the population well. As you know, the average length of
disfluencies in our example data is 915.043 ms (standard deviation:
382.04). But as we said above, other samples’ means will be different so
you would ideally want to quantify your confidence in this estimate. The
so-called confidence interval, which you should provide most of the time
together with your mean, is the interval of values around the sample mean
around which we accept there is no significant difference with the sample
mean. From the expression “significant difference”, it already follows that
a confidence interval is typically defined as 1-significance level, i.e., usual-
ly as 1-0.05 = 0.95, and the logic is that “if we derive a large number of
95% confidence intervals, we can expect the true value of the parameter [in
the population] to be included in the computed intervals 95% of the time”
(Good and Hardin 2006:111).
In a first step, you again compute the standard error of the arithmetic
mean according to the formula in (18).

> se<-sqrt(var(LENGTH)/length(LENGTH));—se¶
[1]—12.08127

This standard error is used in (21) to compute the confidence interval.
The parameter t in formula (21) refers to the distribution mentioned in Sec-
tion 1.3.4.2, and its computation requires the number of degrees of free-
dom. In this case, the number of degrees of freedom df is the length of the
vector-1, i.e. 999. Since you want to compute a t-value on the basis of a p-
value, you need the function qt, and since you want a two-tailed interval –
95% of the values around the observed mean, i.e. values larger and smaller
than the mean – you must compute the t-value for 2.5% (because 2.5% on
both sides result in the desired 5%):

(21)   $CI = \bar{x} \pm t \cdot SE$

> t<-qt(0.025,—df=999,—lower.tail=F);—t¶
[1]—1.962341

Now you can compute the confidence interval:

> mean(LENGTH)-(se*t)¶
[1]—891.3354
> mean(LENGTH)+(se*t)¶
[1]—938.7506

This 95% confidence interval means that the true population means that
could have generated the sample mean (of 915.043) with a probability of
95% are between 891.34 and 938.75; the limits for the 99% confidence
interval are 883.86 and 946.22 respectively.
To do this more simply, you can use the function t.test with the rele-
vant vector and use conf.level=… to define the relevant percentage. R then
computes a significance test the details of which are not relevant yet, which
is why we only look at the confidence interval (with $conf.int):

> t.test(LENGTH,—conf.level=0.95)$conf.int¶
[1]—891.3354—938.7506
attr(,"conf.level")
[1]—0.95
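
The 99% confidence interval mentioned above can be obtained in the same way by simply changing the conf.level argument (a sketch):

> t.test(LENGTH, conf.level=0.99)$conf.int # returns the limits given above, 883.86 and 946.22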

When you compare two means and their confidence intervals do not
overlap, then the sample means are significantly different and, therefore,
you would assume that there is a real difference between the population
means, too. Note however that means can be significantly different from
each other even when their confidence intervals overlap (cf. Crawley 2005:
169f.).

1.5.2. Confidence intervals of percentages

The above logic with regard to means also applies to percentages. Given a
particular percentage from a sample, you want to know what the corres-
ponding percentage in the population is. As you already know, the percen-
tage of silent disfluencies in our sample is 33.2%. Again, you would like to
quantify your confidence in that sample percentage. As above, you com-
pute the standard error for percentages according to the formula in (19),
and then this standard error is inserted into the formula in (22).

> se<-sqrt(0.332*(1-0.332)/1000);—se¶
[1]—0.01489215

(22)   $CI = p \pm z \cdot SE$

The parameter z in this formula corresponds to the z-score mentioned
above in Section 1.3.4.2, which defines 5% of the area under a standard
normal distribution – 2.5% from the upper part and 2.5% from the lower
part. Thus, for a 95% confidence interval you enter:

> z<-qnorm(0.025,—lower.tail=F);—z¶
[1]—1.959964

And for a 99% confidence interval:

> z<-qnorm(0.005,—lower.tail=F);—z¶
[1]—2.575829

For a 95% confidence interval for the percentage of silences, you enter:

> z<-qnorm(0.025,—lower.tail=F)¶
> 0.332+z*se;—0.332-z*se¶
[1]—0.3611881
[1]—0.3028119

The simpler way requires the function prop.test, which tests whether
a percentage obtained in a sample is significantly different from an ex-
pected percentage. Again, the functionality of the significance test is not
relevant yet (however, cf. below Section 4.1.1.2), but this function also
returns the confidence interval for the observed percentage. R needs the
observed frequency (332), the sample size (1000), and the probability for
the confidence interval. R uses a formula different from ours but returns
nearly the same result.

> prop.test(332,—1000,—conf.level=0.95)$conf.int¶
[1]—0.3030166—0.3622912
attr(,"conf.level")
[1]—0.95

Recommendation(s) for further study


Dalgaard (2002: Ch. 7.1 and 4.1), Crawley (2005: 167ff.)

Warning/advice
Since confidence intervals are based on standard errors, the warning from
above applies here, too: if data are not normally distributed or the samples
too small, then you must often use other methods to estimate confidence
intervals (e.g., bootstrapping).

2. Bivariate statistics

We have so far dealt with statistics and graphs that describe one variable or
vector/factor. In this section, we now turn to methods to characterize two
variables and their relation. We will again begin with frequencies, then we
will discuss means, and finally talk about correlations. You will see that we
can use many functions from the previous section.

2.1. Frequencies and crosstabulation

We begin with the case of two nominal/categorical variables. Usually, one
wants to know which combinations of variable levels occur how often. The
simplest way to do this uses cross-tabulation. Let us return to the disfluency
data:

> UHM<-read.table(choose.files(default="03-1_uh(m).txt"),—
header=T,—sep="\t",—comment.char="",—quote="")¶
> attach(UHM)¶

Let’s assume you wanted to see whether men and women differ with re-
gard to the kind of disfluencies they produce. First two questions: are there
dependent and independent variables in this design and, if so, which?

THINK
BREAK

In this case, SEX is the independent variable and FILLER is the depen-
dent variable. Computing the frequencies of variable level combinations in
R is easy because you can use the same function that you use to compute
frequencies of an individual variable’s levels: table. You just give table a
second vector or factor as an argument and R lists the levels of the first
vector in the rows and the levels of the second in the columns:

> freqs<-table(FILLER,—SEX);—freqs¶
————————SEX
FILLER————female—male
——silence————171——161
——uh—————————161——233
——uhm————————170——104

In fact you can provide even more vectors to table, just try it out, and
in Section 5 we will return to this. Again, you can create tables of percen-
tages with prop.table, but with two-dimensional tables there are different
ways to compute percentages and you can specify one with margin=…. The
default is margin=NULL, which computes the percentages on the basis of the
total number of elements in the table. In other words, the sum of all percen-
tages in the table is 1. A different possibility is to compute row percentag-
es: set margin=1 and in the table you then get percentages that add up to 1
in every row. Finally, you can choose column percentages by setting
margin=2: the percentages in each column add up to 1. This is probably the
best way here since then the percentages that add up to 1 are those of the
dependent variable.

> percents<-prop.table(table(FILLER,—SEX),—margin=2)¶
> percents¶
—————————SEX
FILLER———————female——————male
——silence—0.3406375—0.3232932
——uh——————0.3207171—0.4678715
——uhm—————0.3386454—0.2088353

You can immediately see that men appear to prefer uh and disprefer
uhm while women appear to have no real preference for any disfluency.
However, this is of course not yet a significance test, which we will only
deal with in Section 4.1.2.2 below. The function addmargins outputs row
and column totals (or other user-defined margins):

> addmargins(freqs)—#—cf.—also—colSums—and—rowSums¶
—————————SEX
FILLER————female—male——Sum
——silence————171——161——332
——uh—————————161——233——394
——uhm————————170——104——274
——Sum————————502——498—1000

Recommendation(s) for further study


the functions ftable and xtabs to generate other kinds of more complex
tables (cf. also Chapter 5 below)

2.1.1. Bar plots and mosaic plots

Of course you can also represent such tables graphically. The simplest way
to do this is to provide a formula as the main argument to plot. Such for-
mulae consist of a dependent variable (here: FILLER), a tilde (“~”),
and an independent variable (here: GENRE), and the following line
produces Figure 31.

> plot(FILLER~GENRE)¶

Figure 31. Bar plot / mosaic plot for FILLER~GENRE

The width and the height of rows, columns, and the six individual boxes
represent the observed frequencies. For example, the column for dialogs is
a little wider than the columns for monologs because there are more dialogs
in the data; the row for uh is widest because uh is the most frequent disflu-
ency, etc. Other similar graphs can be generated with the following lines:

> plot(GENRE,—FILLER)¶
> plot(table(GENRE,—FILLER))¶
> mosaicplot(table(GENRE,—FILLER))¶

These graphs are called stacked bar plots or mosaic plots and are – apart
from association plots to be introduced below – among the most useful
ways to represent crosstabulated data. In the code file for this chapter you
will find R code for another kind of useful (although too colorful) graph.
(You may not immediately understand the code, but with the help files for
these functions you will understand the code; consider this an appetizer.)

2.1.2. Spineplots

Sometimes, the dependent variable is nominal/categorical and the indepen-
dent variable is interval/ratio-scaled. Let us assume that FILLER is the de-
pendent variable, which is influenced by the independent variable LENGTH.
(This does not make much sense here, we just assume this for expository
purposes and so that I don’t have to invent more datasets). You can use the
function spineplot with a corresponding formula:

> spineplot(FILLER~LENGTH)¶

Figure 32. Spineplot for FILLER~LENGTH

The y-axis represents the dependent variable and its three levels. The x-
axis represents the independent ratio-scaled variable, which is split up into
the value ranges that would also result from hist (which also means you
can change the ranges with breaks=…; cf. Section 3.1.1.5 above).

2.1.3. Line plots

Apart from these plots, you can also generate line plots that summarize
frequencies. If you generate a table of relative frequencies, then you can
create a primitive line plot by entering the following:

> fil.table<-prop.table(table(FILLER,—SEX),—2);—fil.table¶
—————————SEX
FILLER———————female——————male
——silence—0.3406375—0.3232932
——uh——————0.3207171—0.4678715
——uhm—————0.3386454—0.2088353
> plot(fil.table[,1],—ylim=c(0,—0.5),—xlab="Disfluency",—
ylab="Relative—frequency",—type="b")—#—column—1¶
> points(fil.table[,2],—type="b")—#—column—2¶

However, somewhat more advanced code in the companion file shows
you how you can generate the graph in Figure 33. (Again, you may not
understand the code immediately, but it will not take you long.)

Figure 33. Line plot with the percentages of the interaction of SEX and FILLER

Warning/advice
Sometimes, it is recommended to not represent such frequency data with a
line plot like the above because the lines ‘suggest’ that there are frequency
values between the levels of the categorical variable, which is of course not
the case.

2.2. Means

If the dependent variable is interval/ratio-scaled or ordinal and the inde-
pendent variable is nominal/categorical, then one is often not interested in
the frequencies of particular values of the dependent variable, but the cen-
tral tendencies of each level of the independent variable. For example, you
might want to determine whether men and women differ with regard to the
average disfluency lengths. One way to get these means is the following:

> mean(LENGTH[SEX=="male"])¶
[1]—901.5803
> mean(LENGTH[SEX=="female"])¶
[1]—928.3984

This approach is too primitive for three reasons:

− you must define the values of LENGTH that you want to include manual-
ly, which requires a lot of typing (especially when the independent vari-
able has more than two levels or, even worse, when you have more than
one independent variable);
− you must know the levels of the independent variables – otherwise you
couldn’t use them for subsetting in the first place;
− you only get the means of the variable levels you have explicitly asked
for. However, if, for example, you made a coding mistake in one row –
such as entering “malle” instead of “male” – this approach will not
show you that.

Thus, we use tapply and I will briefly talk about three arguments of
this function. The first is a vector or factor to which you want to apply a
function – here, this is LENGTH, to which we want to apply mean. The
second argument is a vector or factor that has as many elements as the first
one and that specifies the groups to which the function is to be applied. The
last argument is the relevant function, here mean. We get:

> tapply(LENGTH,—SEX,—mean)¶
——female—————male
928.3984—901.5803

Of course the result is the same as above, but you obtained it in a better
way. Note also that you can of course use functions other than mean: me-
dian, IQR, sd, var, …, even functions you wrote yourself. For example,
what do you get when you use length instead of mean?

THINK
BREAK

You get the numbers of lengths that were observed for each sex.
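
In other words, a sketch such as the following returns the sample sizes per group, which match the column totals of the cross-tabulation in Section 2.1 above:

> tapply(LENGTH, SEX, length) # number of observations per sex
female   male
   502    498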

2.2.1. Boxplots

In Section 3.1.3.7 above, we looked at boxplots, but restricted our attention to cases where we have one or more dependent variables (such as
town1 and town2). However, you can also use boxplots for cases where
you have one or more independent variables and a dependent variable.
Again, the easiest way is to use a formula (cf. also Section 3.2.2.2
below):

> boxplot(LENGTH~GENRE,—notch=T,—ylim=c(0,—1600))¶

Figure 34. Boxplot for LENGTH~GENRE



(If you only want to plot a boxplot and not provide any further argu-
ments, it is actually enough to just enter plot(LENGTH~GENRE)¶: R ‘knows’
you want a boxplot because LENGTH is a numerical vector and GENRE is a
factor.) Again, you can infer a lot from that plot: both medians are close to
900 ms and do most likely not differ significantly from each other (since
the notches overlap). Both genres appear to have about the same amount of
dispersion since the notches, the boxes, and the whiskers are nearly equally
large, and both genres have no outliers.

Recommendation(s) for further study


the function plotmeans (from the library(gplots)) to plot line plots with
means and confidence intervals

2.2.2. Interaction plots

So far we have looked at graphs representing one variable or one variable
depending on another variable. As we have seen in Section 1.3.2.3 above,
however, there are also cases where you would want to characterize the
distribution of one interval/ratio-scaled variable depending on two nomin-
al/categorical variables. You can obtain the means of the variable level
combinations of the independent variables with tapply. You must specify
the two independent variables in the form of a list, and the following two
examples show you how you get the same means in two different ways (so
that you see which variable goes into the rows and which into the col-
umns):

> tapply(LENGTH,—list(SEX,—FILLER),—mean)¶
————————silence———————uh——————uhm
female—942.3333—940.5652—902.8588
male———891.6894—904.9785—909.2788
> tapply(LENGTH,—list(FILLER,—SEX),—mean)¶
——————————female—————male
silence—942.3333—891.6894
uh——————940.5652—904.9785
uhm—————902.8588—909.2788

Such results are best shown in tabular form such that you don’t just
provide the above means of the interactions, but also the means of the indi-
vidual variables as they were represented in Figure 28 above. Consider
Table 17 and especially its caption. A plus sign between variables refers to
just adding main effects of variables (i.e., effects of variables in isolation as
when you inspect the two means for SEX in the bottom row and the three
means for FILLER in the rightmost column). A colon between variables
refers to only the interaction of the variables (i.e., effects of combinations
of variables as when you inspect the six means in the main body of the
table where SEX and FILLER are combined). Finally, an asterisk between
variables denotes both the main effects and the interaction (here, all 12
means). With two variables A and B, A*B is the same as A—+—B—+—A:B.

Table 17. Means for LENGTH~FILLER*SEX

                    SEX: FEMALE    SEX: MALE    Total
FILLER: UH               940.57       904.98   919.52
FILLER: UHM              902.86       909.28   905.3
FILLER: SILENCE          942.33       891.69   917.77
TOTAL                    928.4        901.58   915.04

Now to the results. These are often easier to understand when they are
represented graphically. You can create and configure an interaction plot
manually, but for a quick and dirty glance at the data, you can also use the
function interaction.plot. As you might expect, this function takes at
least three arguments:

− x.factor: a vector/factor whose values/levels are represented on the x-
axis;
− trace.factor: the second argument is a vector/factor whose val-
ues/levels are represented with different lines;
− response: the third argument is a vector whose means for all variable
level combinations will be represented on the y-axis by the lines.

That means, you can choose one of two formats, depending on which
independent variable is shown on the x-axis and which is shown with dif-
ferent lines. While the represented means will of course be identical, I ad-
vise you to always generate and inspect both graphs anyway because I
usually find that one of the two graphs is much easier to interpret. In Figure
35 you find both graphs for the above values and I for myself find the low-
er panel easier to interpret.

> interaction.plot(SEX,—FILLER,—LENGTH);—grid()¶
> interaction.plot(FILLER,—SEX,—LENGTH);—grid()¶

Figure 35. Interaction plot for LENGTH~FILLER:SEX

Obviously, uhm behaves differently from uh and silences: the average
lengths of women’s uh and silence are larger than those of men, but the
average length of women’s uhm is smaller than that of men. But now an
important question: why should you now not just report the means you
computed with tapply and the graphs in Figure 35 in your study?

THINK
BREAK

First, you should not just report the means like this because I told you
above in Section 3.1.3 that you should never ever report means without a
measure of dispersion. Thus, when you want to provide the means, you
must also add, say, standard deviations (cf. Section 3.1.3.5), standard errors
(cf. Section 3.1.3.8), confidence intervals (cf. Section 3.1.5.1):

> tapply(LENGTH,—list(SEX,—FILLER),—sd)¶
————————silence———————uh——————uhm
female—361.9081—397.4948—378.8790
male———370.6995—397.1380—382.3137

How do you get the standard errors and the confidence intervals?

THINK
BREAK

> se<-tapply(LENGTH,—list(SEX,—FILLER),—sd)/
sqrt(tapply(LENGTH,—list(SEX,—FILLER),—length));—se¶
————————silence———————uh——————uhm
female—27.67581—31.32698—29.05869
male———29.21522—26.01738—37.48895

> t<-qt(0.025,—df=999,—lower.tail=F);—t¶
[1]—1.962341
> tapply(LENGTH,—list(SEX,—FILLER),—mean)-(t*se)—#—lower¶
————————silence———————uh——————uhm
female—888.0240—879.0910—845.8357
male———834.3592—853.9236—835.7127
> tapply(LENGTH,—list(SEX,—FILLER),—mean)+(t*se)—#—upper¶
————————silence————————uh—————uhm
female—996.6427—1002.0394—959.882
male———949.0197——956.0335—982.845

And this output immediately shows again why measures of dispersion
are so important: the standard deviations are large, the means plus/minus
one standard error and the confidence intervals overlap clearly, which
shows that the differences, which seem so large in Figure 35, are most like-
ly insignificant. You can test this with boxplot, which also allows formu-
lae with more than one independent variable (combined with an asterisk for
the interaction):

> boxplot(LENGTH~SEX*FILLER,—notch=T)¶

Second, the graphs should not be used as they are (at least not uncriti-
cally) because R has chosen the range of the y-axis such that it is as small
as possible but still covers all necessary data points. However, this small
range on the y-axis has visually inflated the differences, and a more realis-
tic representation would have either included the value y = 0 (as in the first
pair of the following four lines) or chosen the range of the y-axis such that
the complete range of LENGTH is included (as in the second pair of the
following four lines):

> interaction.plot(SEX,—FILLER,—LENGTH,—ylim=c(0,—1000))¶
> interaction.plot(FILLER,—SEX,—LENGTH,—ylim=c(0,—1000))¶
> interaction.plot(SEX,—FILLER,—LENGTH,—ylim=range(LENGTH))¶
> interaction.plot(FILLER,—SEX,—LENGTH,—ylim=range(LENGTH))¶

2.3. Coefficients of correlation and linear regression

The last section in this chapter is devoted to cases where both the depen-
dent and the independent variable are ratio-scaled. For this scenario we turn
to a new data set. First, we clear out memory of all data structures we have
used so far:

> rm(list=ls(all=T))¶

We look at data to determine whether there is a correlation between the
reaction times in ms of second language learners in a lexical decision task
and the length of the stimulus words. We have

− a dependent ratio-scaled variable: the reaction time in ms
MS_LEARNER, whose correlation with the following independent varia-
ble we are interested in;
− an independent ratio-scaled variable: the length of the stimulus words
LENGTH (in letters).

Such correlations are typically quantified using a so-called coefficient
of correlation r. This correlation coefficient, and many others, are defined
to fall in the range between -1 and +1. Table 18 explains what the values
mean: the sign of a correlation coefficient reflects the direction of the cor-
relation, and the absolute size reflects the strength of the correlation. When
the correlation coefficient is 0, then there is no correlation between the two
variables in question, which is why the null hypothesis says r = 0 – the
two-tailed alternative hypothesis says r ≠ 0.

Table 18. Correlation coefficients and their interpretation

Correlation coefficient    Labeling the correlation    Kind of correlation
 0.7  < r ≤  1             very high                   positive correlation:
 0.5  < r ≤  0.7           high                        the more/higher …, the more/higher …
 0.2  < r ≤  0.5           intermediate                the less/lower …, the less/lower …
 0    < r ≤  0.2           low
 r ≈ 0                                                 no statistical correlation
 0    > r ≥ -0.2           low                         negative correlation:
-0.2  > r ≥ -0.5           intermediate                the more/higher …, the less/lower …
-0.5  > r ≥ -0.7           high                        the less/lower …, the more/higher …
-0.7  > r ≥ -1             very high

Let us load and plot the data, using by now familiar lines of code:

> ReactTime<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(ReactTime);—str(ReactTime)¶
'data.frame':———20—obs.—of——3—variables:
—$—CASE——————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—LENGTH————:—int——14—12—11—12—5—9—8—11—9—11—...
—$—MS_LEARNER:—int——233—213—221—206—123—176—195—207—172—...
> plot(MS_LEARNER~LENGTH,—xlim=c(0,—15),—ylim=c(0,—300),—
xlab="Word—length—in—letters",—ylab="Reaction—time—of—
learners—in—ms");—grid()¶

Cf. Figure 36 before you read on – what kind of correlation is that, a
positive or a negative one?

THINK
BREAK

This is a positive correlation, because we can describe the correlation
between reaction time and word length with a “the more …, the more …”
statement: the longer the word, the longer the reaction time because when
you move from the left (short words) to the right (long words), the reaction
times get higher. But we also want to quantify the correlation and compute
the so-called Pearson product-moment correlation r.

Figure 36. Scatterplot for MS_LEARNER~LENGTH

First, we do this manually: We begin by computing the covariance of
the two variables according to the formula in (23).

(23)   $Covariance_{x,y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x}) \cdot (y_i - \bar{y})}{n-1}$

As you can see, the covariance involves computing the differences of
each variable’s value from the variable’s mean. For example, when the i-th
value of both the vector x and the vector y are above the averages of x and
y, then this pair of i-th values will contribute a high positive value to the
covariance. In R, we can compute the covariance manually or with the
function cov, which requires the two relevant vectors:

> covariance<-sum((LENGTH-mean(LENGTH))*(MS_LEARNER-
mean(MS_LEARNER)))/(length(MS_LEARNER)-1)¶
> covariance<-cov(LENGTH,—MS_LEARNER);—covariance¶
[1]—79.28947

The sign of the covariance already indicates whether two variables are
positively or negatively correlated; here it is positive. However, we cannot
use the covariance to quantify the correlation between two vectors because
its size depends on the scale of the two vectors: if you multiply both vec-
tors with 10, the covariance becomes 100 times as large as before although
the correlation as such has of course not changed:

> cov(MS_LEARNER*10,—LENGTH*10)¶
[1]—7928.947

Therefore, we divide the covariance by the product of the standard devi-
ations of the two vectors and obtain r:

> covariance/(sd(LENGTH)*sd(MS_LEARNER))¶
[1]—0.9337171

This is a very high positive correlation, r is close to the theoretical max-
imum of 1. In R, we can do all this much more efficiently with the function
cor. Its first two arguments are the two vectors in question, and the third
argument specifies the desired kind of correlation:

> cor(MS_LEARNER,—LENGTH,—method="pearson")¶
[1]—0.9337171

The correlation can be investigated more closely, though. We can try to
predict values of the dependent variable on the basis of the independent
one. This method is called linear regression. In its simplest form, it in-
volves trying to draw a straight line in such a way that it represents the
scattercloud best. Here, best is defined as ‘minimizing the sums of the
squared vertical distances of the observed y-values (here: reaction times)
and the y-values predicted on the basis of the regression line.’ That is, the
regression line is drawn fairly directly through the scattercloud because
then these deviations are smallest. It is defined by a regression equation
with two parameters, an intercept a and a slope b. Without discussing the
relevant formulae here, I immediately explain how to get these values with
R. Using the formula notation you already know, you define and inspect a
so-called linear model using the function lm:

> model<-lm(MS_LEARNER~LENGTH);—model¶
Call:
lm(formula—=—MS_LEARNER—~—LENGTH)
Coefficients:
(Intercept)———————LENGTH
——————93.61————————10.30

That is, the intercept – the y-value of the regression line at x = 0 – is
93.61, and the slope of the regression line is 10.3, which means that for
every letter the estimated reaction time increases by 10.3 ms. For example,
our data do not contain a word with 16 letters, but since the correlation
between the variables is so strong, we can come up with a good prediction
for the reaction time this word might result in:

predicted reaction time = intercept + b · LENGTH
258.41 ≈ 93.61 + 10.3 · 16

> 93.61+10.3*16¶
[1]—258.41

(This prediction of the reaction time is of course overly simplistic as it
neglects the large number of other factors that influence reaction times but
within the current linear model this is how it would be computed.) Alterna-
tively, you can use the function predict, whose first argument is the linear
model and whose second argument is a list called newdata that contains a
set of those values for the independent variable(s) for which you want to
make a prediction. With the exception of differences resulting from me
only using two decimals, you get the same result:

> predict(model,—newdata=list(LENGTH=16))¶
[1]—258.4850

If you only use the linear model as an argument, you get all predicted
values in the order of the data points (as you would with fitted).

> round(predict(model),—2)¶
—————1——————2——————3——————4——————5——————6——————7——————8
237.88—217.27—206.96—217.27—145.14—186.35—176.05—206.96
—————9—————10—————11—————12—————13—————14—————15—————16
186.35—206.96—196.66—165.75—248.18—227.57—248.18—186.35
—————17————18—————19—————20
196.66—155.44—176.05—206.96

The first value of LENGTH is 14, so the first of the above values is the
reaction time we expect for a word with 14 letters, etc. Since you now have
the needed parameters, you can also draw the regression line. You do this
with the function abline, which either takes a linear model object as an
argument or the intercept and the slope:

> plot(MS_LEARNER~LENGTH,—xlim=c(0,—15),—ylim=c(0,—300),—
xlab="Word—length—in—letters",—ylab="Reaction—time—of—
learners—in—ms");—grid()¶
> abline(93.61,—10.3)—#—or—abline(model)¶

Figure 37. Scatterplot with regressions line for MS_LEARNER~LENGTH

It is immediately clear why the correlation coefficient summarizes the
data so well: the regression line is an excellent summary of the data points
since all points are fairly close to the regression line. (The code file for this
chapter will show you two ways of making this graph a bit more informa-
tive.) We can even easily check how far away every predicted value is from
its observed value. This difference – the vertical distance between an ob-
served y-value / reaction time and the y-value on the regression line for the
corresponding x-value – is called a residual, and the function residuals
requires just the linear model object as its argument:

> round(residuals(model),—2)¶
—————1——————2——————3——————4——————5——————6——————7——————8
—-4.88——-4.27——14.04—-11.27—-22.14—-10.35——18.95———0.04
——————9————10—————11—————12—————13—————14—————15—————16
-14.35——-6.96———8.34——11.25———7.82—-14.57———7.82———1.65
—————17————18—————19—————20
——-1.66—10.56———6.95———3.04

You can easily test manually that these are in fact the residuals:

> round(MS_LEARNER-(predict(model)+residuals(model)),—2)¶
—1——2——3——4——5——6——7——8——9—10—11—12—13—14—15—16—17—18—19—20
—0——0——0——0——0——0——0——0——0——0——0——0——0——0——0——0——0——0——0——0

Note two important points though: First, regression equations and lines
are most useful for the range of values covered by the observed values.
Here, the regression equation was computed on the basis of lengths be-
tween 5 and 15 letters, which means that it will probably be much less reli-
able for lengths of 50+ letters. Second, in this case the regression equation
also makes some rather non-sensical predictions because theoretical-
ly/mathematically it predicts reaction times of around 0 ms for word
lengths of -9. Such considerations will become important later on.
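
A quick check with the rounded coefficients from above illustrates the point:

> 93.61+10.3*-9 # the regression equation extrapolated to a (nonsensical) length of -9 letters
[1] 0.91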
The correlation coefficient r also allows you to specify how much of the
variance of one variable can be accounted for by the other variable. What
does that mean? In our example, the values of both variables –
MS_LEARNER and LENGTH – are not all identical: they vary around their
means (199.75 and 10.3), and this variation was called dispersion and
measured or quantified with the standard deviation or the variance. If you
square r and multiply the result by 100, then you obtain the amount of va-
riance of one variable that the other variable accounts for. In our example, r
= 0.933, which means that 87.18% of the variance of the reaction times can
be accounted for on the basis of the word lengths. This value is referred to
as coefficient of determination, r2.
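
In R, you can get this value by simply squaring the correlation coefficient (a sketch):

> cor(MS_LEARNER, LENGTH, method="pearson")^2 # the coefficient of determination r-squared
[1] 0.8718276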
The product-moment correlation r is probably the most frequently used
correlation. However, there are a few occasions on which it should not be
used. First, when the relevant variables are not interval/ratio-scaled but
ordinal or when they are not both normally distributed (cf. below Section
4.4), then it is better to use another correlation coefficient, for example
Kendall’s tau τ. This correlation coefficient is based only on the ranks of
the variable values and thus more suited for ordinal data. Second, when
there are marked outliers in the variables, then you should also use Ken-
dall’s τ, because as a measure that is based on ordinal information only it is,
just like the median, less sensitive to outliers. Cf. Figure 38.
In Figure 38 you see a scatterplot, which has one noteworthy outlier in
the top right corner. If you cannot justify excluding this data point, then it
can influence r very strongly, but not τ. Pearson’s r and Kendall’s τ for all
data points but the outlier are 0.11 and 0.1 respectively, and the regression
line with the small slope shows that there is clearly no correlation between
the two variables. However, if we include the outlier, then Pearson’s r sud-
denly becomes 0.75 (and the regression line’s slope is changed markedly)
while Kendall’s τ remains appropriately small: 0.14.

Figure 38. The effect of outliers on r

But how do you compute Kendall’s τ? The computation of Kendall’s τ
is rather complex (especially with larger samples and ties), which is why I
only explain how to compute it with R. The function is actually the same as
for Pearson’s r – cor – but the argument method=… is changed. For our
experimental data we again get a high correlation, which turns out to be a
little bit smaller than r. (Note that correlations are bidirectional – so the
order of the vectors does not matter – but linear regressions are not because
you have a dependent and an independent variable and it very much mat-
ters what goes before the tilde – namely what is predicted – and what goes
after it.)

> cor(LENGTH,—MS_LEARNER,—method="kendall")¶
[1]—0.8189904
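
Since correlations are bidirectional, reversing the order of the two vectors returns the same coefficient (a sketch):

> cor(MS_LEARNER, LENGTH, method="kendall") # same value as above: order does not matter
[1] 0.8189904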

The previous explanations were all based on the assumption that there is
in fact a linear correlation between the two variables. This need not be the
case, though, and a third scenario in which neither r nor τ are particularly
useful involves nonlinear relations between the variables. This can often be
seen by just looking at the data. Figure 39 represents a well-known exam-
ple from Anscombe (1973) (from <C:/_sflwr/_inputfiles/03-2-3_
anscombe.txt>), which has the intriguing characteristic that

− the means and variances of the x-variable;
− the means and variances of the y-variable;
− the correlations and the regression line of x and y;

are all identical although the distributions are obviously very different. In
the top left of Figure 39, there is a case where r and τ are unproblematic. In
the top right we have a situation where x and y are related in a curvilinear
fashion – using a linear correlation here does not make much sense.18 In the
two lower panels, you see distributions in which individual outliers have a
huge influence on r and the regression line. Since all the summary statistics
are identical, this example illustrates most beautifully how important, in
fact indispensable, a visual inspection of your data is, which is why in the
following chapters visual exploration nearly always precedes statistical
computation.
Now you should do the exercise(s) for Chapter 3 …

Recommendation(s) for further study


− the argument method="spearman" for the function cor to compute a
different kind of rank correlation
− the function s.hist (from the library(ade4)) to produce more refined
scatterplots with histograms
− Good and Hardin (2006: Ch. 8), Crawley (2007: Ch. 5, 27), and Braun
and Murdoch (2008: Section 3.2) for much advice to create good
graphs; cf. also <http://cran.r-project.org/src/contrib/Views/Graphics.
html>

Warning/advice
Do not let the multitude of graphical functions and settings of R and/or
your spreadsheet software tempt you to produce visual overkill. Just be-
cause you can use 6 different fonts, 10 colors, and three-dimensional
graphs does not mean you should. Also, make sure your graphs are under-
stood by using meaningful graph and axis labels, legends, etc.

18. I do not discuss nonlinear regressions here; cf. Crawley (2007: Ch. 18 and 20) for over-
views.

Figure 39. The sensitivity of linear correlations: the Anscombe data



Chapter 4
Analytical statistics

The most important questions of life are,
for the most part, really only questions of probability.
Pierre-Simon Laplace
(from <http://www-rohan.sdsu.edu/%7Emalouf/>)

In the previous chapter, I discussed a variety of descriptive statistics. In this
chapter, I will now explain how these measures and others are used in the
domain of hypothesis-testing. For example, in Section 3.1 I explained how
you compute a measure of central tendency (such as a mean) or a measure
of dispersion (such as a standard deviation) for a particular sample. In this
chapter, you will see how you test whether such a mean or such a standard
deviation deviates significantly from a known mean or standard deviation
or the mean or standard deviation of a second sample. I will assume that
you have downloaded the data files from the companion website. Before
we get started, let me remind you once again that in your own data your
nominal/categorical variables should ideally always be coded with mea-
ningful character strings so that R recognizes them as factors when reading
in the data from a file.

1. Distributions and frequencies

In this section, I will illustrate how to test whether distributions and fre-
quencies from one sample differ significantly from a known distribution
(cf. Section 4.1.1) or from another sample (cf. Section 4.1.2). In both sec-
tions, we begin with variables from the interval/ratio level of measurement
and then proceed to lower levels of measurement.

1.1. Distribution fitting

1.1.1. One dep. variable (ratio-scaled)

In this section, I will discuss how you compare whether the distribution of
one dependent interval-/ratio-scaled variable is significantly different from
a known distribution. I will restrict my attention to one of the most frequent
cases, the situation where you test whether a variable is normally distri-
buted (because as was mentioned above in Section 1.3.4.2, many statistical
tests require a normal distribution so you must be able to do this test).
We will deal with an example from the first language acquisition of
tense and aspect in Russian. Simplifying a bit here, one general tendency
that is often observed is a relatively robust correlation between past tense
and perfective aspect as well as non-past tenses and imperfective aspect.
Such a correlation can be quantified with Cramer’s V values (cf. Stoll and
Gries, forthc., and Section 4.2.1 below). Let us assume you studied how
this association – the Cramer’s V values – changes for one child over time.
Let us further assume you had 117 recordings for this child, computed a
Cramer’s V value for each one, and now you want to see whether these are
normally distributed. This scenario involves

− a dependent interval/ratio-scaled variable called TENSEASPECT, consist-
ing of the Cramer’s V values;
− no independent variable because you are not testing whether the distri-
bution of the variable TENSEASPECT is influenced by, or correlated
with, something else.

You can test for normality in several ways. The test we will use is the
Shapiro-Wilk test, which does not really have any assumptions other than
ratio-scaled data and involves the following procedure:

Procedure
Formulating the hypotheses
Inspecting a graph
Computing the test statistic W and the probability of error p

We begin with the hypotheses:

H0: The data points are normally distributed; W = 1.


H1: The data points are not normally distributed; W ≠ 1.

First, you load the data from <C:/_sflwr/_inputfiles/04-1-1-1_tense-


aspect.txt> and create a graph; the code for the left panel is shown below
but you can also generate the right panel using the kind of code discussed
in Section 3.1.1.5.

> RussianTensAps<-
read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(RussianTensAps)¶
> hist(TENSE_ASPECT,—xlim=c(0,—1),—xlab="Tense-
Aspect—correlation",—
ylab="Frequency",—main="")—#—left—panel¶

Figure 40. Histogram of the Cramer’s V values reflecting the strengths of the
tense-aspect correlations

At first glance, this looks very much like a normal distribution, but of
course you must do a real test. The Shapiro-Wilk test is rather cumbersome
to compute semi-manually, which is why its manual computation will not
be discussed here (unlike nearly all other tests). In R, however, the compu-
tation could not be easier. The relevant function is called shapiro.test
and it only requires one argument, the vector to be tested:

> shapiro.test(TENSE_ASPECT)¶
————————Shapiro-Wilk—normality—test
data:——TENSE_ASPECT
W—=—0.9942,—p-value—=—0.9132

What does this mean? This simple output teaches an important lesson:
Usually, you want to obtain a significant result, i.e., a p-value that is small-
er than 0.05 because this allows you to accept the alternative hypothesis.
Here, however, you may actually welcome an insignificant result because
normally-distributed variables are often easier to handle. The reason for
this is again the logic underlying the falsification paradigm. When p < 0.05,
you reject the null hypothesis H0 and accept the alternative hypothesis H1.
But here you ‘want’ H0 to be true because H0 states that the data are nor-

mally distributed. You obtained a p-value of 0.9132, which means you


cannot reject H0 and, thus, consider the data to be normally distributed.
You would therefore summarize this result in the results section of your
paper as follows: “According to a Shapiro-Wilk test, the distribution of this
child’s Cramer’s V values measuring the tense-aspect correlation does not
deviate significantly from normality: W = 0.9942; p = 0.9132.” (In paren-
theses or after a colon you usually mention all statistics that helped you
decide whether or not to accept the alternative hypothesis H1.)

Recommendation(s) for further study


− the function shapiroTest (from the library(fBasics)) as an alterna-
tive to the above function
− the function ks.test (in ks.test(a.vector,—"pnorm",—mean=
mean(a.vector),—sd=sd(a.vector))) or the function ksnormTest
(from the library(fBasics)) as an alternative to the above function.
This test, the Kolmogorov-Smirnov test for distribution-fitting, is less
conservative and more flexible than the Shapiro-Wilk test, since it can
not only test for normality but can also be applied to vectors with more
than 5000 data points. We will discuss a variant of this test below
− as alternatives to the above functions, the functions jarqueberaTest
and dagoTest (both from the library(fBasics))
− the function mshapiro.test (from the library(mvnormtest)) to test
for multivariate normality
− Crawley (2005: 100f.), Crawley (2007: 316f.)

1.1.2. One dep. variable (nominal/categorical)

In this section, we are going to return to an example from Section 1.3, the
constructional alternation of particle placement in English, which is again
represented in (24).

(24) a. He picked up the book. (verb - particle - direct object)


b. He picked the book up. (verb - direct object - particle)

As you already know, usually both constructions are perfectly accepta-


ble and native speakers can often not explain their preference for one of the
two constructions. One may therefore expect that both constructions are
equally frequent, and this is what you are going to test. This scenario in-
volves

− a dependent nominal/categorical variable CONSTRUCTION: VERB-


PARTICLE-OBJECT vs. CONSTRUCTION: VERB-OBJECT-PARTICLE;
− no independent variable, because you do not investigate whether the
distribution of CONSTRUCTION is dependent on anything else.

Such questions are generally investigated with tests from the family of
chi-square tests, which is one of the most important and widespread tests.
Since there is no independent variable, you test the degree of fit between
your observed and an expected distribution, which should remind you of
Section 3.1.5.2. This test is referred to as the chi-square goodness-of-fit test
and involves the following steps:

Procedure
Formulating the hypotheses
Tabulating the observed frequencies; inspecting a graph
Computing the frequencies you would expect given H0
Testing the assumption(s) of the test:
all observations are independent of each other
80% of the expected frequencies are larger than or equal to 5 (cf. n. 19)
all expected frequencies are larger than 1
Computing the contributions to chi-square for all observed frequencies
Summing the contributions to chi-square to get the test statistic χ2
Determining the degrees of freedom df and the probability of error p

The first step is very easy here. As you know, the null hypothesis typi-
cally postulates that the data are distributed randomly/evenly, and that
means that both constructions occur equally often, i.e., 50% of the time
(just as tossing a fair coin many times will result in an approximately equal
distribution). Thus:

H0: The frequencies of the two variable levels of CONSTRUCTION are


identical – if you find a difference in your sample, this difference is

19. This threshold value of 5 is the one most commonly mentioned. There are a few studies
that show that the chi-square test is fairly robust even if this assumption is violated – es-
pecially when, as is here the case, the null hypothesis postulates that the expected fre-
quencies are equally high (cf. Zar 1999: 470). However, to keep things simple, I stick to
the most common conservative threshold value of 5 and refer you to the literature
quoted in Zar. If your data violate this assumption, then you must compute a binomial
test (if, as here, you have two groups) or a multinomial test (for three or more groups);
cf. the recommendations for further study.

just random variation.


H1: The frequencies of the two variable levels of CONSTRUCTION are
not identical.

From this, the statistical forms are obvious.

H0: nV Part DO = nV DO Part


H1: nV Part DO ≠ nV DO Part

Note that this is a two-tailed hypothesis; no direction of the difference is


provided. Next, you would collect some data and count the occurrences of
both constructions, but we will abbreviate this step and use frequencies
reported in Peters (2001). She conducted an experiment in which subjects
described pictures and obtained the construction frequencies represented in
Table 19.

Table 19. Observed construction frequencies of Peters (2001)


Verb - Particle - Direct Object Verb - Direct Object - Particle
247 150

Obviously, there is a strong preference for the construction in which the


particle follows the verb directly. At first glance, it seems very unlikely that
the null hypothesis could be correct, given these data.
First, you should have a look at a graphical representation of the data. A
first but again not optimal possibility would be to generate, say, a pie chart.
Thus, you first enter the data and then create a pie chart or a bar plot as
follows:

> VPCs<-c(247,—150)—#—VPCs="verb-particle—constructions"¶
> pie(VPCs,—labels=c("Verb-Particle-Direct—Object",—"Verb-
Direct—Object-Particle"))¶
> barplot(VPCs,—names.arg=c("Verb-Particle-
Direct—Object",—"Verb-Direct—Object-Particle"))¶

The question now of course is whether this preference is statistically


significant or whether it could just as well have arisen by chance. Accord-
ing to the above procedure, you must now compute the frequencies that
follow from H0. In this case, this is easy: since there are altogether 247+150
= 397 constructions, which should be made up of two equally large groups,
you divide 397 by 2:

> VPCs.exp<-rep(sum(VPCs)/length(VPCs),—length(VPCs));—VPCs.exp¶
[1]—198.5—198.5

Table 20. Expected construction frequencies for the data of Peters (2001)
Verb - Particle - Direct Object Verb - Direct Object - Particle
198.5 198.5

You must now check whether you can actually do a chi-square test here,
but the observed frequencies are obviously larger than 5 and we assume
that Peters’s data points are in fact independent (because we assume for
now that, for instance, each construction has been provided by a different
speaker). We can therefore proceed with the chi-square test, the computa-
tion of which is fairly straightforward and summarized in (25).

(25) Pearson chi-square = χ2 = ∑i=1…n (observed − expected)2 / expected

That is to say, for every value of your frequency distribution you com-
pute a so-called contribution to chi-square by (i) computing the difference
between the observed and the expected frequency, (ii) squaring this differ-
ence, and (iii) dividing that by the expected frequency again. The sum of
these contributions to chi-square is the test statistic chi-square. Here, chi-
square is approximately 23.7.

(26) Pearson χ2 = (247 − 198.5)2/198.5 + (150 − 198.5)2/198.5 ≈ 23.7

> sum(((VPCs-VPCs.exp)^2)/VPCs.exp)¶
[1]—23.70025

Obviously, this value increases as the differences between observed and


expected frequencies increase (because then the numerators become larg-
er). In addition, you can see from (26) that chi-square becomes 0 when all
observed frequencies correspond to all expected frequencies because then
the numerators become 0. We can therefore simplify our statistical hypo-
theses to the following:

H0: χ2 = 0.
H1: χ2 > 0.

But the chi-square value alone does not show you whether the differ-
ences are large enough to be statistically significant. So, what do you do
with this value? Before computers became more widespread, a chi-square
value was used to look up in a chi-square table whether the result is signifi-
cant or not. Such tables typically have the three standard significance levels
in the columns and different numbers of degrees of freedom (df) in the
rows. The number of degrees of freedom here is the number of categories
minus 1, i.e., df = 2-1 = 1, because when we have two categories, then one
category frequency can vary freely but the other is fixed (so that we can get
the observed number of elements, here 397). Table 21 is one such chi-
square table for the three significance levels and 1 to 3 degrees of freedom.

Table 21. Critical χ2-values for ptwo-tailed = 0.05, 0.01, and 0.001 for 1 ≤ df ≤ 3
p = 0.05 p = 0.01 p = 0.001
df = 1 3.841 6.635 10.828
df = 2 5.991 9.21 13.816
df = 3 7.815 11.345 16.266

You can actually generate those values yourself with the function
qchisq. As you saw above in Section 1.3.4.2, the function requires three
arguments:

− p: the p-value(s) for which you need the critical chi-square values (for
some df);
− df: the df-value(s) for the p-value for which you need the critical chi-
square value;
− lower.tail=F : the argument to instruct R to only use the area under
the chi-square distribution curve that is to the right of / larger than the
observed chi-square value.

That is to say:

> qchisq(c(0.05,—0.01,—0.001),—1,—lower.tail=F)¶
[1]——3.841459——6.634897—10.827566

Or, for more advanced users:

> p.values<-matrix(rep(c(0.05,—0.01,—0.001),—3),—byrow=T,—
ncol=3)¶
> df.values<-matrix(rep(1:3,—3),—byrow=F,—ncol=3)¶
> qchisq(p.values,—df.values,—lower.tail=F)¶

—————————[,1]——————[,2]—————[,3]
[1,]—3.841459——6.634897—10.82757
[2,]—5.991465——9.210340—13.81551
[3,]—7.814728—11.344867—16.26624

Once you have such a table, you can test your observed chi-square value
for significance by determining whether your chi-square value is larger
than the chi-square value(s) tabulated at the observed number of degrees of
freedom. You begin with the smallest tabulated chi-square value and com-
pare your observed chi-square value with it and continue to do so as long as
your observed value is larger than the tabulated ones. Here, you first check
whether the observed chi-square is significant at the level of 5%, which is
obviously the case: 23.7 > 3.841. Thus, you can check whether it is also
significant at the level of 1%, which again is the case: 23.7 > 6.635. Thus,
you can finally check whether the observed chi-square value is even
highly significant, and again this is so: 23.7 > 10.827. You can therefore
reject the null hypothesis and the usual way this is reported in your results
section is this: “According to a chi-square goodness-of-fit test, the distribu-
tion of the two verb-particle constructions deviates highly significantly
from the expected distribution (χ2 = 23.7; df = 1; ptwo-tailed < 0.001): the con-
struction where the particle follows the verb directly was observed 247
times although it was only expected 199 times, and the construction where
the particle follows the direct object was observed only 150 times although
it was expected 199 times.”
With larger and more complex amounts of data, this semi-manual way
of computation becomes more cumbersome (and error-prone), which is
why we will simplify all this a bit. First, you can of course compute the p-
value directly from the chi-square value using the mirror function of
qchisq, viz. pchisq, which requires the above three arguments:

> pchisq(23.7,—1,—lower.tail=F)¶
[1]—1.125825e-06

As you can see, the level of significance we obtained from our stepwise
comparison using Table 21 is confirmed: p is indeed much smaller than
0.001, namely 0.00000125825. However, there is another even easier way:
why not just do the whole test with one function? The function is called
chisq.test, and in the present case it requires maximally three arguments:

− x: a vector with the observed frequencies;


− p: a vector with the expected percentages (not the frequencies!);

− correct=T or correct=F: when the sample size n is small (15 ≤ n ≤


60), it is sometimes recommended to apply a so-called continuity cor-
rection (after Yates); correct=T is the default setting.20

In this case, this is easy: you already have a vector with the observed
frequencies, the sample size n is much larger than 60, and the expected
probabilities result from H0. Since H0 says the constructions are equally
frequent and since there are just two constructions, the vector of the ex-
pected probabilities contains two times 1/2 = 0.5. Thus:

> chisq.test(VPCs,—p=c(0.5,—0.5))¶
————————Chi-squared—test—for—given—probabilities
data:——VPCs
X-squared—=—23.7003,—df—=—1,—p-value—=—1.126e-06

You get the same result as from the manual computation but this time
you immediately also get the exact p-value. What you do not also get are
the expected frequencies, but these can be obtained very easily, too. The
function chisq.test does more than it returns. It returns a data structure (a
so-called list) so you can assign this list to a named data structure and then
inspect the list for its contents:

> test<-chisq.test(VPCs,—p=c(0.5,—0.5))¶
> str(test)¶
List—of—8
—$—statistic:—Named—num—23.7
——..-—attr(*,—"names")=—chr—"X-squared"
—$—parameter:—Named—num—1
——..-—attr(*,—"names")=—chr—"df"
—$—p.value——:—num—1.13e-06
—$—method———:—chr—"Chi-squared—test—for—given—probabilities"
—$—data.name:—chr—"VPCs"
—$—observed—:—num—[1:2]—247—150
—$—expected—:—num—[1:2]—199—199
—$—residuals:—num—[1:2]——3.44—-3.44
—-—attr(*,—"class")=—chr—"htest"

Thus, if you require the expected frequencies, you just ask for them as
follows, and of course you get the result you already know.

> test$expected¶
[1]—198.5—198.5

20. For further options, cf. ?chisq.test¶ or formals(chisq.test)¶.



Let me finally mention that the above method computes a p-value for a
two-tailed test. There are many tests in R where you can define whether
you want a one-tailed or a two-tailed test. However, this does not work
with the chi-square test. If you require the critical chi-square test value for
pone-tailed = 0.05 for df = 1, then you must compute the critical chi-square test
value for ptwo-tailed = 0.1 for df = 1 (with qchisq(0.1,—1,—lower.
tail=F)¶), since your prior knowledge is rewarded such that a less ex-
treme result in the predicted direction will be sufficient (cf. Section 1.3.4).
Also, this means that when you need the pone-tailed-value for a chi-square
value, just take half of the ptwo-tailed-value of the same chi-square value
(with, say, pchisq(23.7,—1,—lower.tail=F)/2¶). But again: only with
df = 1.

Warning/advice
Above I warned you to never change your hypotheses after you have ob-
tained your results and then sell your study as successful support of the
‘new’ alternative hypothesis. The same logic does not allow you to change
your hypothesis from a two-tailed one to a one-tailed one because your ptwo-
tailed = 0.08 (i.e., non-significant) so that the corresponding pone-tailed = 0.04
(i.e., significant). Your choice of a one-tailed hypothesis must be motivated
conceptually.

Recommendation(s) for further study


− the functions binom.test or dbinom to compute binomial tests (a minimal sketch follows this list)
− the function prop.test (cf. Section 3.1.5.2) to test relative frequencies /
percentages for significant deviations from expected frequencies / per-
centages
− the function dmultinom to help compute multinomial tests
− Dalgaard (2002: Ch. 7), Baayen (2008: Section 4.1.1)
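
For instance, a minimal sketch of how binom.test could be applied to the frequencies from above (247 out of 397 observations, tested against H0: p = 0.5); this sketch is only meant as an illustration:

> binom.test(247,—397,—p=0.5)—#—exact—counterpart—of—the—chi-square—test—above¶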

1.2. Tests for differences/independence

In Section 4.1.1, we looked at goodness-of-fit tests for distributions / fre-


quencies, but we now turn to tests for differences/independence.

1.2.1. One dep. variable (ordinal/interval/ratio scaled) and one indep.


variable (nominal) (indep. samples)

Let us look at an example in which two independent samples are compared


with regard to their overall distributions. You will test whether men and
women differ with regard to the frequencies of hedges they use in discourse
(i.e., expressions such as kind of or sort of). Again, note that we are here
only concerned with the overall distributions – not just means or just va-
riances. We could of course do such an investigation, too, but it is of course
theoretically possible that the means are very similar while the variances
are not and a test for different means might not uncover the overall distri-
butional difference.
Let us assume you have recorded 60 two-minute conversations between
a confederate of an experimenter, each with one of 30 men and 30 women,
and then counted the numbers of hedges that the male and female subjects
produced. You now want to test whether the distributions of hedge fre-
quencies differs between men and women. This question involves

− an independent nominal/categorical variable, SEX: MALE and SEX:


FEMALE;
− a dependent interval/ratio-scaled variable: the number of hedges produced,
HEDGES.

The question of whether the two sexes differ in terms of the distribu-
tions of hedge frequencies is investigated with the two-sample Kolmogo-
rov-Smirnov test:

Procedure
Formulating the hypotheses
Tabulating the observed frequencies; inspecting a graph
Testing the assumption(s) of the test: the data are continuous
Computing the cumulative frequency distributions for both samples
Computing the maximal absolute difference D of both distributions
Determining the probability of error p

First the hypotheses: the text form is straightforward and the statistical
version is based on a test statistic called D.

H0: The distribution of the dependent variable HEDGES does not differ
depending on the levels of the independent variable SEX; D = 0.

H1: The distribution of the dependent variable HEDGES differs depend-


ing on the levels of the independent variable SEX; D > 0.

Before we do the actual test, let us again inspect the data graphically.
You first load the data from <C:/_sflwr/_inputfiles/04-1-2-1_hedges.txt>,
make the variable names available, and check the data structure.

> Hedges<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Hedges);—str(Hedges)¶
'data.frame':———60—obs.—of——3—variables:
—$—CASE——:—int——1—2—3—4—5—6—7—8—9—10—...
—$—HEDGES:—int——17—17—17—17—16—13—14—16—12—11—...
—$—SEX———:—Factor—w/—2—levels—"F","M":—1—1—1—1—1—1—1—1—1—...

Since you are interested in the general distribution, you create a strip-
chart. In this kind of plot, the frequencies of hedges are plotted separately
for each sex, but to avoid identical frequencies being plotted directly onto
each other (so that they could no longer be distinguished), you also use
the argument method="jitter" to add a tiny value to each data point, which
in turn minimizes the proportion of overplotted data points. Then, you do
not let R decide about the range of the x-axis but include the meaningful
point at x = 0 yourself. Finally, with the function rug you add little bars to
the x-axis (side=1) which also get jittered. The result is shown in Figure
41.

> stripchart(HEDGES~SEX,—method="jitter",—xlim=c(0,—25),—
xlab="Number—of—hedges",—ylab="Sex")¶
> rug(jitter(HEDGES),—side=1)¶

It is immediately obvious that the data are distributed quite differently:


the values for women appear to be a little higher on average and more ho-
mogeneous than those of the men. The data for the men also appear to fall
into two groups, a suspicion that also receives some prima facie support
from the following two histograms in Figure 42. (Note that all axis limits
are again defined identically to make the graphs easier to compare.)

> par(mfrow=c(1,—2))—#—plot—into—two—adjacent—graphs¶
> hist(HEDGES[SEX=="M"],—xlim=c(0,—25),—ylim=c(0,—10),—ylab=
"Frequency",—main="")¶
> hist(HEDGES[SEX=="F"],—xlim=c(0,—25),—ylim=c(0,—10),—ylab=
"Frequency",—main="")¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

Figure 41. Stripchart for HEDGES~SEX

Figure 42. Histograms of the number of hedges by men and women

The assumption of continuous data points is not exactly met because


frequencies are discrete – there are no frequencies 3.3, 3.4, etc. – but
HEDGES is ratio-scaled and we are therefore rather safe (and could in fact
jitter the values to avoid ties). To test these distributional differences with
the Kolmogorov-Smirnov test, you first rank-order the data. To that end,
you sort the values of SEX in the order in which you need to sort HEDGES,
and then you do the same to HEDGES itself:

> SEX<-SEX[order(HEDGES)]¶
> HEDGES<-HEDGES[order(HEDGES)]¶

The next step is a little more complex. You must now compute the max-
imum of all differences of the two cumulative distributions of the hedges.
You can do this in three steps: First, you generate a frequency table with
the numbers of hedges in the rows and the sexes in the columns. This table
in turn serves as input to prop.table, which generates a table of column
percentages (hence margin=2; cf. Section 3.2.1):

> dists<-prop.table(table(HEDGES,—SEX),—margin=2);—dists¶
——————SEX
HEDGES——————————F——————————M
————3——0.00000000—0.03333333
————4——0.00000000—0.10000000
————5——0.00000000—0.10000000
————6——0.00000000—0.13333333
————8——0.00000000—0.06666667
————9——0.03333333—0.06666667
————10—0.03333333—0.00000000
————11—0.03333333—0.06666667
————12—0.10000000—0.03333333
————13—0.06666667—0.13333333
————14—0.16666667—0.06666667
————15—0.06666667—0.13333333
————16—0.20000000—0.03333333
————17—0.23333333—0.00000000
————18—0.00000000—0.03333333
————19—0.06666667—0.00000000

That means that, say, 10% of all numbers of hedges of men are 4, but
these are of course not cumulative percentages yet. The second step is
therefore to convert these percentages into cumulative percentages. You
can use cumsum to generate the cumulative percentages for both columns
and can even compute the differences in the same line:

> differences<-cumsum(dists[,1])-cumsum(dists[,2])¶

That is, you subtract from every cumulative percentage of the first col-
umn (the values of the women) the corresponding value of the second col-
umn (the values of the men). The third and final step is then to determine
the maximal absolute difference, which is the test statistic D:

> max(abs(differences))¶
[1]—0.4666667

You can then look up this value in a table for Kolmogorov-Smirnov


tests; for a significant result, the computed value must be larger than the
tabulated one. For cases in which both samples are equally large, Table 22
shows the critical D-values for two-tailed Kolmogorov-Smirnov tests.

Table 22. Critical D-values for two-sample Kolmogorov-Smirnov tests (for equal
          sample sizes; cf. n. 21)
                        p = 0.05        p = 0.01
n1 = n2 = 29            10/29           12/29
n1 = n2 = 30            10/30           12/30
n1 = n2 = 31            10/31           12/31

The observed value of D = 0.4667 is not only significant (D > 10/30), but
even very significant (D > 12/30). You can therefore reject H0 and summar-
ize the results: “According to a two-sample Kolmogorov-Smirnov test,
there is a significant difference between the distributions of hedge frequen-
cies of men and women: on the whole, women seem to use more hedges
and behave more homogeneously than the men, who use fewer hedges and
whose data appear to fall into two groups (D = 0.4667, ptwo-tailed < 0.01).”
The logic underlying this test is not always immediately clear. Since it
is a very versatile test with hardly any assumptions, it is worth briefly
exploring what this test is sensitive to. To that end, we again look at a graphi-
cal representation. The following lines plot the two cumulative distribution
functions of men (in dark grey) and women (in black) as well as a vertical
line at position x = 8, where the largest difference (D = 0.4667) was found.
This graph in Figure 43 below shows what the Kolmogorov-Smirnov test
reacts to: different cumulative distributions.

> plot(cumsum(dists[,1]),—type="b",—col="black",—
xlab="Numbers—of—Hedges",—ylab="Cumulative—frequency—
in—%",—xlim=c(0,—16));—grid()¶
> lines(cumsum(dists[,2]),—type="b",—col="darkgrey")¶
> text(14,—0.1,—labels="Women",—col="black")¶
> text(2.5,—0.9,—labels="Men",—col="darkgrey")¶
> abline(v=8,—lty=2)¶

21. For sample sizes n ≥ 40, the D-values for ptwo-tailed = 0.05 are approximately 1.92/√n.

Figure 43. Cumulative distribution functions of the numbers of hedges of men and
women

For example, the facts that the values of the women are higher and more
homogeneous is indicated especially in the left part of the graph where the
low hedge frequencies are located and where the values of the men already
rise but those of the women do not. More than 40% of the values of the
men are located in a range where no hedge frequencies for women were
obtained at all. As a result, the largest difference at position x = 8 is in the
middle where the curve for the men has already risen considerably while
the curve for the women has only just begun to rise. This graph also ex-
plains why H0 postulates D = 0. If the curves are completely identical, there
is no difference between them and D becomes 0.22
The above explanation simplified things a bit. First, you do not always
have two-tailed tests and identical sample sizes. Second, identical values –
so-called ties – can complicate the computation of the test. Fortunately, you
do not really have to worry about that because the R function ks.test does

22. An alternative way to produce a similar graph involves the function ecdf (for empirical
cumulative distribution function):

> plot(ecdf(HEDGES[SEX=="M"]),—do.points=F,—verticals=T,—
col.h="black",—col.v="black",—main="Hedges:—men—vs.—
women")¶
> lines(ecdf(HEDGES[SEX=="F"]),—do.points=F,—verticals=T,—
col.h="darkgrey",—col.v="darkgrey")¶

everything for you in just one line. You just need the following argu-
ments:23

− x and y: the two vectors whose distributions you want to compare;


− alternative="two-sided" for two-tailed tests (the default) or alter-
native="greater" or alternative="less" for one-sided tests de-
pending on which alternative hypothesis you want to test: the argument
alternative="…" refers to the first-named vector so that alterna-
tive="greater" means that the cumulative distribution function of the
first vector is above that of the second.

When you test a two-tailed H1 as we do here, then the line to enter into
R reduces to the following, and you get the same D-value and the p-value.
(I omitted the warning about ties here but, again, you can jitter the vectors
to get rid of it.)

> ks.test(HEDGES[SEX=="M"],—HEDGES[SEX=="F"])¶
————————Two-sample—Kolmogorov-Smirnov—test
data:——HEDGES[SEX—==—"M"]—and—HEDGES[SEX—==—"F"]
D—=—0.4667,—p-value—=—0.002908
alternative—hypothesis:—two-sided

Recommendation(s) for further study


− apart from the function mentioned in note 22 (plot(ecdf(…)), you can
create such graphs also with plot.stepfun or, even easier, with plot
and the argument type="s"; cf. the file with the R code for this chapter
− Crawley (2005: 100f.), Crawley (2007: 316f.), Baayen (2008: Section
4.2.1)

1.2.2. One dep. variable (nominal/categorical) and one indep. variable


(nominal/categorical) (indep. samples)

In Section 4.1.1.2 above, we discussed how you test whether the distribu-
tion of a dependent nominal/categorical variable is significantly different
from another known distribution. A probably more frequent situation is that
you test whether the distribution of one nominal/categorical variable is
dependent on another nominal/categorical variable.
Above, we looked at the frequencies of the two verb-particle construc-

23. Unfortunately, the function ks.test does not take a formula as input.

tions. We found that their distribution was not compatible with H0. Howev-
er, we also saw earlier that there are many variables that are correlated with
the constructional choice. One of these is whether the referent of the direct
object is given information, i.e., known from the previous discourse, or not.
More specifically, previous studies found that objects referring to given
referents prefer the position before the particle whereas objects referring to
new referents prefer the position after the particle. In what follows, we will
look at this hypothesis (as a two-tailed hypothesis, though). The question
involves

− a dependent nominal/categorical variable, namely CONSTRUCTION:


VERB-PARTICLE-OBJECT vs. CONSTRUCTION: VERB-OBJECT-PARTICLE;
− an independent nominal/categorical variable, namely the gi-
venness of the referent of the direct object: GIVENNESS: GIVEN vs.
GIVENNESS: NEW;
− independent samples because we will assume that, in the data below, the
fact that a particular object is given is unrelated to whether another ob-
ject is also given or not (this is often far from obvious, but I cannot dis-
cuss this issue here in more detail).

As before, such questions are investigated with chi-square tests: you test
whether the levels of the independent variable result in different frequen-
cies of the levels of the dependent variable. The overall procedure for a chi-
square test for independence is very similar to that of a chi-square test for
goodness of fit, but you will see below that the computation of the expected
frequencies is (only superficially) a bit different from above.

Procedure
Formulating the hypotheses
Tabulating the observed frequencies; inspecting a graph
Computing the frequencies you would expect given H0
Testing the assumption(s) of the test:
all observations are independent of each other
80% of the expected frequencies are larger than or equal to 5 (cf. n.
19)
all expected frequencies are larger than 1
Computing the contributions to chi-square for all observed frequencies
Summing the contributions to chi-square to get the test statistic χ2
Determining the degrees of freedom df and the probability of error p

The text forms of the hypotheses are simple:

H0: The frequencies of the levels of the dependent variable


CONSTRUCTION do not vary as a function of the levels of the inde-
pendent variable GIVENNESS.
H1: The frequencies of the levels of the dependent variable
CONSTRUCTION vary as a function of the levels of the independent
variable GIVENNESS.

Formulating the statistical hypothesis is a bit more complex here and


can be seen as related to the tabulation of the observed frequencies and the
computation of the expected frequencies, which is why I will discuss these
three things together and only explain later how you can stick to the order
of the procedure above.
In order to discuss this version of the chi-square test, we return to the
data from Peters (2001). As a matter of fact, the above discussion did not
utilize all of Peters’s data because I omitted an independent variable, name-
ly GIVENNESS. Peters (2001) did not just study the frequency of the two
constructions – she studied what we are going to look at here, namely
whether GIVENNESS is correlated with CONSTRUCTION. In the picture-
description experiment described above, she manipulated the variable
GIVENNESS and obtained the already familiar 397 verb-particle construc-
tions, which patterned as represented in Table 23.
Before we discuss how to do the significance test, let us first explore the
data graphically. You load the data from <C:/_sflwr/_inputfiles/04-1-2-
2_vpcs.txt>, create a table of the two factors, and get a first visual impres-
sion of the distribution of the data:

Table 23. Observed construction frequencies of Peters (2001)

                              GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART       85                 65               150
CONSTRUCTION: V PART DO       100                147              247
Column totals                 185                212              397

> VPCs<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(VPCs);—str(VPCs)¶
'data.frame':———397—obs.—of——3—variables:

—$—CASE————————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—GIVENNESS———:—Factor—w/—2—levels—"given","new":—1—1—1—...
—$—CONSTRUCTION:—Factor—w/—2—levels—"V_DO_Part","V_Part_DO":—
1—1—...
> Peters.2001<-table(CONSTRUCTION,—GIVENNESS)¶
> plot(CONSTRUCTION~GIVENNESS)¶

Figure 44. Mosaic plot for CONSTRUCTION~GIVENNESS

Obviously, the differently-colored areas are differently big between


rows/columns. To test these differences for significance, we need the fre-
quencies expected from H0. But how do we formulate H0 and compute
these frequencies? Since this is a central question, we will discuss this in
detail. Let us assume Peters had obtained the totals in Table 24.

Table 24. Fictitious observed construction frequencies of Peters (2001)

                              GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART                                           100
CONSTRUCTION: V PART DO                                           100
Column totals                 100                100              200

What would the distribution following from H0 look like? Above in Sec-
tion 4.1.1.2, we said that H0 typically postulates equal frequencies. Thus,
you might assume – correctly – that the expected frequencies are those

represented in Table 25. All marginal totals are 100 and every variable has
two equally frequent levels so we have 50 in each cell.

Table 25. Fictitious expected construction frequencies of Peters (2001)

                              GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART       50                 50               100
CONSTRUCTION: V PART DO       50                 50               100
Column totals                 100                100              200

The statistical hypotheses would then be:

H0: nV DO Part & Ref DO = given = nV DO Part & Ref DO ≠ given = nV Part DO & Ref DO = given
= nV Part DO & Ref DO ≠ given
H1: as H0, but there is at least one “≠” instead of an “=”.

However, life is usually not that simple, for example when (a) as in Pe-
ters (2001) not all subjects answer all questions or (b) naturally-observed
data are counted that are not as nicely balanced. Thus, let us now return to
Peters’s real data. In her case, it does not make sense to simply assume
equal frequencies. Put differently, H1 cannot be the above because we
know from the row totals of Table 23 that the different levels of
GIVENNESS are not equally frequent. If GIVENNESS had no influence on
CONSTRUCTION, then you would expect that the frequencies of the two
constructions for each level of GIVENNESS would exactly reflect the fre-
quencies of the two constructions in the sample as a whole. That means (i) all
marginal totals must remain constant (since they reflect the numbers of the
investigated elements), and (ii) the proportions of the marginal totals de-
termine the cell frequencies in each row and column. From this, a rather
complex set of hypotheses follows (which we will simplify presently):

H0: nV DO Part & Ref DO = given : nV DO Part & Ref DO ≠ given ∝


nV Part DO & Ref DO = given : nV Part DO & Ref DO ≠ given ∝
nRef DO = given : nRef DO ≠ given and
nV DO Part & Ref DO = given : nV Part DO & Ref DO = given ∝
nV DO Part & Ref DO ≠ given : nV Part DO & Ref DO ≠ given ∝
n V DO Part : n V Part DO

H1: as H0, but there is at least one “≠” instead of an “=”.

In other words, you cannot simply say, “there are 2·2 = 4 cells and I as-
sume each expected frequency is 397 divided by 4, i.e., approximately
100.” If you did that, the upper row total would amount to nearly 200 – but
that can’t be correct since there are only 150 cases of CONSTRUCTION:
VERB-OBJECT-PARTICLE and not ca. 200. Thus, you must include this in-
formation, that there are only 150 cases of CONSTRUCTION: VERB-OBJECT-
PARTICLE into the computation of the expected frequencies. The easiest
way to do this is using percentages: there are 150/397 cases of
CONSTRUCTION: VERB-OBJECT-PARTICLE (i.e. 0.3778 = 37.78%). Then,
there are 185/397 cases of GIVENNESS: GIVEN (i.e., 0.466 = 46.6%). If the two
variables are independent of each other, then the probability of their joint
occurrence is 0.3778·0.466 = 0.1761. Since there are altogether 397 cases
to which this probability applies, the expected frequency for this combina-
tion of variable levels is 397·0.1761 = 69.91. This logic can be reduced to
the formula in (27).

(27) nexpected cell frequency = (row sum ⋅ column sum) / n

If you apply this logic to every cell, you get Table 26.

Table 26. Expected construction frequencies of Peters (2001)

                              GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART       69.9               80.1             150
CONSTRUCTION: V PART DO       115.1              131.9            247
Column totals                 185                212              397

You can immediately see that this table corresponds to the above null
hypothesis: the ratios of the values in each row and column are exactly
those of the row totals and column totals respectively. For example, the
ratio of 69.9 to 80.1 to 150 is the same as that of 115.1 to 131.9 to 247 and
as that of 185 to 212 to 397, and the same is true in the other dimension.
Thus, the null hypothesis does not mean ‘all cell frequencies are identical’
– it means ‘the ratios of the cell frequencies are equal (to each other and to the
respective marginal totals)’.
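
If you want to verify Table 26 in R, the expected frequencies can be computed directly from the marginal totals of the observed table. The following is just a minimal sketch (the object name Peters.2001.exp is introduced here only for illustration), and you will see below that chisq.test returns exactly these values:

> Peters.2001.exp<-outer(rowSums(Peters.2001),—colSums(Peters.2001))/
sum(Peters.2001)—#—reproduces—the—expected—frequencies—in—Table—26¶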


This method to compute expected frequencies can be extended to arbi-
trarily complex frequency tables (as you will see again in Chapter 5). But
how do we test whether these deviate strongly enough from the observed
frequencies? And do we really need such complicated hypotheses? Fortu-
nately, there are simple and interrelated answers to these questions. As was
mentioned above, the chi-square test for independence is very similar to the
chi-square goodness-of-fit test: for each cell, you compute a contribution to
chi-square, you sum those up to get the chi-square test statistic, and since
we have discussed above that chi-square becomes zero when the observed
frequencies are the same as the expected, we can abbreviate our hypothesis
considerably:

H0: χ2 = 0.
H1: χ2 > 0.

And since this kind of null hypothesis does not require any specific ob-
served or expected frequencies, it allows you to stick to the order of steps
in the above procedure and formulate hypotheses before having data.
As before, the chi-square test can only be used when its assumptions are
met. The expected frequencies are large enough and for simplicity’s sake
we assume here that every subject only gave just one sentence so that the
observations are independent of each other: for example, the fact that some
subject produced a particular sentence on one occasion does then not affect
any other subject’s formulation. We can therefore proceed as above and
compute (the sum of) the contributions to chi-square on the basis of the
same formula, here repeated as (28):

(28) Pearson χ2 = ∑i=1…n (observed − expected)2 / expected

The results are shown in Table 27 and the sum of all contributions to
chi-square, chi-square itself, is 9.82. However, we again need the number
of degrees of freedom. For two-dimensional tables and when the expected
frequencies are computed on the basis of the observed frequencies as you
did above, the number of degrees of freedom is computed as shown in
(29).24

24. In our example, the expected frequencies were computed from the observed frequencies
    in the marginal totals. If you compute the expected frequencies not from your observed
    data but from some other distribution, the computation of df changes to: df = (number of
    rows ⋅ number of columns)-1.

Table 27. Contributions to chi-square for the data of Peters (2001)

                              GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART       3.26               2.85
CONSTRUCTION: V PART DO       1.98               1.73
Column totals                                                     9.82
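
As a quick check, the sum of the contributions in Table 27 can be computed in a single line; this is again only a minimal sketch, reusing the matrix Peters.2001.exp of expected frequencies from the sketch above:

> sum((Peters.2001-Peters.2001.exp)^2/Peters.2001.exp)—#—yields—the—chi-square—value—of—ca.—9.82¶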

(29) df = (no. of rows-1) ⋅ (no. of columns-1) = (2-1)⋅(2-1) = 1

With both the chi-square and the df-value, you can look up the result in
a chi-square table. As above, if the observed chi-square value is larger than
the one tabulated for p = 0.05 at the required df-value, then you can reject
H0. Thus, Table 28 is the same as Table 21 and can be generated with
qchisq as explained above.

Table 28. Critical χ2-values for ptwo-tailed = 0.05, 0.01, and 0.001 for 1 ≤ df ≤ 3
p = 0.05 p = 0.01 p = 0.001
df = 1 3.841 6.635 10.828
df = 2 5.991 9.21 13.816
df = 3 7.815 11.345 16.266

Here, chi-square is not only larger than the critical value for p = 0.05
and df = 1, but also larger than the critical value for p = 0.01 and df = 1.
But, since the chi-square value is not also larger than 10.827, the actual p-
value is somewhere between 0.01 and 0.001: the result is very significant,
but not highly significant.
Fortunately, all this is much easier when you use R’s built-in function.
Either you compute just the p-value as before,

> pchisq(9.82,—1,—lower.tail=F)¶
[1]—0.001726243

or you use the function chisq.test and do everything in a single step.


The most important arguments for our purposes are


− x: the two-dimensional table for which you want to do a chi-square test;


− correct=T or correct=F; cf. above for the continuity correction.25

> eval.Pet<-chisq.test(Peters.2001,—correct=F);—eval.Pet¶
————————Pearson's—Chi-squared—test
data:——Peters.2001
X-squared—=—9.8191,—df—=—1,—p-value—=—0.001727

As before, you can also obtain the expected frequencies or just the chi-
square value itself:

> eval.Pet$expected¶
————————————GIVENNESS
CONSTRUCTION—————given———————new
———V_DO_Part——69.89924——80.10076
———V_Part_DO—115.10076—131.89924
> eval.Pet$statistic¶
X-squared
—9.819132

You now know that GIVENNESS is correlated with CONSTRUCTION, but


you neither know yet how strong that effect is nor which variable level
combinations are responsible for this result. As for the effect size, even
though you might be tempted to use the size of the chi-square value to
quantify the effect, you must not do that. This is because the chi-square
value is dependent on the sample size, as we can easily see:

> chisq.test(Peters.2001*10,—correct=F)¶
————————Pearson's—Chi-squared—test
data:——Peters.2001—*—10
X-squared—=—98.1913,—df—=—1,—p-value—<—2.2e-16

For effect sizes, this is of course a disadvantage since just because the
sample size is larger, this does not mean that the relation of the values to
each other has changed, too. You can easily verify this by noticing that the
ratios of percentages, for example, have stayed the same. For that reason,
the effect size is often quantified with a coefficient of correlation (called φ
in the case of k×2/m×2 tables or Cramer’s V for k×m tables with k or m >
2), which falls into the range between 0 and 1 (0 = no correlation; 1 = per-
fect correlation) and is unaffected by the sample size. This correlation coef-
ficient is computed according to the formula in (30):

25. For further options, cf. again ?chisq.test¶. Note also what happens when you enter
summary(Peters.2001)¶.

(30) φ / Cramer’s V / Cramer’s index I = √(χ2 / (n ⋅ (min[nrows, ncolumns] − 1)))

In R, you can of course do this in one line of code:

> sqrt(eval.Pet$statistic/(sum(Peters.2001)*
(min(dim(Peters.2001))-1)))¶
X-squared
0.1572683

Given the theoretical range of values, this is a rather small effect size.26
Thus, the correlation is probably not random, but practically not extremely
relevant.
Another measure of effect size, which can however only be applied to
2×2 tables, is the so-called odds ratio. An odds ratio tells you how the like-
lihood of one variable level changes in response to a change of the other
variable’s level. The odds of an event E correspond to the fraction in (31).

(31) odds = pE / (1 − pE)     (you get probabilities back from odds with odds / (1 + odds))

The odds ratio for a 2×2 table such as Table 23 is the ratio of the two
odds (or 1 divided by that ratio, depending on whether you look at the
event E or the event ¬E (not E)):

(32) odds ratio for Table 23 = (85/100) ÷ (65/147) ≈ 1.9223

In words, the odds of CONSTRUCTION: V DO PART are (85/185) / (1 − 85/185) =
85/100 = 0.85 when the referent of the direct object is given and 65/147 =
0.4422 when the referent of the direct object is new. This in turn means that
CONSTRUCTION: V DO PART is 0.85/0.4422 ≈ 1.9223 times more likely when the
referent of the direct object is given than when it is not. From this, it also

26. The theoretical range from 0 to 1 is really only possible in particular situations, but still
a good heuristic to interpret this value.

follows that the odds ratio in the absence of an interaction is ≈ 1 (cf. n. 27).
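
In R, this odds ratio can be computed directly from the table of observed frequencies; a minimal sketch:

> (Peters.2001[1,1]/Peters.2001[2,1])/(Peters.2001[1,2]/Peters.2001[2,2])¶
[1]—1.922308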


This is how you would summarize the above results: “New objects are
strongly preferred in the construction Verb-Particle-Direct Object and are
dispreferred in Verb-Direct Object-Particle. The opposite kind of construc-
tional preference is found for given objects. According to a chi-square test
for independence, this correlation is very significant (χ2 = 9.82; df = 1; ptwo-
tailed < 0.002), but the effect is not particularly strong (φ = 0.157, odds ratio
= 1.9223).”
Table 27 also shows which variable level combinations contribute most
to the significant correlation: the larger the contribution to chi-square of a
cell, the more that cell contributes to the overall chi-square value; in our
example, these values are all rather small – none exceeds the chi-square
value for p = 0.05 and df = 1, i.e., 3.841. In R, you can get the contributions
to chi-square as follows:

> eval.Pet$residuals^2¶
————————————GIVENNESS
CONSTRUCTION————given——————new
———V_DO_Part—3.262307—2.846825
———V_Part_DO—1.981158—1.728841

That is, you compute the Pearson residuals and square them. The Pear-
son residuals in turn can be computed as follows; negative and positive
values mean that observed values are smaller and larger than the expected
values respectively.

> eval.Pet$residuals¶
————————————GIVENNESS
CONSTRUCTION—————given———————new
———V_DO_Part——1.806186—-1.687254
———V_Part_DO—-1.407536——1.314854

Thus, if, given the small contributions to chi-square, one wanted to draw
any further conclusions, then one could only say that the variable level
combination contributing most to the significant result is the combination
of CONSTRUCTION: V DO PART and GIVENNESS: GIVEN, but the individual

27. Often, you may find the logarithm of the odds ratio. When the two variables are not
correlated, this log odds ratio is log 1 = 0, and positive/negative correlations result in
positive/negative log odds ratios, which is often a little easier to interpret. For example,
if you have two odds ratios such as odds ratio1 = 0.5 and odds ratio2 = 1.5, then you
cannot immediately see, which effect is larger. The logs of the odds ratios – log10 odds
ratio1 = -0.301 and log10 odds ratio2 = 0.176 – tell you immediately the former is larger
because its absolute value is larger.

cells’ effects here are really rather small. An interesting and revealing gra-
phical representation is available with the function assocplot, whose most
relevant argument is the two-dimensional table under investigation: In this
plot, “the area of the box is proportional to the difference in observed and
expected frequencies” (cf. R Documentation, s.v. assocplot for more de-
tails). The black rectangles above the dashed lines indicate observed fre-
quencies exceeding expected frequencies; grey rectangles below the dashed
lines indicate observed frequencies smaller than expected frequencies; the
heights of the boxes are proportional to the above Pearson residuals and the
widths are proportional to the square roots of the expected frequencies.

> assocplot(Peters.2001)¶

Figure 45. Association plot for CONSTRUCTION~GIVENNESS

(As a matter of fact, I usually prefer to transpose the table before I plot
an association plot because then the row/column organization of the plot
corresponds to that of the original table: assocplot(t(Peters.2001))¶)
Another interesting way to look at the data is a mixture between a plot and
a table. The table/graph in Figure 46 has the same structure as Table 23, but
(i) the sizes in which the numbers are plotted directly reflect the size of the
residuals (i.e., bigger numbers deviate more from the expected frequencies
than smaller numbers, where bigger and smaller are to be understood in
terms of plotting size), and (ii) the coloring indicates how the observed

frequencies deviate from the expected ones: dark grey indicates positive
residuals and light grey indicates negative residuals. (The function to do
this is available from me upon request; for lack of a better term, for now I
refer to this as a cross-tabulation plot.)
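
Since that function is not part of R itself, the following is merely a rough sketch of the general idea (it is not that function), reusing eval.Pet from above: the observed frequencies are plotted with character sizes proportional to the absolute Pearson residuals and colored according to the signs of the residuals.

> x<-rep(1:2,—each=2);—y<-rep(2:1,—2)—#—cell—positions—in—column-major—order¶
> plot(x,—y,—type="n",—xlim=c(0.5,—2.5),—ylim=c(0.5,—2.5),—
xlab="GIVENNESS",—ylab="CONSTRUCTION",—axes=F)¶
> text(x,—y,—labels=as.vector(Peters.2001),—
cex=abs(as.vector(eval.Pet$residuals)),—
col=ifelse(as.vector(eval.Pet$residuals)>0,—"darkgrey",—"lightgrey"))¶
> axis(1,—at=1:2,—labels=colnames(Peters.2001))¶
> axis(2,—at=2:1,—labels=rownames(Peters.2001))¶
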
Let me finally emphasize that the above procedure is again the one pro-
viding you with a p-value for a two-tailed test. In the case of 2×2 tables,
you can perform a one-tailed test as discussed in Section 4.1.1.2 above, but
you cannot do one-tailed tests for tables with df > 1. In Section 5.1, we will
discuss an extension of chi-square tests to tables with more than two va-
riables.

Figure 46. Cross-tabulation plot for CONSTRUCTION~GIVENNESS

Recommendation(s) for further study


− the functions dotchart and sieve (from the library(vcd)) as well as
table.cont (from the library(ade4)) for other kinds of plots
− the function assocstats (from the library(vcd)) for a different way
to compute chi-square tests and effect sizes at the same time
− the function CrossTable (from the library(gmodels)) for more com-
prehensive tables
− the argument simulate.p.value=T of the function chisq.test and the
function fisher.test, which you can use when the expected frequen-
cies are too small for a regular chi-square test

− the function mantelhaen.test to test for three-way interactions


− the Marascuilo procedure to test which observed row or column fre-
quencies are different from each other in pairwise tests (cf. Gries,
forthc., who also discusses how to test a subtable out of a larger table)
− Dalgaard (2002: Ch. 7), Crawley (2005: 85ff.), Crawley (2007: 301ff.)
− Good and Hardin (2006: 79f.) on problems of chi-square tests and some
alternative suggestions

Let me mention one additional useful application of the chi-square test


that is similar to the Mantel-Haenszel test (from Zar 1999: Section 23.4).
Sometimes, you may have several isomorphic 2×2 tables on the same phe-
nomenon, maybe because you found another source that discusses the same
kind of data. You may then want to know whether or not the data are so
similar that you can actually merge or amalgamate the data into one single
data set. Here are the text hypotheses for that kind of question:

H0: The trends in the data are identical: heterogeneity χ2 = 0.


H1: The trends in the data are not identical: heterogeneity χ2 > 0.

To explore this approach, let us compare Peters’s data to those of Gries


(2003a). You can enter the latter into R directly:

> Gries.2003<-matrix(c(143,—66,—53,—141),—ncol=2,—byrow=T)¶
> rownames(Gries.2003)<-rownames(Peters.2001)¶
> colnames(Gries.2003)<-colnames(Peters.2001)¶
> Gries.2003¶
——————————given—new
V_DO_Part———143——66
V_Part_DO————53—141

On the one hand, these data look very different from those of Peters
(2001) because, here, when GIVENNESS is GIVEN, then CONSTRUCTION:
V_DO_PART is nearly three times as frequent as CONSTRUCTION: V_
PART_DO (and not in fact less frequent, as in Peters’s data). On the other
hand, the data are also similar because in both cases given direct objects
increase the likelihood of CONSTRUCTION:V_DO_PART. A direct compari-
son of the association plots (not shown here, but you can use the following
code to generate them) makes the data seem very much alike – how much
more similar could two association plots be?

> par(mfrow=c(1,2))¶
> assocplot(Peters.2001)¶

> assocplot(Gries.2003,—xlab="CONSTRUCTION",—
ylab="GIVENNESS")¶
> par(mfrow=c(1,1))—#—restore—the—standard—plotting—setting¶

However, you should not really compare the sizes of the boxes in asso-
ciation plots – only the overall tendencies – so we now turn to the hetero-
geneity chi-square test. The heterogeneity chi-square value is computed as
the difference between the sum of chi-square values of the original tables
and the chi-square value you get from the merged tables (that’s why they
have to be isomorphic), and it is evaluated with a number of degrees of
freedom that is the difference between the sum of the degrees of freedom of
all merged tables and the degrees of freedom of the merged table. Sounds
pretty complex, but in fact it is not. The following code should make every-
thing clear.
First, you compute the chi-square test for the data from Gries (2003):

> eval.Gr<-chisq.test(Gries.2003,—correct=F);—eval.Gr¶
————————Pearson's—Chi-squared—test
data:——Gries.2003
X-squared—=—68.0364,—df—=—1,—p-value—<—2.2e-16

Then you compute the sum of chi-square values of the original tables:

> sum.chisq.indiv.tables<-eval.Pet$stat+eval.Gr$stat¶

After that, you compute the chi-square value of the combined table:

> eval.total<-chisq.test(Peters.2001+Gries.2003,—correct=F)¶
> sum.chisq.merged.table<-eval.total$stat¶

And then the heterogeneity chi-square and its degrees of freedom (you
get the df-values with $parameter):

> het.chisq<-sum.chisq.indiv.tables-sum.chisq.merged.table¶
> het.df<-sum(eval.Pet$parameter,—eval.Gr$parameter)-
eval.total$parameter¶

How do you now get the p-value for these results?

THINK
BREAK

> pchisq(het.chisq,—het.df,—lower.tail=F)¶
[1]—0.0005387754

As you can see, the data from the two studies are actually rather differ-
ent: yes, they exhibit the same overall trends (given objects increase the
likelihood of CONSTRUCTION: V_DO_PART), but they still differ highly sig-
nificantly from each other (χ2heterogeneity = 11.98; df = 1; ptwo-tailed < 0.001).
What is responsible for this difference? The different effect sizes: the odds
ratio for Peters’s data was 1.92, but in Gries’s data it is nearly exactly three
times as large:

> (143/66)/(53/141)¶
[1]—5.764151

And that is also what you would write in your results section.

1.2.3. One dep. variable (nominal/categorical) (dep. samples)

One central requirement of the chi-square test for independence is that the
tabulated data points are independent of each other. There are situations,
however, where this is not the case, and in this section I discuss one me-
thod you can use on one such occasion.
Let us assume you want to test whether metalinguistic knowledge influ-
ences acceptability judgments. This is relevant because many acceptability
judgments used in linguistic research were produced by the investigating
linguists themselves, and one may well ask oneself whether it is really
sensible to rely on judgments by linguists with all their metalinguistic
knowledge instead of on judgments by linguistically naïve subjects. This is
especially relevant since studies have shown that judgments by linguists,
who after all think a lot about sentences, constructions, and other expres-
sions, can deviate a lot from judgments by laymen, who usually don’t (cf.
Spencer 1973, Labov 1975, or Greenbaum 1976). In an admittedly over-
simplistic case, you could ask 100 linguistically naïve native speakers to
rate a sentence as ‘acceptable’ or ‘unacceptable’. After the ratings have
been made, you could tell the subjects which phenomenon the study inves-
tigated and which variable you thought influenced the sentences’ accepta-
bility. Then, you would give the sentences back to the subjects to have
them rate them once more. The question would be whether the subjects’
newly acquired metalinguistic knowledge would make them change their
ratings and, if so, how. This question involves

− a dependent nominal/categorical variable, namely BEFORE: ACCEPTABLE


vs. BEFORE: UNACCEPTABLE;
− a dependent nominal/categorical variable, namely AFTER: ACCEPTABLE
vs. AFTER: UNACCEPTABLE;
− dependent samples since every subject produced two judgments.

For such scenarios, you use the McNemar test (or Bowker test, cf. be-
low). This test is related to the chi-square tests discussed above in Sections
4.1.1.2 and 4.1.2.2 and involves the following procedure:

Procedure
Formulating the hypotheses
Testing the assumption(s) of the test:
the observed variable levels are related in a pairwise manner
the expected frequencies are larger than 5
Computing the frequencies you would expect given H0
Computing the contributions to chi-square for all observed frequencies
Summing the contributions to chi-square to get the test statistic χ2
Determining the degrees of freedom df and the probability of error p

First, the hypotheses:

H0: The frequencies of the two possible ways in which subjects pro-
duce a judgment in the second rating task that differs from that in
the first rating task are equal (or shorter χ2 = 0).
H1: The frequencies of the two possible ways in which subjects pro-
duce a judgment in the second rating task that differs from that in
the first rating task are not equal (or shorter χ2 > 0).

To get to know this test, we use the fictitious data summarized in Table
29, which you first read in from the file <C:/_sflwr/_inputfiles/04-1-2-
3_accjudg.txt>.

> AccBeforeAfter<-read.table(choose.files(),—header=T,—
sep="\t",—comment.char="",—quote="")¶
> attach(AccBeforeAfter);—str(AccBeforeAfter)¶
`data.frame':———100—obs.—of——3—variables:
—$—SENTENCE:—int——1—2—3—4—5—6—7—8—9—10—...
—$—BEFORE——:—Factor—w/—2—levels—"acceptable","inacceptable":—
1—...
—$—AFTER———:—Factor—w/—2—levels—"acceptable","inacceptable":—
1—...

Table 29. Observed frequencies in a fictitious study on acceptability judgments


                              AFTER
                      ACCEPTABLE    INACCEPTABLE    Row totals
BEFORE  ACCEPTABLE        31             39              70
        INACCEPTABLE      13             17              30
Column totals             44             56             100

Table 29 already suggests that there has been a major change of judg-
ments: Of the 100 rated sentences, only 31+17 = 48 sentences – not even
half! – were judged identically in both ratings. But now you want to know
whether the way in which the 52 judgments changed is significantly differ-
ent from chance. But what does the chance expectation look like?
The McNemar test only involves those cases where the subjects
changed their opinion. If these are distributed equally, then the expected
distribution of the 52 cases in which subjects change their opinion is that in
Table 30.

Table 30. Expected frequencies in a fictitious study on acceptability judgments


                              AFTER
                      ACCEPTABLE    INACCEPTABLE
BEFORE  ACCEPTABLE                       26
        INACCEPTABLE      26

From this, you can see that both expected frequencies are larger than 5
so you can indeed do the McNemar test. As before, you compute a chi-
square value (using the by now familiar formula in (33)) and a df-value
according to the formula in (34) (where k is the number of rows/columns):

(33) χ2 = Σi=1…n (observed − expected)^2 / expected = 13

(34) df = k ⋅ (k − 1) / 2 = 1
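As a sketch, the same computation in R, using only the two ‘change’ cells from Table 29 (the object names are mine):

changes <- c(39, 13)                          # acceptable -> inacceptable, inacceptable -> acceptable
expected <- rep(sum(changes)/2, 2)            # 26 and 26, cf. Table 30
chisq <- sum((changes-expected)^2/expected)   # 13, as in (33)
df <- 2*(2-1)/2                               # 1, as in (34) with k = 2
pchisq(chisq, df, lower.tail=FALSE)           # the exact p-value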

As before, you can look up this chi-square value in the kind of chi-
square table and, again as before, if the computed chi-square value is larger
than the tabulated one for the relevant df-value for p = 0.05, you may reject
H0. As you can see, the number of changes is too large to be compatible

with H0 and we accept H1. As usual, you can of course compute the exact
p-value with pchisq(13,—1,—lower.tail=F)¶.
This is how you summarize this finding in the results section: “Accord-
ing to a McNemar test, the way 52 out of 100 subjects changed their judg-
ments after they were informed of the purpose of the experiment is signifi-
cantly different from chance: in the second rating task, the number of ‘ac-
ceptable’ judgments is much smaller (χ2 = 13; df = 1; ptwo-tailed < 0.001).”

Table 31. Critical chi-square values for ptwo-tailed = 0.05, 0.01, and 0.001 for
1 ≤ df ≤ 3
p = 0.05 p = 0.01 p = 0.001
df = 1 3.841 6.635 10.828
df = 2 5.991 9.21 13.816
df = 3 7.815 11.345 16.266
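The critical values in Table 31 can also be generated with qchisq; a minimal sketch:

t(sapply(1:3, function(df) qchisq(c(0.05, 0.01, 0.001), df, lower.tail=FALSE)))
# rows: df = 1 to 3; columns: p = 0.05, 0.01, 0.001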

In R, this is again much easier. You need the function mcnemar.test


and it typically requires two arguments:

− x: a two-dimensional table for which you want to compute the McNe-


mar test;
− correct=F or correct=T (the default): when the number of changes is
smaller than 30, then sometimes the continuity correction is recom-
mended.

> mcnemar.test(table(BEFORE,—AFTER),—correct=F)¶
————————McNemar's—Chi-squared—test
data:——table(BEFORE,—AFTER)
McNemar's—chi-squared—=—13,—df—=—1,—p-value—=—0.0003115

The summary and conclusions are of course the same. When you do this
test for k×k tables (with k > 2), it is sometimes called the Bowker test.

Recommendation(s) for further study


− for Cochran’s extension of the McNemar test to test three or more mea-
surements of a dichotomous variable, cf. Bortz (2005: 161f.)
− when a McNemar test yields a significant result, this result may theoret-
ically be (in part) attributable to the order of the tests, which you can
check with the Gart test
− the function runs.test (from the library(tseries)) to test the ran-
domness of a binary sequence

2. Dispersions

Sometimes, it is necessary and/or interesting to not just look at the general


characteristics of a distribution but also at more narrowly defined distribu-
tional characteristics. The two most obvious characteristics are the disper-
sion and the central tendency of a distribution. This section is concerned
with the dispersion – more specifically, the variance or standard deviation –
of a variable; Section 4.3 discusses measures of central tendency.
For some research questions, it is useful to know, for example, whether
two distributions have the same or a similar dispersion. Put differently, do
two distributions spread around their means in a similar or in a different
way? We touched upon this topic a little earlier in Section 3.1.3.6, but to
illustrate the point once more, consider Figure 47.

Figure 47. Two fictitious distributions

Figure 47 shows two distributions, one group of 10 values (represented


by round points) and another group of 10 values (represented by crosses).
The means of these groups are shown with the two horizontal lines (dashed

for the first group), and the deviations of each point from its group mean
are shown with the vertical lines. As you can easily see, the groups do not
just differ in terms of their means (meangroup 2 = 1.99; meangroup 1 = 5.94),
but also in terms of their dispersion: the deviations of the points of group 1
from their mean are much larger than their counterparts in group 2. While
this difference is obvious in Figure 47, it can be much harder to discern in
other cases, which is why we need a statistical test. In Section 4.2.1, we
discuss how you test whether the dispersion of one dependent inter-
val/ratio-scaled variable is significantly different from a known dispersion
value. In Section 4.2.2, we discuss how you test whether the dispersion of
one dependent ratio-scaled variable differs significantly in two groups.
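A plot like Figure 47 can be produced with a few lines of code. The following is only a sketch with made-up data, so the group means will not match the ones mentioned above exactly:

set.seed(1)                                  # only to make this sketch reproducible
group1 <- rnorm(10, mean=6, sd=2)            # made-up values for group 1
group2 <- rnorm(10, mean=2, sd=0.5)          # made-up values for group 2
plot(1:10, group1, pch=16, ylim=range(c(group1, group2)), xlab="", ylab="values")
points(1:10, group2, pch=4)
abline(h=mean(group1), lty=2)                # mean of group 1 (dashed line)
abline(h=mean(group2))                       # mean of group 2
segments(1:10, group1, 1:10, mean(group1))   # deviations of group 1 from its mean
segments(1:10, group2, 1:10, mean(group2))   # deviations of group 2 from its mean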

2.1. Goodness-of-fit test for one dep. variable (ratio-scaled)

As an example for this test, we return to the above example on first lan-
guage acquisition of Russian tense-aspect patterning. In Section 4.1.1.1
above, we looked at how the correlation between the use of tense and as-
pect of one child developed over time. Let us assume you now want to test
whether the overall variability of the values for this child is significantly
different from that of other children for which you already have data. Let us
further assume that for these other children you found a variance of 0.025.
This question involves the following variables and is investigated with a
chi-square test as described below:

− a dependent ratio-scaled variable, namely the variable TENSEASPECT,


consisting of the Cramer’s V values;
− no independent variable because you are not testing whether the distri-
bution of the variable TENSEASPECT is influenced by, or correlated
with, something else.

Procedure
Formulating the hypotheses
Computing the observed sample variance
Testing the assumption(s) of the test: the population from which the sample
has been drawn or at least the sample from which the sample va-
riance is computed is normally distributed
Computing the test statistic χ2, the degrees of freedom df, and the probabili-
ty of error p

As usual, you begin with the hypotheses:

H0: The variance of the data for the newly investigated child does not
differ from the variance of the children investigated earlier; sd2
TENSEASPECT of the new child = sd2 TENSEASPECT of the already
investigated children, or sd2 of the new child = 0.025, or the quo-
tient of the two variances is 1.
H1: The variance of the data for the newly investigated child differs
from the variance of the children investigated earlier; sd2
TENSEASPECT of the new child ≠ sd2 TENSEASPECT of the already
investigated children, or sd2 of the new child ≠ 0.025, or the quo-
tient of the two variances is not 1.

You load the data from <C:/_sflwr/_inputfiles/04-2-1_tense-aspect.txt>.

> RussTensAsp<-read.table(choose.files(),—header=T,—
sep="\t",—comment.char="",—quote="")¶
> attach(RussTensAsp)¶

As a next step, you must test whether the assumption of this chi-square
test is met and whether the data are in fact normally distributed. We have
discussed this in detail above so we run the test here without further ado.

> shapiro.test(TENSE_ASPECT)¶
————————Shapiro-Wilk—normality—test
data:———TENSE_ASPECT
W—=—0.9942,—p-value—=—0.9132

Just like in Section 4.1.1.1 above, you get a p-value of 0.9132, which
means you must not reject H0, you can consider the data to be normally
distributed, and you can compute the chi-square test. You first compute the
sample variance that you want to compare to the previous results:

> var(TENSE_ASPECT)¶
[1]—0.01687119

To test whether this value is significantly different from the known va-
riance of 0.025, you compute a chi-square statistic as in formula (35).

(35) χ2 = (n − 1) ⋅ sample variance / population variance

This chi-square value has n-1 = 116 degrees of freedom. In R:

> chi.square<-((length(TENSE_ASPECT)-
1)*var(TENSE_ASPECT))/0.025¶
> chi.square¶
[1]—78.28232

As usual, you can compute those critical values yourself or look up this
chi-square value in the familiar kind of table.

> qchisq(c(0.05,—0.01,—0.001),—116,—lower.tail=F)¶

Table 32. Critical chi-square values for ptwo-tailed = 0.05, 0.01, and 0.001 for
115 ≤ df ≤ 117
p = 0.05 p = 0.01 p = 0.001
df = 115 141.03 153.191 167.61
df = 116 142.138 154.344 168.813
df = 117 143.246 155.496 170.016

Since the obtained value of 78.28 is much smaller than the relevant crit-
ical value of 142.138, the difference between the two variances is not sig-
nificant. You can compute the exact p-value as follows:

> pchisq(chi.square,—(length(TENSE_ASPECT)-1),—lower.tail=F)¶
[1]—0.9971612¶

This is how you would summarize the result: “According to a chi-


square test, the variance of the newly investigated child (0.017) does not
differ significantly from the variance of the children investigated earlier
(0.025): χ2 = 78.28; df = 116; ptwo-tailed > 0.05.”

2.2. One dep. variable (ratio-scaled) and one indep. variable (nominal)

The probably more frequent scenario in the domain ‘testing dispersions’ is


the case where you test whether two samples or two variables exhibit the
same dispersion (or at least two dispersions that do not differ significantly).
Since the difference of dispersions or variances is probably not a concept
you have spent much time thinking about so far, let us look at one illustrative
example from the domain of sociophonetics. Gaudio (1994) studied the
pitch range of heterosexual and homosexual men. At issue was therefore

not the average pitch, but its variability, a good example for how variability
as such can be interesting. In that study, four heterosexual and four homo-
sexual men were asked to read aloud two text passages and the resulting
recordings were played to 14 subjects who were asked to guess which
speakers were heterosexual and which were homosexual. Interestingly, the
subjects were able to distinguish the sexual orientation nearly perfectly.
The only (insignificant) correlation which suggested itself as a possible
explanation was that the homosexual men exhibited a wider pitch range in
one of the text types, i.e., a result that has to do with variability and disper-
sion.
To get to know the statistical procedure needed for such cases we look
at an example from the domain of second language acquisition. Let us as-
sume you want to study how native speakers of a language and very ad-
vanced learners of that language differ in a synonym-finding task in
which both native speakers and learners are presented with words for which
they are asked to name synonyms. You may now not be interested in the
exact numbers of synonyms – maybe the learners are so advanced that
these are actually fairly similar in both groups – but in whether the learners
exhibit more diversity in the amounts of time they needed to come up with
all the synonyms they can name. This question involves

− a dependent ratio-scaled variable, namely SYNTIMES, the time subjects


needed to name the synonyms;
− a nominal/categorical independent variable, namely SPEAKER: LEARNER
and SPEAKER: NATIVE.

This kind of question is investigated with the so-called F-test for ho-
mogeneity of variances, which involves the following steps.

Procedure
Formulating the hypotheses
Computing the sample variance; inspecting a graph
Testing the assumption(s) of the test:
the population from which the sample mean has been drawn or at
least the sample itself is normally distributed
the samples are independent of each other
Computing the test statistic F, the degrees of freedom df, and the probability
of error p

First, you formulate the hypotheses. Note that the alternative hypothesis
is non-directional / two-tailed.

H0: The times the learners need to name the synonyms they can think
of are not differently variable from the times the native speakers
need to name the synonyms they can think of; sd2learner = sd2native.
H1: The times the learners need to name the synonyms they can think
of are differently variable from the times the native speakers need
to name the synonyms they can think of; sd2learner ≠ sd2native.

As an example, we use the (fictitious) data in the file


<C:/_sflwr/_inputfiles/04-2-2_synonymtimes.txt>:

> SynonymTimes<-read.table(choose.files(),—header=T,—
sep="\t",—comment.char="",—quote="")¶
> attach(SynonymTimes);—str(SynonymTimes)¶
`data.frame':———80—obs.—of——3—variables:
—$—CASE————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—SPEAKER—:—Factor—w/—2—levels—"Learner","Native":—1—1—1—...
—$—SYNTIMES:—int——11—7—11—11—8—4—7—10—12—7—...

You compute the variances for both subject groups and plot the data:

> tapply(SYNTIMES,—SPEAKER,—var)¶
—Learner———Native
10.31731—15.75321
> boxplot(SYNTIMES~SPEAKER,—notch=T)¶
> rug(jitter(SYNTIMES),—side=2)¶

At first sight, the data are very similar to each other: the medians are
very close to each other, each median is within the notch of the other, the
boxes have similar sizes, only the ranges of the whiskers differ.
The F-test requires a normal distribution of the population or at least the
sample. We again use the Shapiro-Wilk test from Section 4.1.1.1.

> shapiro.test(SYNTIMES[SPEAKER=="Learner"])¶
————————Shapiro-Wilk—normality—test
data:——SYNTIMES[SPEAKER—==—"Learner"]
W—=—0.9666,—p-value—=—0.279
> shapiro.test(SYNTIMES[SPEAKER=="Native"])¶
————————Shapiro-Wilk—normality—test
data:———SYNTIMES[SPEAKER—==—"Native"]
W—=—0.9774,—p-value—=—0.5943

Figure 48. Boxplot for SYNTIMES~SPEAKER

By the way, this way of doing the Shapiro-Wilk test is not particularly
elegant – can you think of a better one?

THINK
BREAK

In Section 3.2.2 above, we used the function tapply, which allows you
to apply a function to elements of a vector that are grouped according to
another vector/factor. You can therefore write:

>tapply(SYNTIMES,—SPEAKER,—shapiro.test)¶
$Learner
————————Shapiro-Wilk—normality—test
data:——X[[1L]]
W—=—0.9666,—p-value—=—0.2791
$Native
————————Shapiro-Wilk—normality—test
data:——X[[2L]]
W—=—0.9774,—p-value—=—0.5943

Nothing to worry about: both samples do not deviate significantly from


normality and you can do an F-test. This test requires you to compute the

quotient of the two variances (traditionally, the larger variance is used as


the numerator). You can therefore adapt your statistical hypotheses:
H0: variance1 / variance2 = F = 1
H1: variance1 / variance2 = F ≠ 1

If the result is significant, you must reject the null hypothesis and con-
sider the variances as heterogeneous – if the result is not significant, you
must not accept the alternative hypothesis: the variances are homogeneous.

> F.val<-var(SYNTIMES[SPEAKER=="Native"])/
var(SYNTIMES[SPEAKER=="Learner"]);—F.val¶
[1]—1.526872

You again need to consider degrees of freedom, this time even two: one
for the numerator, one for the denominator. Both can be computed very
easily by just subtracting one from the sample sizes (of the samples for the
variances); cf. the formulae in (36).

(36) dfnumerator = nnumerator sample-1; dfdenominator = ndenominator sample-1

You get 39 in both cases and can look up the result in an F-table.
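In R, these two df-values can be obtained directly from the sample sizes; a minimal sketch (the object names are mine):

df.numerator   <- sum(SPEAKER=="Native") - 1    # 39; the Native sample provides the (larger) numerator variance
df.denominator <- sum(SPEAKER=="Learner") - 1   # 39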

Table 33. Critical F-values for ptwo-tailed = 0.05 and 38 ≤ df1, 2 ≤ 40


df2 = 38 df2 = 39 df2 = 40
df1 = 38 1.907 1.8963 1.8862
df1 = 39 1.9014 1.8907 1.8806
df1 = 40 1.8961 1.8854 1.8752

Obviously, the result is not significant: the computed F-value is smaller


than the tabulated one for p = 0.05 (which is 1.8907). As usual, you can
compute the critical F-values yourself, and you would have to use the func-
tion qf for that. We need four arguments:

− p: the p-value for which you want to determine the critical F-value (for
some df-values);
− df1 and df2: the two df-values for the p-value for which you want to
determine the critical F-value;
− the argument lower.tail=F, to instruct R to only consider the area
under the curve above / to the right of the relevant F-value.

There is one last thing, though. When we discussed one- and two-tailed
tests in Section 1.3.4 above, I mentioned that in the graphical representa-
tion of one-tailed tests (cf. Figure 6 and Figure 8) you add the probabilities
of the events you see when you move away from the expectation of the null
hypothesis in one direction while in the graphical representation of two-
tailed tests (cf. Figure 7 and Figure 9) you add the probabilities of the
events you see when you move away from the expectation of the null hypo-
thesis in both directions. The consequence of that was that prior knowledge
that allowed you to formulate a directional alternative hypothesis was re-
warded such that you needed a less extreme finding to get a significant
result. This also means, however, that when you want to compute a two-
tailed p-value using lower.tail=F, then you need the p-value for 0.05/2 =
0.025. This value tells you which F-value cuts off 0.025 on the right side of
the graph, but since a two-tailed test requires that you cut off the same area
on the left side, too, this means that this is also the desired critical F-value
for ptwo-tailed = 0.05. Figure 49 illustrates this logic:

Figure 49. Density function for an F-distribution with df1 = df2 = 39, two-tailed
test

The right vertical line indicates the F-value you need to obtain for a sig-
nificant two-tailed test with df1, 2 = 39; this F-value is the one you already
know from Table 33 – 1.8907 – which means you get a significant two-
tailed result if one variance is 1.8907 times larger than the other. The left

vertical line indicates the F-value you need to obtain for a significant one-
tailed test with df1, 2 = 39; this F-value is 1.7045, which means you get a
significant one-tailed result if the variance you predict to be larger (!) is
1.7045 times larger than the other. To compute the F-values for the two-
tailed tests yourself, as a beginner you may want to enter just these lines
and proceed in a similar way for all other cells in Table 33.

> qf(0.025,—39,—39,—lower.tail=T)—#—the—value—at—the—right
margin—of—the—left—grey—area¶
[1]—0.5289
> qf(0.025,—39,—39,—lower.tail=F)—#—the—value—at—the—left
margin—of—the—right—grey—area
[1]—1.890719

Alternatively, if you are more advanced already, you can generate all of
Table 33 right away:

> p.values<-matrix(rep(0.025,—9),—byrow=T,—ncol=3)¶
> df1.values<-matrix(rep(c(38,—39,—40),—3),—byrow=F,—ncol=3)¶
> df2.values<-matrix(rep(c(38,—39,—40),—3),—byrow=T,—ncol=3)¶
> qf(p.values,—df1.values,—df2.values,—lower.tail=F)¶
—————————[,1]—————[,2]—————[,3]
[1,]—1.907004—1.896313—1.886174
[2,]—1.901431—1.890719—1.880559
[3,]—1.896109—1.885377—1.875197

The observed F-value is obviously too small for a significant result:


1.53 < 1.89. It is more useful, however, to immediately compute the p-
value for your F-value. Since you now use the reverse of qf, pf, you must
now not divide by 2 but multiply by 2:

> 2*pf(F.val,—39,—39,—lower.tail=F)¶
[1]—0.1907904

As we’ve seen, with a p-value of p = 0.1908, the F-value of about 1.53


for df1, 2 = 39 is obviously not significant. The function for the F-test in R
that easily takes care of all of the above is called var.test and it requires
at least two arguments, the two samples. Just like many other functions,
you can approach this in two ways: you can provide R with a formula,

> var.test(SYNTIMES~SPEAKER)¶
————————F—test—to—compare—two—variances
data:——SYNTIMES—by—SPEAKER
F—=—0.6549,—num—df—=—39,—denom—df—=—39,—p-value—=—0.1908

alternative—hypothesis:—true—ratio—of—variances—is—not—
equal—to—1
95—percent—confidence—interval:
—0.3463941—1.2382959
sample—estimates:
ratio—of—variances
—————————0.6549339

or you can use a vector-based alternative:

> var.test(SYNTIMES[SPEAKER=="Learner"],—
SYNTIMES[SPEAKER=="Native"])¶

Do not be confused if the F-value you get from R is not the same as the
one you computed yourself. Barring mistakes, the value outputted by R is
then 1/F-value – R does not automatically put the larger variance into the
numerator, but the variance whose name comes first in the alphabet, which
here is “Learner” (before “Native”). The p-value then shows you that R’s
result is the same as yours. You can now sum this up as follows: “The
native speakers’ synonym-finding times exhibit a variance that is approximately
50% larger than that of the learners (15.75 vs. 10.32), but according
to an F-test, this difference is not significant: F = 1.53; dflearner = 39; dfnative =
39; ptwo-tailed = 0.1908.”

Recommendation(s) for further study


− Dalgaard (2002: 89), Crawley (2007: 289ff.), Baayen (2008: Section
4.2.3)
− the function fligner.test to test the homogeneity of variance when
the data violate the assumption of normality
− Good and Hardin (2006: 61f., 67–70) for other possibilities to compare
variances, to compensate for unequal variances, and for discussion of
the fact that unequal variances can actually be more interesting than un-
equal means

3. Means

The probably most frequent uses of simple significance tests apart from chi-
square tests are tests of differences between means. In Section 4.3.1, we
will be concerned with goodness-of-fit tests, i.e., scenarios where you test
whether an observed measure of central tendency is significantly different
from another already known mean (recall this kind of question from Sec-

tion 3.1.5.1); in Section 4.3.2, we then turn to tests where measures of cen-
tral tendencies from two samples are compared to each other.

3.1. Goodness-of-fit tests

3.1.1. One dep. variable (ratio-scaled)

Let us assume you are again interested in the use of hedges. Early studies
suggested that men and women exhibit different communicative styles with
regard to the frequency of hedges (and otherwise). Let us also assume you
knew from the literature that female subjects in experiments used on aver-
age 12 hedges in a two-minute conversation with a female confederate of
the experimenter. You also knew that the frequencies of hedges are normal-
ly distributed. You now did an experiment in which you recorded 30 two-
minute conversations of female subjects with a male confederate and
counted the same kinds of hedges as were counted in the previous studies
(and of course we assume that with regard to all other parameters, your
experiment was an exact replication of the earlier standards of comparison).
The average number of hedges you obtain in this experiment is 14.83 (with
a standard deviation of 2.51). You now want to test whether this average
number of hedges of yours is significantly different from the value of 12
from the literature. This question involves

− a dependent ratio-scaled variable, namely NUMBERHEDGES, which will


be compared to the value from the literature;
− no independent variable since you do not test whether NUMBERHEDGES
is influenced by something else.

For such cases, you use a one-sample t-test, which involves these steps:

Procedure
Formulating the hypotheses
Testing the assumption(s) of the test: the population from which the sample
mean has been drawn or at least the sample itself is normally
distributed
Computing the test statistic t, the degrees of freedom df, and the probability
of error p

As always, you begin with the hypotheses.



H0: The average of NUMBERHEDGES in the conversations of the sub-


jects with the male confederate does not differ significantly from
the already known average; hedges in your experiment = 12, or
hedges in your experiment-12 = 0.
H1: The average of NUMBERHEDGES in the conversations of the sub-
jects with the male confederate differs significantly from the pre-
viously reported average; hedges in your experiment ≠ 12, or
hedges in your experiment-12 ≠ 0.

Then you load the data from <C:/_sflwr/_inputfiles/04-3-1-


1_hedges.txt>:

> Hedges<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Hedges)¶

While the literature mentioned that the numbers of hedges are normally
distributed, you test whether this holds for your data, too:

> shapiro.test(HEDGES)¶
————————Shapiro-Wilk—normality—test
data:——HEDGES
W—=—0.946,—p-value—=—0.1319

It does. You can therefore immediately proceed with computing the t-


value using the formula in (37).

(37) t = (x̄sample − x̄population) / (sdsample / √nsample)

> numerator<-mean(HEDGES)-12¶
> denominator<-sd(HEDGES)/sqrt(length(HEDGES))¶
> abs(numerator/denominator)¶
[1]—6.191884

To see what this value means, we need degrees of freedom again.


Again, this is easy here since df = n-1, i.e., df = 29. When you look up the t-
value for df = 29 in the usual kind of table, the t-value you computed must
again be larger than the one tabulated for your df at p = 0.05.

Table 34. Critical t-values for ptwo-tailed = 0.05, 0.01, and 0.001 for 28 ≤ df ≤ 30
p = 0.05 p = 0.01 p = 0.001
df = 28 2.0484 2.7633 3.6739
df = 29 2.0452 2.7564 3.6594
df = 30 2.0423 2.75 3.646
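As with the critical F-values in Table 33, you can generate the critical t-values in Table 34 with qt; a minimal sketch (the p-values are halved because the table is for two-tailed tests):

t(sapply(28:30, function(df) qt(c(0.05, 0.01, 0.001)/2, df, lower.tail=FALSE)))
# rows: df = 28 to 30; columns: p = 0.05, 0.01, 0.001 (two-tailed)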

To compute the critical t-value, you can use qt with the p-value and the
required df-value. Since you do a two-tailed test, you must cut off 0.05/2 =
2.5% on both sides of the distribution, which is illustrated in Figure 50.

Figure 50. Density function for a t-distribution for df = 29, two-tailed test

The critical t-value for p = 0.025 and df = 29 is therefore:

> qt(c(0.025,—0.975),—29,—lower.tail=F)—#—note—that—0.05—is—
again—divided—by—2!¶
[1]——2.045230—-2.045230

The exact p-value can be computed with pt and the obtained t-value is
highly significant: 6.1919 is not just larger than 2.0452, but even larger
than the t-value for p = 0.001 and df = 29. You could also have guessed that
because the t-value of approx. 6.2 is very far in the right grey margin in
Figure 50.
To sum up: “On average, female subjects that spoke to a male confede-
rate of the experimenter for two minutes used 14.83 hedges (standard devi-
ation: 2.51). According to a one-sample t-test, this average is highly signif-

icantly larger than the value previously noted in the literature (for female
subjects speaking to a female confederate of the experimenter): t = 6.1919;
df = 29; ptwo-tailed < 0.001.”

> 2*pt(6.191884,—29,—lower.tail=F)—#—note—that—the—t-value—
is—multiplied—with—2!¶
[1]—9.42153e-07

With the right function in R, you need just one line. The relevant func-
tion is called t.test and requires the following arguments:

− x: a vector with the sample data;


− mu=…, i.e., the population mean to which the sample mean computed
from x is to be compared;
− alternative="two.sided" for two-tailed tests (the default) or one of
alternative="greater" or alternative="less", depending on
which alternative hypothesis you want to test: alternative="less"
states that the sample mean is smaller than the population mean, and
alternative="greater" states that the sample mean is larger than the
population mean (see the sketch after this list).
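For instance, if you had formulated the directional alternative hypothesis that the new sample mean is larger than 12, a sketch of the corresponding call would be:

t.test(HEDGES, mu=12, alternative="greater")   # one-tailed test: sample mean > 12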

> t.test(HEDGES,—mu=12)¶
————————One—Sample—t-test
data:——HEDGES
t—=—6.1919,—df—=—29,—p-value—=—9.422e-07
alternative—hypothesis:—true—mean—is—not—equal—to—12
95—percent—confidence—interval:
—13.89746—15.76921
sample—estimates:
mean—of—x
—14.83333

You get the already known mean of 14.83 as well as the df- and t-value
we computed semi-manually. In addition, we get the exact p-value and the
confidence interval of the mean, which does not include the tested value of
12 (which is why the result is significant).

Recommendation(s) for further study


Dalgaard (2002: 81ff.), Baayen (2008: Section 4.1.2)

3.1.2. One dep. variable (ordinal)

In the previous section, we discussed a test that allows you to test whether
the mean of a sample from a normally-distributed population is different
from an already known population mean. This section deals with a test you
can use when the data violate the assumption of normality or when they are
not ratio-scaled to begin with. We will explore this test by looking at an
interesting little morphological phenomenon, namely subtractive word-
formation processes in which parts of usually two source words are merged
into a new word. Two such processes are blends and complex clippings;
some well-known examples of the former are shown in (38a), while (38b)
provides a few examples of the latter; in all examples, the letters of the
source words that enter into the new word are underlined.

(38) a. brunch (breakfast × lunch), motel (motor × hotel), smog


(smoke × fog), foolosopher (fool × philosopher)
b. scifi (science × fiction), fedex (federal × express), sysadmin
(system × administrator)

One question that may arise upon looking at these coinages is to what
degree the formation of such words is supported by some degree of similar-
ity of the source words. There are many different ways to measure the simi-
larity of words, and the one we are going to use here is the so-called Dice
coefficient (cf. Brew and McKelvie 1996). You can compute a Dice coeffi-
cient for two words in two simple steps. First, you split the words up into
letter (or phoneme or …) bigrams. For motel (motor × hotel) you get:

− motor: mo, ot, to, or;


− hotel: ho, ot, te, el.

Then you count how many of the bigrams of each word occur in the
other word, too. In this case, these are two: the ot of motor also occurs in
hotel, and thus the ot of hotel also occurs in motor.28 This number, 2, is
divided by the number of bigrams to yield the Dice coefficient:

(39) Dicemotor & hotel = 2/8 = 0.25

28. In R, such computations can be easily automated and done for hundreds of thousands of
words. For example, for any one word contained in a vector a, this line returns all its bi-
grams: substr(rep(a,—nchar(a)-1),—1:(nchar(a)-1),—2:(nchar(a)))¶; for many
such applications, cf. Gries (2009).
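To make the computation in (39) concrete, here is a minimal sketch of a function that computes the Dice coefficient of two words from their letter bigrams (the function name dice.coef is mine, not a function that comes with R):

dice.coef <- function(word1, word2) {
   bigrams1 <- substring(word1, 1:(nchar(word1)-1), 2:nchar(word1))      # bigrams of word 1
   bigrams2 <- substring(word2, 1:(nchar(word2)-1), 2:nchar(word2))      # bigrams of word 2
   shared <- sum(bigrams1 %in% bigrams2) + sum(bigrams2 %in% bigrams1)   # shared bigrams, counted from both sides
   shared / (length(bigrams1) + length(bigrams2))
}
dice.coef("motor", "hotel")   # 0.25, as in (39)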

In other words, the Dice coefficient is the percentage of shared bigrams


out of all bigrams (and hence ratio-scaled). We will now investigate the
question of whether source words that entered into subtractive word-
formation processes are more similar to each other than words in general
are similar to each other. Let us assume, you know that the average Dice
coefficient of randomly chosen words is 0.225 (with a standard deviation of
0.0809; the median is 0.151 with an interquartile range of 0.125). These
figures already suggest that the data may not be normally distributed.29
This study involves

− a dependent ratio-scaled variable, namely the SIMILARITY of the source


words, which will be compared with the already known mean/median;
− no independent variable since you do not test whether SIMILARITY is
influenced by something else.

The hypotheses should be straightforward:

H0: The average of SIMILARITY for the source words that entered into
subtractive word-formation processes is not significantly different
from the known average of randomly chosen word pairs; Dice
coefficients of source words = 0.225, or Dice coefficients of source
words-0.225 = 0.
H1: The average of SIMILARITY for the source words that entered into
subtractive word-formation processes is different from the known
average of randomly chosen word pairs; Dice coefficients of source
words ≠ 0.225, or Dice coefficients of source words-0.225 ≠ 0.

The data to be investigated here are in <C:/_sflwr/_inputfiles/04-3-1-


2_dices.txt> ; they are data of the kind studied in Gries (2006).

> Dices<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Dices);—str(Dices)¶
'data.frame':———100—obs.—of——2—variables:
—$—CASE:—int——1—2—3—4—5—6—7—8—9—10—...
—$—DICE:—num——0.19—0.062—0.06—0.064—0.101—0.147—0.062—...

From the summary statistics, you could already infer that the similarities
of randomly chosen words are not normally distributed. We can therefore

29. For authentic data, cf. Gries (2006), where I computed Dice coefficients for all 499,500
possible pairs of 1,000 randomly chosen words.

assume that this is also true of the sample of source words, but of course
you also test this assumption:

> shapiro.test(DICE)¶
————————Shapiro-Wilk—normality—test
data:——DICE
W—=—0.9615,—p-value—=—0.005117

The Dice coefficients are not normally, but symmetrically distributed


(as you could also clearly see in, say, a histogram by entering
hist(DICE)¶). Thus, even though Dice coefficients are ratio-scaled and
although the sample size is larger than 30 (cf. also Section 4.3.2 below),
you may want to be careful/conservative and not use the one-sample t-test
but, for example, the so-called one-sample sign test, which involves the
following steps:

Procedure
Formulating the hypotheses
Computing the frequencies of the signs of the differences between the
observed values and the expected average
Computing the probability of error p

You first rephrase the hypotheses; I only provide the new statistical ver-
sions:

H0: medianDice coefficients of source words = 0.151.


H1: medianDice coefficients of source words ≠ 0.151.

Then, you compute the median and its interquartile range:

> median(DICE);—IQR(DICE)¶
[1]—0.1775
[1]—0.10875

Obviously, the observed median Dice coefficient is a bit higher than


0.151, the median Dice coefficient of the randomly chosen word pairs, but
it is hard to guess whether the difference is going to be significant or not.
Hence, we do the required test. For the one-sample sign test, you first de-
termine how many observations are above and below the expected median,
because if the expected median was a good characterization of the observed
data, then 50% of the observed data should be above the expected median

and 50% should be below it. (NB: you must realize that this means that the
exact sizes of the deviations from the expected median are not considered
here – you only look at whether the observed values are larger or smaller
than the expected median, but not how much larger or smaller.)

> sum(DICE>0.151)¶
[1]—63

63 of the 100 observed values are larger than the expected median –
since you expected 50, it seems as if the Dice coefficients observed in the
source words are significantly larger than those of randomly chosen words.
As before, this issue can also be approached graphically, using the logic
and the function dbinom from Section 1.3.4.1, Figure 6 and Figure 8. Fig-
ure 51 shows the probabilities of all possible results you can get in 100
trials – because you look at the Dice coefficients of 100 subtractive word
formations – but consider the left panel of Figure 51 first. According to H0,
you would expect 50 Dice coefficients to be larger than the expected me-
dian, but you found 63. Thus, you add the probability of the observed result
(the black bar for 63 out of 100) to the probabilities of all those that deviate
from H0 even more extremely, i.e., the chances to find 64, 65, …, 99, 100
Dice coefficients out of 100 that are larger than the expected median. These
probabilities from the left panel sum up to approximately 0.006:

> sum(dbinom(63:100,—100,—0.5))¶
[1]—0.006016488

Figure 51. Probability distributions for 100 binomial trials

But you are not finished yet … As you can see in the left panel of Fig-

ure 51, so far you only include the deviations from H0 in one direction – the
right – but your alternative hypothesis is non-directional, i.e., two-tailed.
To do a two-tailed test, you must therefore also include the probabilities of
the events that deviate just as much and more from H0 in the other direc-
tion: 37, 36, …, 1, 0 Dice coefficients out of 100 that are smaller than the
expected median, as represented in the right panel of Figure 51. The proba-
bilities sum up to the same value (because the distribution of binomial
probabilities around p = 0.5 is symmetric).

> sum(dbinom(37:0,—100,—0.5))¶
[1]—0.006016488

Again: if you expect 50 out of 100, but observe 63 out of 100, and want
to do a two-tailed test, then you must add the summed probability of find-
ing 63 to 100 larger Dice coefficients (the upper/right 38 probabilities) to
the summed probability of finding 0 to 37 smaller Dice coefficients (the
lower/left 38 probabilities). As a result, you get a ptwo-tailed-value of
0.01203298, which is obviously significant. You can sum up: “The investi-
gation of 100 subtractive word formations resulted in an average source-
word similarity of 0.1775 (median, IQR = 0.10875). 63 of the 100 source
words were more similar to each other than expected, which, according to a
two-tailed sign test, is a significant deviation from the average similarity of
random word pairs (median = 0.151, IQR = 0.125): pbinomial = 0.012.”
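Incidentally, the same two-tailed binomial p-value can be obtained in a single step with the function binom.test; this is just a shortcut for the dbinom sums above:

binom.test(63, 100, p=0.5)   # two-sided by default; returns the p-value of about 0.012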
Recall that this one-sample sign test only uses nominal information,
whether each data point is larger or smaller than the expected reference
median. If the distribution of the data is rather symmetrical – as it is here –
then there is an alternative test that also takes the sizes of the deviations
into account, i.e. uses at least ordinal information. This so-called one-
sample signed-rank test can be computed using the function wilcox.test.
Apart from the vector to be tested, the following arguments are relevant:

− alternative: a character string saying which alternative hypothesis


you want to test: the default is "two.sided", other possible values for
one-tailed tests are "less" or "greater", which specify how the first-
named vector relates to the specified reference median;
− mu=…: the reference median expected according to H0;
− exact=T, if you want to compute an exact test (rather than an estima-
tion; only when your sample size is smaller than 50) or exact=F, if an
asymptotic test is sufficient; the latter is the default;
− correct=T for a continuity correction or correct=F for none;

− conf.level: a value between 0 and 1 specifying the size of the confi-


dence interval; the default is 0.95.

Since you have a non-directional alternative hypothesis, you do a two-


tailed test by simply adopting the default setting for alternative:

> wilcox.test(DICE,—mu=0.151,—correct=F)¶
————————Wilcoxon—signed—rank—test—with—continuity—correction
data:——DICE
V—=—3454.5,—p-value—=—0.001393
alternative—hypothesis:—true—location—is—not—equal—to—0.151

The test confirms the previous result: both the one-sample sign test,
which is only concerned with the directions of deviations, and the one-
sample signed rank test, which also considers the sizes of these deviations,
indicate that the source words of the subtractive word-formations are more
similar to each other. This should, however, encourage you to make sure
you formulate exactly the hypothesis you are interested in (and then use the
required test).

Recommendation(s) for further study


− for the sake of completeness, there is a slightly better function for the
Wilcoxon-test, wilcox.exact (from the library(exactRankTests),
which is not under development anymore, but the successor package,
coin, doesn’t have wilcox.exact (yet)). Although wilcox.test can
take the argument exact=T, this function still has problems with ties –
wilcox.exact does not and is thus sometimes preferable
− Dalgaard (2002: 85f.), Baayen (2008: Section 4.1.2)
− for the one-sample sign test, you may also want to read up on what to do
in cases when one or more of the observed values is exactly as large as
the expected median (e.g. in Marascuilo and McSweeney 1977)

3.2. Tests for differences/independence

A particularly frequent scenario requires you to test two groups of elements


with regard to whether they differ in their central tendency. There are again
several factors that determine which test to choose:

− the kind of samples: dependent or independent (cf. Section 1.3.4.1);



− the level of measurement of the dependent variable: interval/ratio-scaled


vs. ordinal;
− the distribution of (interval/ratio-scaled) dependent variable: normal vs.
non-normal;
− the sample sizes.

The first factor can be dealt with in isolation, but the others are interre-
lated. Simplifying a bit: if the dependent variable is ratio-scaled as well as
normally distributed, or both sample sizes are larger than 30, or the dif-
ferences between variables are normally distributed, then you can usually do
a t-test (for independent or dependent samples, as required) – otherwise you
must do a U-test (for independent samples) or a Wilcoxon test (for depen-
dent samples). The reason for this decision procedure is that while the t-test
for independent samples requires, among other things, normally distributed
samples, one can also show that means of samples of 30+ elements are
usually normally distributed even if the samples as such are not, which was
why we Section 4.3.1.2 at least considered the option of a one-sample t-test
(and then chose the more conservative sign test or one-sample signed-rank
test). Therefore, it is sufficient if the data meet one of the two conditions.
Strictly speaking, the t-test for independent samples also requires homo-
geneous variances, which we will also test for, but we will discuss a version
of the t-test that can handle heterogeneous variances, the t-test after Welch.

3.2.1. One dep. variable (ratio-scaled) and one indep. variable (nominal)
(indep. samples)

The t-test for independent samples is one of the most widely used tests. To
explore it, we use an example from the domain of phonetics. Let us assume
you wanted to study the (rather trivial) non-directional alternative hypothe-
sis that the first formants’ frequencies of men and women differed. You
plan an experiment in which you record men’s and women’s pronunciation
of a relevant set of words and/or syllables, which you then analyze with a
computer (using Audacity or SpeechAnalyzer or …). This study involves

− one dependent ratio-scaled variable, namely F1-FREQUENCIES, whose


averages you are interested in;
− one independent nominal variable, namely SEX: MALE vs. SEX: FEMALE;
− independent samples since, if every subject provides just one data point,
the data points are not related to each other.

The test to be used for such scenarios is the t-test for independent sam-
ples and it involves the following steps:

Procedure
Formulating the hypotheses
Computing the relevant means; inspecting a graph
Testing the assumption(s) of the test:
the population from which the sample has been drawn or at least
the sample is normally distributed (esp. with samples of n
< 30)
the variances of the populations from which the samples have been
drawn or at least the variances of the samples are
homogeneous
Computing the test statistic t, the degrees of freedom df, and the probability
of error p

You begin with the hypotheses.

H0: The average F1 frequency of men is the same as the average F1


frequency of women: meanF1 frequency of men = meanF1 frequency of women, or
meanF1 frequency of men-meanF1 frequency of women = 0.
H1: The average F1 frequency of men is not the same as the average F1
frequency of women: meanF1 frequency of men ≠ meanF1 frequency of women, or
meanF1 frequency of men-meanF1 frequency of women ≠ 0.

The data you will investigate here are part of the data borrowed from a
similar experiment on vowels in Apache. First, you load the data from
<C:/_sflwr/_inputfiles/04-3-2-1_f1-freq.txt> into R:

> Vowels<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Vowels);—str(Vowels)¶
'data.frame':———120—obs.—of——3—variables:
—$—CASE——————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—HZ_F1—————:—num——489—558—425—626—531—...
—$—SEX:—Factor—w/—2—levels—"F","M":—2—2—2—2—2—2—2—2—2—2—...

Then, you compute the relevant means of the frequencies. As usual, the
less elegant way to proceed is this,

> mean(HZ_F1[SEX=="F"])¶
> mean(HZ_F1[SEX=="M"])¶

… and, as usual, we use the more elegant variant with tapply.

> tapply(HZ_F1,—SEX,—mean)¶
———————F—————————M
528.8548——484.2740

To get a better impression of what the data look like, you also imme-
diately generate a boxplot. You set the limits of the y-axis such that it
ranges from 0 to 1,000 so that all values are included and the representation
is maximally unbiased; in addition, you use rug to plot the values of the
women and the men onto the left and right y-axis respectively; cf. Figure
52 and the code file for an alternative that includes a stripchart.

> boxplot(HZ_F1~SEX,—notch=T,—ylim=(c(0,—1000)));—grid()¶
> rug(HZ_F1[SEX=="F"],—side=2)¶
> rug(HZ_F1[SEX=="M"],—side=4)¶

Figure 52. Boxplot for HZ_F1~SEX

The next step consists of testing the assumptions of the t-test. Figure 52
suggests that these data meet the assumptions. First, the boxplots for the
men and the women appear as if the data are normally distributed: the me-
dians are in the middle of the boxes and the whiskers extend nearly equally
long in both directions. Second, the variances seem to be very similar since
the sizes of the boxes and notches are very similar. However, of course you
need to test this and you use the familiar Shapiro-Wilk test:

> tapply(HZ_F1,—SEX,—shapiro.test)¶
$F

————————Shapiro-Wilk—normality—test
data:——X[[1L]]
W—=—0.987,—p-value—=—0.7723
$M
————————Shapiro-Wilk—normality—test
data:——X[[2L]]
W—=—0.9724,—p-value—=—0.1907

The data do not differ significantly from normality. Now you test for
variance homogeneity with the F-test from Section 4.2.2 (whose assump-
tion of normality we now already tested). This test’s hypotheses are:

H0: The variance of the first sample equals that of the second; F = 1.
H1: The variance of one sample differs from that of the other; F ≠ 1.

The F-test with R yields the following result:

> var.test(HZ_F1~SEX)—#—with—a—formula¶
————————F—test—to—compare—two—variances
data:——HZ_F1—by—SEX
F—=—1.5889,—num—df—=—59,—denom—df—=—59,—p-value—=—0.07789
alternative—hypothesis:—true—ratio—of—variances—is—not—
equal—to—1
95—percent—confidence—interval:
—0.949093—2.660040
sample—estimates:
ratio—of—variances
—————————1.588907

The second assumption is also met, if only just about: since the confi-
dence interval includes 1 and the p-value points to a non-significant result,
the variances are not significantly different from each other and you can
compute the t-test for independent samples. This test involves three differ-
ent statistics: the test statistic t, the number of degrees of freedom df, and of
course the p-value. In the case of the t-test we discuss here, the t-test after
Welch, the t-value is computed according to formula (40), where sd2 is the
variance, n is the sample size, and the subscripts 1 and 2 refer to the two
samples of men and women.

(40) t = (x̄1 − x̄2) / √(sd1^2 / n1 + sd2^2 / n2)

In R:

> t.numerator<-mean(HZ_F1[SEX=="M"])-mean(HZ_F1[SEX=="F"])¶
> t.denominator<-sqrt((var(HZ_F1[SEX=="M"])/
length((HZ_F1[SEX=="M"])))+(var(HZ_F1[SEX=="F"])/
length((HZ_F1[SEX=="F"]))))¶
> t<-abs(t.numerator/t.denominator)¶

You get t = 2.441581. The formula for the degrees of freedom is some-
what more complex. First, you need to compute a value called c, and with
c, you can then compute df. The formula to compute c is shown in (41), and
the result of (41) gets inserted into (42).

(41) c = (sd1^2 / n1) / (sd1^2 / n1 + sd2^2 / n2)

(42) df = (c^2 / (n1 − 1) + (1 − c)^2 / (n2 − 1))^−1

> c.numerator<-
var(HZ_F1[SEX=="M"])/length((HZ_F1[SEX=="M"]))¶
> c.denominator<-t.denominator^2¶
> c<-c.numerator/c.denominator¶
> df.summand1<-c^2/(length(HZ_F1[SEX=="M"])-1)¶
> df.summand2<-((1-c)^2)/(length(HZ_F1[SEX=="F"])-1)¶
> df<-(df.summand1+df.summand2)^-1¶

You get c = 0.3862634 and df = 112.1946 ≈ 112. You can then look up
the t-value in the usual kind of t-table (cf. Table 35) or you can compute
the critical t-value in R (with qt(c(0.025,—0.975),—112,—lower.tail=
F)¶, as before, for a two-tailed test you compute the t-value for the p-value
of 0.025).

Table 35. Critical t-values for ptwo-tailed = 0.05, 0.01, and 0.001 for 111 ≤ df ≤ 113
p = 0.05 p = 0.01 p = 0.001
df = 111 1.9816 2.6208 3.3803
df = 112 1.9814 2.6204 3.3795
df = 113 1.9812 2.62 3.3787

As you can see, the observed t-value is larger than the one tabulated for
p = 0.05, but smaller than the one tabulated for p = 0.01: the difference

between the means is significant. The exact p-value can be computed with
pt and for the present two-tailed case you simply enter this:

> 2*pt(t,—112,—lower.tail=F)¶
[1]—0.01618811

In R, you can do all this with the function t.test. This function takes
several arguments, the first two of which – the relevant samples – can be
given by means of a formula or with two vectors. These are the other rele-
vant arguments:

− alternative: a character string that specifies which alternative hypo-


thesis is tested: the default value, which can therefore be omitted, is
"two.sided", other values for one-tailed hypotheses are again "less"
or "greater"; as before, R considers the alphabetically first variable
level (i.e., here “F”) as the reference category so that the one-tailed hy-
pothesis that the values of the men are smaller than those of the women
would be tested with alternative="greater" (see the sketch after this list);
− paired=F for the t-test for independent samples (the default) or
paired=T for the t-test for dependent samples (cf. the following sec-
tion);
− var.equal=T, when the variances of the two samples are equal, or
var.equal=F if they are not; the latter is the default, which should hard-
ly be changed;
− conf.level: a value between 0 and 1, which specifies the confidence
interval of the difference between the means; the default is 0.95.
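For example, the one-tailed hypothesis mentioned above, namely that the men’s F1 values are smaller than the women’s, could be tested with a call like this (a sketch):

t.test(HZ_F1~SEX, alternative="greater")   # "F" is the reference level, so "greater" means mean(F) > mean(M)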

Thus, to do the t-test for independent samples, you can enter either va-
riant listed below. You get the following result:

> t.test(HZ_F1~SEX,—paired=F)—#—with—a—formula¶
————————Welch—Two—Sample—t-test
data:——HZ_F1—by—SEX
t—=—2.4416,—df—=—112.195,—p-value—=—0.01619
alternative—hypothesis:—true—difference—in—means—is—
not—equal—to—0
95—percent—confidence—interval:
—8.403651—80.758016
sample—estimates:
mean—in—group—F—mean—in—group—M
———————528.8548————————484.2740
> t.test(HZ_F1[SEX=="F"],—HZ_F1[SEX=="M"],—paired=F)—#—
with—vectors¶

The first two lines of the output provide the name of the test and the da-
ta to which the test was applied. Line 3 lists the test statistic t (the sign is
irrelevant because it only depends on which mean is subtracted from
which, but it must of course be considered for the manual computation), the
df-value, and the p-value. Line 4 states the alternative hypothesis tested.
Then, you get the confidence interval for the differences between means
(and our test is significant because this confidence interval does not include
0). At the bottom, you get the means you already know.
To be able to compare our results with those of other studies while at
the same time avoiding the risk that the scale of the measurements distorts
our assessment of the observed difference, we also need an effect size.
There are two possibilities. One is an effect size correlation, the correlation
between the values of the dependent variable and the values you get if the
levels of the independent variable are recoded as 0 and 1.

> SEX2<-ifelse(SEX=="M",—0,—1)¶
> cor.test(SEX2,—HZ_F1)¶

The result contains the same t-value and nearly the same p-value as be-
fore (only nearly the same because of the different df), but you now also get
a correlation coefficient, which is, however, not particularly high: 0.219.
Another widely used effect size is Cohen’s d, which is computed as in (43):
(43) Cohen’s d = 2t / √(n1 + n2)

> d<-abs(2*t.test(HZ_F1~SEX,—paired=F)$stat/
sqrt(length(SEX)))¶

By the conventional benchmarks for Cohen’s d (about 0.2 for a small, 0.5 for
a medium, and 0.8 for a large effect), the value of 0.446
reflects an only intermediately strong effect. You can sum up your results as
follows: “In the experiment, the average F1 frequency of the vowels pro-
duced by men was 484.3 Hz (95% confidence interval 461.6; 507 Hz), the
average F1 frequency of the vowels produced by the women was 528.9 Hz
(95% confidence interval: 500.2; 557.5 Hz). According to a t-test for inde-
pendent samples, the difference of 44.6 Hz between the means is statistical-
ly significant, but not particularly strong: tWelch = 2.4416; df = 112.2; ptwo-
tailed = 0.0162; Cohen’s d = 0.446.”
In Section 5.3, we will discuss the extension of this test to cases where
you have more than one independent variable and/or where the independent

variable has more than two levels.

Recommendation(s) for further study


− Crawley (2007: 289ff.), Baayen (2008: Section 4.2.2)
− an exact variant of the t-test for independent samples, which does not
make any distributional assumptions, can be programmed relatively eas-
ily in R using the function combn (from the library(combinat)) and is
available from me upon request

3.2.2. One dep. variable (ratio-scaled) and one indep. variable (nominal)
(dep. samples)

The previous section illustrated a test for means from two independent
samples. The name of that test suggests that there is a similar test for de-
pendent samples, which is what we will discuss in this section on the basis
of an example from translation studies. Let us assume you want to compare
the lengths of English and Portuguese texts and their respective translations
into Portuguese and English. Let us also assume you suspect that the trans-
lations are on average longer than the originals. This question involves

− one dependent ratio-scaled variable, namely the LENGTH of the texts,


the average of which we are interested in;
− one independent nominal/categorical variable, namely TEXTSOURCE:
ORIGINAL vs. TEXTSOURCE: TRANSLATION;
− dependent samples since there is one translation for every original text.

Performing a t-test for dependent samples requires the following steps:

Procedure
Formulating the hypotheses
Computing the relevant means; inspecting a graph
Testing the assumption(s) of the test: the differences of the paired values
are distributed normally
Computing the test statistic t, the degrees of freedom df, and the probability
of error p

As usual, you formulate the hypotheses, but note that this time the alter-
native hypothesis is directional: you suspect that the average length of the

originals is shorter than that of their translations, not just different (i.e.,
shorter or longer). Therefore, the statistical form of the alternative hypothe-
sis does not just contain a “≠”, but something more specific, “<”:

H0: The average of the pairwise differences between the lengths of the
originals and the lengths of the translations is 0;
meanpairwise differences = 0.
H1: The average of the pairwise differences between the lengths of the
originals and the lengths of the translations is smaller than 0;
meanpairwise differences < 0.

Note in particular (i) that the hypotheses do not involve the values of the
two samples but the pairwise differences between the samples and (ii) how
these differences are computed: original minus translation, not the other way
round (and hence we use “< 0”). To illustrate this test, we will look at data
from Frankenberg-Garcia (2004). She compared the lengths of eight Eng-
lish and eight Portuguese texts, which were chosen and edited such that
their lengths were approximately 1,500 words, and then determined the
lengths of their translations. You can load the data from <C:/_sflwr/
_inputfiles/04-3-2-2_textlengths.txt>:

> Texts<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Texts);—str(Texts)¶
`data.frame':———32—obs.—of——5—variables:
—$—CASE——————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—LENGTH—:—int——1501—1499—1501—1498—1499—1499—1498—1500—...
—$—TEXT————————:—int——1—2—3—4—5—6—7—8—9—...
—$—TEXTSOURCE———:—Factor—w/—2—levels—"Original","Translation"
:—1—...
—$—LANGUAGE:—Factor—w/—2—levels—"English","Portuguese":—1—1—1
—...

Note that the data are organized so that the order of the texts and their
translations is identical: case 1 is an English original (hence, TEXT is 1,
TEXTSOURCE is ORIGINAL, LANGUAGE is ENGLISH), and case 17 is its
translation (hence, TEXT is again 1, TEXTSOURCE is now TRANSLATION, and
LANGUAGE is PORTUGUESE), etc. First, you compute the means and gener-
ate a plot (note, this boxplot does not show the dependency of the samples).

> tapply(LENGTH,—TEXTSOURCE,—mean)¶
————Original——Translation
————1500.062—————1579.938

> boxplot(LENGTH~TEXTSOURCE,—notch=T,—ylim=c(0,—2000))¶
> rug(LENGTH,—side=2)¶

(Cf. the code file for alternative plots.) The median translation length is
a little higher than that of the originals. Also, the two samples have very
different dispersions because the lengths of the originals were set to ap-
proximately 1,500 words and thus exhibit very little variation while the
lengths of the translations are much more variable by comparison.
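
If you want to quantify this difference in dispersion rather than just eyeball it, you can, for instance, compute the standard deviations or interquartile ranges per group (a small sketch; the exact values are not reported here):

> tapply(LENGTH, TEXTSOURCE, sd)    # standard deviations per group
> tapply(LENGTH, TEXTSOURCE, IQR)   # interquartile ranges per group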

Figure 53. Boxplot for LENGTH~TEXTSOURCE

Unlike the t-test for independent samples, the t-test for dependent sam-
ples does not presuppose a normal distribution or variance homogeneity of
the sample values, but a normal distribution of the differences between the
pairs of sample values. You can create a vector with these differences and
apply the Shapiro-Wilk test to it:

> differences<-LENGTH[1:16]-LENGTH[17:32]¶
> shapiro.test(differences)¶
————————Shapiro-Wilk—normality—test
data:——differences
W—=—0.9569,—p-value—=—0.6057

The differences do not differ significantly from normality so you can in


fact do the t-test for dependent samples. First, you compute the t-value
according to the formula in (44), where n is the number of value pairs.

(44)     t = (x̄diff ⋅ √n) / sddiff

> t<-(abs(mean(differences))*sqrt(length(differences)))/
sd(differences);—t¶
[1]—1.927869

Second, you compute the degrees of freedom df, which is the number of
differences n minus 1:

> df<-length(differences)-1;—df¶
[1]—15

First, you can now compute the critical values for p = 0.05 – this time
not for 0.05/2 = 0.025 – at df = 15 or, in a more sophisticated way, create the
whole t-table.

> qt(c(0.05,—0.95),—15,—lower.tail=F)¶
[1]——1.753050—-1.753050

> p.values<-matrix(rep(c(0.05,—0.01,—0.001),—3),—
byrow=T,—ncol=3)¶
> df.values<-matrix(rep(14:16,—each=3),—byrow=T,—ncol=3)¶
> qt(p.values,—df.values,—lower.tail=F)¶
—————————[,1]—————[,2]—————[,3]
[1,]—1.761310—2.624494—3.787390
[2,]—1.753050—2.602480—3.732834
[3,]—1.745884—2.583487—3.686155

Second, you can look up your t-value in such a t-table, repeated here
as Table 36. Since such tables usually only list the positive values, you use
the absolute value of your t-value. As you can see, the difference between
the originals and their translations is significant, but not very or highly
significant: 1.927869 > 1.7531, but 1.927869 < 2.6025.

Table 36. Critical t-values for pone-tailed = 0.05, 0.01, and 0.001 (for 14 ≤ df ≤ 16)
p = 0.05 p = 0.01 p = 0.001
df = 14 1.7613 2.6245 3.7874
df = 15 1.7531 2.6025 3.7328
df = 16 1.7459 2.5835 3.6862
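
Instead of looking the value up in Table 36, you can of course also let R do the comparison (a small sketch using the objects t and df created above):

> abs(t) > qt(c(0.05, 0.01, 0.001), df, lower.tail=F)   # compare t with the critical values
[1]  TRUE FALSE FALSE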

Alternatively, you can compute the exact p-value. Since you have a di-
rectional alternative hypothesis, you only need to cut off 5% of the area

under the curve on one side of the distribution. The t-value following from
the null hypothesis is 0 and the t-value you computed is approximately
-1.93 so you must compute the area under the curve from 1.93 to +∞; cf.
Figure 54. Since you are doing a one-tailed test, you need not multiply the
p-value with 2 as you did above in Sections 4.2.2, 4.3.1.1, and 4.3.2.1.

> pt(t,—15,—lower.tail=F)¶
[1]—0.03651145

Figure 54. Density function for a t-distribution for df = 15, one-tailed test

Note that this also means that the difference is only significant because
you did a one-tailed test – because of the multiplication with 2, a two-tailed
test would not have yielded a significant result (p = 0.07302292).
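
You can verify this two-tailed p-value with the same pt call, simply multiplied by 2:

> 2*pt(t, 15, lower.tail=F)   # two-tailed p-value
[1] 0.07302292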
Now the same test with R. Since you already know the arguments of the
function t.test, we can focus on the only major differences to before, the
facts that you now have a directional alternative hypothesis and need to do
a one-tailed test and that you now do a paired test. To do that properly, you
must first understand how R computes the difference. As mentioned before
above, R proceeds alphabetically and computes the difference ‘alphabeti-
cally first level minus alphabetically second level’ (which is why the alter-
native hypothesis was formulated this way above). Since “Original” comes
before “Translation” and we saw that the mean of the former is smaller

than that of the latter, the difference is smaller than 0. You therefore tell R
that the difference is “less” than zero.
Of course you can use the formula or the vector-based notation. I show
the output of the formula notation, where the setting of alternative per-
tains, as usual, to the first named vector. Both ways result in the same out-
put. You get the t-value (which is negative here, because R subtracts the
other way round), the df-value, a p-value, and a confidence interval which,
since it does not include 0, also reflects the significant result.

> t.test(LENGTH~TEXTSOURCE,—paired=T,—alternative="less")¶
————————Paired—t-test
data:——LENGTH—by—TEXTSOURCE
t—=—-1.9279,—df—=—15,—p-value—=—0.03651
alternative—hypothesis:—true—difference—in—means—is—less—than—0
95—percent—confidence—interval:
——————-Inf—-7.243041
sample—estimates:
mean—of—the—differences
————————————————-79.875
> t.test(LENGTH[TEXTSOURCE=="Original"],—LENGTH[TEXTSOURCE==
"Translation"],—paired=T,—alternative="less")¶

Finally, let us compute an effect size. The formula for Cohen’s d for this
t-test is represented in (45):

(45)     Cohen's d = t ⋅ √( (2 ⋅ (1 − rgroup1, group2)) / npairs )

> d<-abs(t.test(LENGTH~TEXTSOURCE,—paired=T,—alternative=
"less")$stat*sqrt((2*(1-cor(LENGTH[TEXTSOURCE==
"Original"],—LENGTH[TEXTSOURCE=="Translation"])))/
(length(LENGTH)/2)))¶

Again, you get only an intermediately high value of 0.405. To sum up:
“On average, the originals are approximately 80 words shorter than their
translations (the 95% confidence interval of this difference is -Inf, -7.24).
According to a t-test for dependent samples, this difference is significant: t
= -1.93; df = 15; pone-tailed = 0.0365. However, the effect is relatively small:
the difference of 80 words corresponds to only about 5% of the length of
the texts; Cohen’s d = 0.405.”
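
The “about 5%” mentioned in this summary can be derived directly from the values computed above (a small sketch; the mean difference of 79.875 words is divided by the approximate original length of 1,500 words):

> abs(mean(differences))/1500   # ≈ 0.053, i.e., about 5% of the text length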

Recommendation(s) for further study


− Crawley (2007: 298ff.), Baayen (2008: Section 4.3.1)
− an exact variant of the t-test for dependent samples, which does not
make any distributional assumptions, can be programmed relatively eas-
ily in R and is available from me upon request

3.2.3. One dep. variable (ordinal) and one indep. variable (nominal)
(indep. samples)

In this section, we discuss a non-parametric test for two independent sam-


ples of ordinal data, the U-test. Since I mentioned at the beginning of Sec-
tion 4.3.2 that the U-test is not only used when the samples to be compared
consist of ordinal data, but also when they violate distributional assump-
tions, this section will again involve an example where only a test of these
distributional assumptions allows you to decide which test to use.
In Section 4.3.1.2 above, you looked at the similarities of source words
entering into subtractive word formations and you tested whether these
similarities were on average different from the known average similarity of
random words to each other. The data you used were of the kind studied in
Gries (2006) but in the above example no distinction was made between
source words entering into different kinds of subtractive word formations.
This is what we will do here by comparing similarities of source words
entering into blends to similarities of complex clippings. If both kinds of
word-formation processes differed according to this parameter, this would
provide empirical motivation for distinguishing these processes in the first
place. This example, thus, involves

− one dependent ratio-scaled variable, namely the SIMILARITY of the


source words whose average you are interested in;
− one independent nominal variable, namely PROCESS: BLEND vs.
PROCESS: COMPLCLIP;
− independent samples since the Dice coefficient of any one pair of source
words has nothing to do with any one other pair of source words.

This kind of question would typically be investigated with the t-test for
independent samples we discussed above. According to the above proce-
dure, you first formulate the hypotheses (non-directionally since we may
have no a priori reason to assume a particular difference):

H0: The mean of the Dice coefficients of the source words of blends is
as large as the mean of the Dice coefficients of the source words of
complex clippings; meanDice coefficients of blends = meanDice coefficients of complex clippings,
or meanDice coefficients of blends - meanDice coefficients of complex clippings = 0.
H1: The mean of the Dice coefficients of the source words of blends is
not as large as the mean of the Dice coefficients of the source
words of complex clippings; meanDice coefficients of blends ≠ meanDice coefficients of complex clippings,
or meanDice coefficients of blends - meanDice coefficients of complex clippings ≠ 0.

You can load the data from the file <C:/_sflwr/_inputfiles/04-3-2-


3_dices.txt>. As before, this file contains the Dice coefficients, but now
also in an additional column the word formation process for each Dice
coefficient.

> Dices<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Dices);—str(Dices)¶
'data.frame':———100—obs.—of——3—variables:
—$—CASE———:—int——1—2—3—4—5—6—7—8—9—10—...
—$—PROCESS:—Factor—w/—2—levels—"Blend","ComplClip":—2—2—2—2—2
—2——...
—$—DICE:—num——0.19—0.062—0.06—0.064—0.101—0.147—0.062—0.184—.
..

As usual, you should begin by exploring the data graphically:

> boxplot(DICE~PROCESS,—notch=T,—ylim=c(0,—1))¶
> rug(jitter(DICE[PROCESS=="Blend"]),—side=2)¶
> rug(jitter(DICE[PROCESS=="ComplClip"]),—side=4)¶
> text(1:2,—tapply(DICE,—PROCESS,—mean),—"+")¶

As usual, this graph already gives away enough information to nearly


obviate the need for statistical analysis. The probably most obvious aspect
is the difference between the two medians, but since the data are ratio-
scaled you also need to explore the means. These are already plotted into
the graph and here is the usual line of code to compute them directly; note
how much the central tendency of the complex clippings differs from that
of the blends.

Figure 55. Boxplot for SIMILARITY~PROCESS

> tapply(DICE,—PROCESS,—mean)¶
————Blend—ComplClip
——0.22996———0.12152

In order to test whether the t-test for independent samples can be used
here, we need to test both of its assumptions, normality in the groups and
variance homogeneity. Since the F-test for homogeneity of variances pre-
supposes normality, you begin by testing whether the data are normally
distributed. As a first step, you generate histograms for both samples. The
argument main="" suppresses an otherwise very wide headline and, more
importantly, the arguments xlim=c(0,—0.5) and ylim=c(0,—15) force R
to plot the histograms into identical coordinate systems so that we cannot
be misled by automatically chosen ranges of plots; cf. Figure 56. You can
immediately see that the data are not normally distributed, which is sup-
ported by the Shapiro-Wilk test.

> par(mfrow=c(1,—2))¶
> hist(DICE[PROCESS=="Blend"],—main="",—xlab="Blends",—
ylab="Frequency",—xlim=c(0,—0.5),—ylim=c(0,15))¶
> hist(DICE[PROCESS=="ComplClip"],—main="",—xlab="Complex—
clippings",—ylab="Frequency",—xlim=c(0,—0.5),—
ylim=c(0,15))¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

Given these violations of normality, you can actually not do the regular
F-test to test the second assumption of the t-test for independent samples.
You therefore do the Fligner-Killeen test of homogeneity of variances,
which does not require the data to be normally distributed and which I
mentioned in Section 4.2.2 above.

Figure 56. Histograms of Dice coefficients for both word-formation processes

> tapply(DICE,—PROCESS,—shapiro.test)¶
$Blend
————————Shapiro-Wilk—normality—test
data:——X[[1L]]
W—=—0.9455,—p-value—=—0.02231
$ComplClip
————————Shapiro-Wilk—normality—test
data:——X[[2L]]
W—=—0.943,—p-value—=—0.01771

> fligner.test(DICE~PROCESS)¶
————————Fligner-Killeen—test—of—homogeneity—of—variances
data:——DICE—by—PROCESS
Fligner-Killeen:med—chi-squared—=—3e-04,—df—=—1,—p-value—=—0.9863

The variances are homogeneous, but normality is still violated. It fol-


lows that even though the data are ratio-scaled and even though the sample
sizes are larger than 30, it is probably safer to compute a test that does not
make these assumptions, the U-test.

Procedure
Formulating the hypotheses
Computing the observed medians, inspecting a graph
Testing the assumption(s) of the test:
the values are independent of each other
the populations from which the values were sampled are identically

distributed30
Computing the test statistics U and z as well as the probability of error p

While the two histograms do not seem to be from samples that are iden-
tically distributed, they are at least a bit similar, the variances of the two
groups are not significantly different, and the U-test is relatively robust so
we use it here. Since the U-test assumes only ordinal data, you now com-
pute medians, not just means. You therefore adjust your hypotheses:

H0: The median of the Dice coefficients of the source words of blends
is as large as the median of the Dice coefficients of the source
words of complex clippings; medianDice coefficients of blends = medianDice coefficients of complex clippings,
or medianDice coefficients of blends - medianDice coefficients of complex clippings = 0.
H1: The median of the Dice coefficients of the source words of blends
is not as large as the median of the Dice coefficients of the source
words of complex clippings; medianDice coefficients of blends ≠ medianDice coefficients of complex clippings,
or medianDice coefficients of blends - medianDice coefficients of complex clippings ≠ 0.

Correspondingly, you compute the medians and interquartile ranges:

> tapply(DICE,—PROCESS,—median)¶
————Blend—ComplClip
————0.2300—————0.1195
> tapply(DICE,—PROCESS,—IQR)¶
————Blend—ComplClip
———0.0675————0.0675

Here, the assumptions can be tested fairly unproblematically: The val-


ues are independent of each other since no word-formation influences
another one and the distributions of the data in Figures 55 and 56 appear to
be rather similar.
Unfortunately, computing the U-test is somewhat more cumbersome
than many other tests. First, you transform all Dice coefficients into ranks,
and then you compute the sum of all ranks for each word-formation
process. In R:

> Ts<-tapply(rank(DICE),—PROCESS,—sum)¶

30. According to Bortz, Lienert, and Boehnke (1990:211), the U-test can discover differ-
ences of measures of central tendency well even if this assumption is violated.

Then, both of these T-values and the two sample sizes are inserted into
the formulae in (46) and (47) to compute two U-values, the smaller one of
which is the required test statistic:

(46)     U1 = n1 ⋅ n2 + (n1 ⋅ (n1 + 1)) / 2 − T1

(47)     U2 = n1 ⋅ n2 + (n2 ⋅ (n2 + 1)) / 2 − T2

> n1<-length(DICE[PROCESS=="Blend"])¶
> n2<-length(DICE[PROCESS=="ComplClip"])¶
> U1<-n1*n2+((n1*(n1+1))/2)-Ts[1]¶
> U2<-n1*n2+((n2*(n2+1))/2)-Ts[2]¶
> U<-min(U1,—U2)¶

The resulting U-value, 84, can be looked up in a U-table or, because


there are few U-tables for larger samples,31 converted into a normally-
distributed z-score. This z-score is computed in several steps. First, you use
the formulae in (48) and (49) to compute an expected U-value as well as its
dispersion.

(48)     Uexpected = 0.5 ⋅ n1 ⋅ n2

(49)     Dispersion Uexpected = √( (n1 ⋅ n2 ⋅ (n1 + n2 + 1)) / 12 )

Second, you insert these values together with the observed U-value into
the formula in (50).

(50)     z = (U − Uexpected) / Dispersion Uexpected

> expU<-n1*n2/2¶
> dispersion.expU<-sqrt(n1*n2*(n1+n2+1)/12)¶
> z<-abs((U-expU)/dispersion.expU)¶

To decide whether the null hypothesis can be rejected, you look up this

31. Bortz, Lienert and Boehnke (1990:202 and Table 6) provide critical U-values for n ≤ 20
and mention references for tables with critical values for n ≤ 40 – I at least know of no
U-tables for larger samples.

value, 8.038194, in a z-table such as Table 37 or you compute a critical
z-score for ptwo-tailed = 0.05 with qnorm (as was mentioned in Section 1.3.4.2
above). Since you have a non-directional alternative hypothesis, you apply
the same logic as above and compute z-scores for half of the ptwo-tailed-values
you are interested in:

> qnorm(c(0.0005,—0.005,—0.025,—0.975,—0.995,—0.9995),—
lower.tail=F)¶
[1]——3.290527——2.575829——1.959964—-1.959964—-2.575829—-3.290527

Table 37. Critical z-scores for ptwo-tailed = 0.05, 0.01, and 0.001
z-score p-value
1.96 0.05
2.575 0.01
3.291 0.001

It is obvious that the observed z-score is not only much larger than the
one tabulated for ptwo-tailed = 0.001 but also very distinctly in the grey-
shaded area in Figure 57: the difference between the medians is highly
significant, as the non-overlapping notches already anticipated. Obviously,
you can now also compute the exact p-value with the usual ‘mirror function’ of
qnorm:

> pnorm(z,—lower.tail=F)¶
[1]—4.558611e-16

In R, you compute the U-test with the same function as the Wilcoxon
test, wilcox.test, and again you can either use a formula or two vectors.
Apart from these arguments, the following ones are useful, too:

− alternative: a character string specifying which alternative hypothesis


you want to test: the default is "two.sided", other possible values for
one-tailed tests are again "less" or "greater", which specify how the
first-named vector or factor level relates to the second;
− paired=F for the U-Test for independent samples or paired=T for the
Wilcoxon test for dependent samples (cf. the following section);

Figure 57. Density function of the standard normal distribution; two-tailed test

− exact=T, if you want to compute an exact test, or exact=F if you don’t


(if you don’t change exact’s default setting of NULL and your data set
has fewer than 50 data points and no ties, an exact p-value is computed
automatically);
− correct=T for a continuity correction (the default) and correct=F for
none;
− conf.level: a value between 0 and 1 specifying the size of the confi-
dence interval; the default is 0.95.
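
For instance, if you had hypothesized beforehand that the source words of blends are more similar to each other than those of complex clippings, a one-tailed version of the test might look like this (only a sketch; recall that alternative refers to the alphabetically first factor level, here "Blend") – the analysis below, however, uses the default two-tailed setting:

> wilcox.test(DICE~PROCESS, paired=F, alternative="greater")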

The standard version to be used here is this:

> wilcox.test(DICE~PROCESS,—paired=F)¶
————————Wilcoxon—rank—sum—test—with—continuity—correction
data:——DICE—by—PROCESS
W—=—2416,—p-value—=—8.882e-16
alternative—hypothesis:—true—location—shift—is—not—equal—to—0

You get a U-value (here referred to as W) and a p-value; W is not the


minimum of U1 and U2, but the maximum here; which of the two values you
get depends on which vector or factor level comes first in the alphabet. The p-
value here is a bit different from yours since R uses a slightly different
algorithm and the continuity correction. I am not aware of a widely used
effect size for median differences other than the observed difference itself,
so you can now sum up: “According to a U-test, the median Dice coeffi-
cient of the source words of blends (0.23, IQR = 0.0675) and the median of

the Dice coefficients for complex clippings (0.1195, IQR = 0.0675) are
very significantly different: U = 84 (or W = 2416), ptwo-tailed < 0.0001. The
creators of blends appear to be more concerned with selecting source words
that are similar to each other than the creators of complex clippings.”

Recommendation(s) for further study:


Dalgaard (2002: 89f.), Crawley (2007: 297f.), Baayen (2008: Section 4.3.1)
and recall the above comments regarding wilcox.exact

3.2.4. One dep. variable (ordinal) and one indep. variable (nominal)
(dep. samples)

Just like the U-test, the test in this section has two major applications. First,
you really may have two dependent samples of ordinal data such as when
you have a group of subjects perform two rating tasks to test whether each
subject’s first rating differs from the second. Second, the probably more
frequent application arises when you have two dependent samples of ratio-
scaled data but cannot do the t-test for dependent samples because its dis-
tributional assumptions are not met. We will discuss an example of the
latter kind in this section.
In a replication of Bencini and Goldberg, Gries and Wulff (2005) stu-
died the question which verbs or sentence structures are more relevant for
how German foreign language learners of English categorize sentences.
They crossed four syntactic constructions and four verbs to get 16 sen-
tences, each verb in each construction. Each sentence was printed onto a
card and 20 advanced German learners of English were given the cards and
asked to sort them into four piles of four cards each. The question was
whether the subjects’ sortings would be based on the verbs or the construc-
tions. To determine the sorting preferences, each subject’s four stacks were
inspected with regard to how many cards one would minimally have to
move to create either four completely verb-based or four completely con-
struction-based sortings. The investigation of this question involves

− one dependent ratio-scaled variable, namely SHIFTS, the number of


times a card had to be shifted from one stack to another to create the
perfectly clean sortings, and we are interested in the average of these
numbers;
− one independent nominal variable, namely CRITERION: CONSTRUCTION
vs. CRITERION: VERB;

− dependent samples since each subject ‘generated’ two numbers of


changes, one to create the verb-based sorting, one to create the construc-
tion-based sorting.

To test some such result for significance, you should first consider a t-
test for dependent samples since you have two samples of ratio-scaled val-
ues. As usual, you begin by formulating the relevant hypotheses:

H0: The average of the pairwise differences between the numbers of


rearrangements towards perfectly verb-based stacks and the num-
bers of rearrangements towards perfectly construction-based stacks
is 0; meanpairwise differences = 0.
H1: The average of the pairwise differences between the numbers of
rearrangements towards perfectly verb-based stacks and the num-
bers of rearrangements towards perfectly construction-based stacks
is not 0; meanpairwise differences ≠ 0.

Then, you load the data that Gries and Wulff (2005) obtained in their
experiment from the file <C:/_sflwr/_inputfiles/04-3-2-4_sortingstyles.
txt>:

> SortingStyles<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(SortingStyles)¶
> head(SortingStyles,—3)¶
——CASE—SHIFTS————CRITERION
1————1——————0—Construction
2————2——————0—Construction
3————3——————4—Construction

As usual, you compute the means and generate a graph of the results.

> tapply(SHIFTS,—CRITERION,—mean)¶
Construction—————————Verb
————————3.45—————————8.85
> boxplot(SHIFTS~CRITERION,—notch=T)¶
> rug(jitter(SHIFTS[CRITERION=="Construction"]),—side=2)¶
> rug(jitter(SHIFTS[CRITERION=="Verb"]),—side=4)¶

(Note that the boxplot does not represent the ‘pairwise-ness’ of the dif-
ferences.) Both medians and notches indicate that the average numbers of
card rearrangements are very different. You then test the assumption of the
t-test for dependent samples, the normality of the pairwise differences:

Figure 58. Boxplot for SHIFTS~CRITERION

> differences<-SHIFTS[CRITERION=="Construction"]-
SHIFTS[CRITERION!="Construction"]¶
> shapiro.test(differences)¶
————————Shapiro-Wilk—normality—test
data:——differences
W—=—0.7825,—p-value—=—0.0004797

The distribution of the differences deviates highly significantly from


normality: you cannot use the t-test. Instead, you compute a test for two
dependent samples of ordinal variables, the Wilcoxon test.

As a first step, you adjust your hypotheses:

H0: medianpairwise differences = 0


H1: medianpairwise differences ≠ 0

Procedure
Formulating the hypotheses
Computing the observed medians, inspecting a graph
Testing the assumption(s) of the test:
the pairs of values are independent of each other
the populations from which the samples were obtained are
distributed identically
Computing the test statistic T and the probability of error p

Since the data are now analyzed on an ordinal level of measurement,


you compute the medians and their interquartile ranges:

> tapply(SHIFTS,—CRITERION,—median)¶
Construction—————————Verb
———————————1———————————11
> tapply(SHIFTS,—CRITERION,—IQR)¶
Construction—————————Verb
————————6.25—————————6.25

These are the medians that you could already infer from the above box-
plot. The assumptions appear to be met because the pairs of values are in-
dependent of each other (since the sorting of any one subject does not af-
fect any other subject’s sorting) and, somewhat informally, there is little
reason to assume that the populations are distributed differently especially
since most of the values to achieve a perfect verb-based sorting are the
exact reverse of the values to get a perfect construction-based sorting.
Thus, you compute the Wilcoxon test; for reasons of space we only consid-
er the standard variant. First, you transform the vector of pairwise differ-
ences, which you already computed for the Shapiro-Wilk test, into ranks:

> ranks<-rank(abs(differences))¶

Second, all ranks whose difference was negative are summed to a value
T-, and all ranks whose difference was positive are summed to T+; the
smaller of the two values is the required test statistic T:32

> T.minus<-sum(ranks[differences<0])¶
> T.plus<-sum(ranks[differences>0])¶
> T<-min(T.minus,—T.plus)¶

This T-value of 41.5 can be looked up in a T-table, but note that here,
for a significant result, the observed test statistic must be smaller than the
tabulated one. The observed T-value of 41.5 is smaller than the one tabu-
lated for n = 20 and p = 0.05 (but larger than the one tabulated for n = 20
and p = 0.01): the result is significant.

32. The way of computation discussed here is the one described in Bortz (2005). It disre-
gards ties and cases where the differences are zero.

Table 38. Critical T-values for ptwo-tailed = 0.05, 0.01, and 0.001 (for 19 ≤ n ≤ 21)
p = 0.05 p = 0.01 p = 0.001
n = 19 46 32 18
n = 20 52 37 21
n = 21 58 42 25

Let us now do this test with R: You already know the function for the
Wilcoxon test so we need not discuss it again in detail. The relevant differ-
ence is that you now instruct R to treat the samples as dependent/paired. As
nearly always, you can use the vector-based function call or the formula.

> wilcox.test(SHIFTS[CRITERION=="Verb"],—SHIFTS[CRITERION==
"Construction"],—paired=T,—exact=F)¶
> wilcox.test(SHIFTS~CRITERION,—paired=T,—exact=F)¶
————————Wilcoxon—signed—rank—test—with—continuity—correction
data:——SHIFTS—by—CRITERION
V—=—36.5,—p-value—=—0.01616
alternative—hypothesis:—true—location—shift—is—not—equal—to—0

R computes the test statistic differently but arrives at the same kind of
decision: the result is significant, but not very significant.
To sum up: “On the whole, the 20 subjects exhibited a strong preference
for a construction-based sorting style: the median number of card rear-
rangements to arrive at a perfectly construction-based sorting was 1 while
the median number of card rearrangements to arrive at a perfectly verb-
based sorting was 11 (both IQRs = 6.25). According to a Wilcoxon test,
this difference is significant: V = 36.5, ptwo-tailed = 0.0162. In this experi-
ment, the syntactic patterns were a more salient characteristic than the
verbs (when it comes to what triggered the sorting preferences).”

Recommendation(s) for further study:


recall wilcox.exact

4. Coefficients of correlation and linear regression

In this section, we discuss the significance tests for the coefficients of cor-
relation discussed in Section 3.2.3.

4.1. The significance of the product-moment correlation

While the manual computation of the product-moment correlation above


was a bit complex, its significance test is not. It involves these steps:

Procedure
Formulating the hypotheses
Computing the observed correlation; inspecting a graph
Testing the assumption(s) of the test: the population from which the sample
was drawn is bivariately normally distributed. Since this criterion
can be hard to test (cf. Bortz 2005: 213f.), we simply require both
samples to be distributed normally
Computing the test statistic t, the degrees of freedom df, and the probability
of error p

Let us return to the example in Section 3.2.3, where you computed a


correlation coefficient of 0.9337 for the correlation of the lengths of 20
words and their reaction times. You formulate the hypotheses and we as-
sume for now your alternative hypothesis is non-directional.

H0: The length of a word in letters does not correlate with the word’s
reaction time in a lexical decision task; r = 0.
H1: The length of a word in letters correlates with the word’s reaction
time in a lexical decision task; r ≠ 0.

You load the already familiar data from <C:/_sflwr/_inputfiles/03-2-


3_reactiontimes.txt>:

> ReactTime<-read.table(choose.files(),—header=T,—sep="\t")¶
> attach(ReactTime);—str(ReactTime)¶
'data.frame':———20—obs.—of——3—variables:
—$—CASE——————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—LENGTH————:—int——14—12—11—12—5—9—8—11—9—11—...
—$—MS_LEARNER:—int——233—213—221—206—123—176—195—207—172—...

Since we already generated a scatterplot above (cf. Figure 36 and Figure


37), we do not do that again. We do, however, have to test the assumption
of normality of both vectors. You can either proceed in a stepwise fashion
and enter shapiro.test(LENGTH)¶ and shapiro.test(MS_LEARNER)¶ or
use a shorter variant:

> apply(ReactTime[,2:3],—2,—shapiro.test)¶
$LENGTH
————————Shapiro-Wilk—normality—test
data:——newX[,—i]
W—=—0.9748,—p-value—=—0.8502
$MS_LEARNER
————————Shapiro-Wilk—normality—test
data:——newX[,—i]
W—=—0.9577,—p-value—=—0.4991

This line of code means ‘take the data mentioned in the first argument
of apply (the second and third column of the data frame ReactTime), look
at them column by column (the 2 in the second argument slot – a 1 would
mean look at them row-wise; recall this notation from prop.table in Sec-
tion 3.2.1), and apply the function shapiro.test to each column’. Clearly,
both variables do not differ significantly from a normal distribution.
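
If the row-wise vs. column-wise logic of apply’s second argument is still unfamiliar, a tiny toy example may help (the matrix m is made up only for this illustration):

> m<-matrix(1:6, nrow=2)   # a 2-by-3 matrix
> apply(m, 1, sum)         # MARGIN=1: apply sum to each row -> 9 12
> apply(m, 2, sum)         # MARGIN=2: apply sum to each column -> 3 7 11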
To compute the test statistic t, you insert the correlation coefficient r
and the number of correlated value pairs n into the formula in (51):

(51)     t = (r ⋅ √(n − 2)) / √(1 − r²)

> r<-cor(LENGTH,—MS_LEARNER,—method="pearson")¶
> numerator<-r*sqrt(length(LENGTH)-2)¶
> denominator<-sqrt(1-r^2)¶
> t<-numerator/denominator¶

This t-value, 11.06507, has df = n-2 = 18 degrees of freedom.

> df<-length(LENGTH)-2¶

Just as with the t-tests before, you can now look this t-value up in a t-
table, or you can compute a critical value: if the observed t-value is higher
than the tabulated/critical one, then r is significantly different from 0. Since
your t-value is much larger than even the one for p = 0.001, the correlation
is highly significant.

> qt(c(0.025,—0.975),—18,—lower.tail=F)—#—division—by—2!¶
[1]——2.100922—-2.100922

The exact p-value can be computed as follows, and do not forget to


again double the p-value.

> 2*pt(t,—18,—lower.tail=F)—#—multiplication—with—2!¶
[1]—1.841060e-09

Table 39. Critical t-values for ptwo-tailed = 0.05, 0.01, and 0.001 for
17 ≤ df ≤ 19
p = 0.05 p = 0.01 p = 0.001
df = 17 2.1098 2.8982 3.9561
df = 18 2.1009 2.8784 3.9216
df = 19 2.093 2.8609 3.8834

This p-value is obviously much smaller than 0.001. However, you will
already suspect that there is an easier way to get all this done. Instead of the
function cor, which we used in Section 3.2.3 above, you simply use
cor.test with the two vectors whose correlation you are interested in (and, if
you have a directional alternative hypothesis, you specify whether you
expect the correlation to be less than 0 (i.e., negative) or greater than 0 (i.e.,
positive) using alternative=…):

> cor.test(LENGTH,—MS_LEARNER,—method="pearson")¶
————————Pearson's—product-moment—correlation
data:——LENGTH—and—MS_LEARNER
t—=—11.0651,—df—=—18,—p-value—=—1.841e-09
alternative—hypothesis:—true—correlation—is—not—equal—to—0
95—percent—confidence—interval:
—0.8370608—0.9738525
sample—estimates:
——————cor
0.9337171

You can also look at the results of the corresponding linear regression:

> model<-lm(MS_LEARNER~LENGTH)¶
> summary(model)¶
Call:
lm(formula—=—MS_LEARNER—~—LENGTH)
Residuals:
—————Min———————1Q———Median———————3Q——————Max
-22.1368——-7.8109———0.8413———7.9499——18.9501
Coefficients:
————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)——93.6149—————9.9169————9.44—2.15e-08—***
LENGTH———————10.3044—————0.9313———11.06—1.84e-09—***
---
Signif.—codes:——0—'***'—0.001—'**'—0.01—'*'—0.05—
'.'—0.1—'—'—1
Residual—standard—error:—11.26—on—18—degrees—of—freedom

Multiple—R-Squared:—0.8718,—————Adjusted—R-squared:—0.8647
F-statistic:—122.4—on—1—and—18—DF,——p-value:—1.841e-09

We begin at the bottom: the last row contains information we already


know. The F-value is our t-value squared; we find the 18 degrees of free-
dom and the p-value we computed. In the line above that you find the coef-
ficient of determination you know plus an adjusted version we will only
talk about later (cf. Section 5.2). We ignore the line about the residual stan-
dard error (for now) and the legend for the p-values. The table above that
shows the intercept and the slope we computed in Section 3.2.3 (in the
column labeled “Estimate”), their standard errors, t-values – do you recog-
nize the t-value from above? – and p-values. The p-value for LENGTH says
whether the slope of the regression line is significantly different from 0; the
p-value for the intercept says whether the intercept of 93.6149 is signifi-
cantly different from 0. We skip the info on the residuals because we dis-
cussed above how you can investigate those yourself (with
residuals(model)¶).
To sum up: “The lengths of the words in letters and the reaction times in
the experiment correlate highly positively with each other: r = 0.9337; r2 =
0.8718. This correlation is highly significant: t = 11.07; df = 18; p < 0.001.
A linear regression shows that every additional letter increases the reaction
time by approximately 10.3 ms.”
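
For illustration, you can also use the fitted model to predict the reaction time for a word of a given length; the length of 10 letters below is just an arbitrary example:

> predict(model, newdata=data.frame(LENGTH=10))   # ≈ 93.6 + 10*10.3 ≈ 196.7 ms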
In Section 5.2, we deal with the extensions of linear regression to cases
where we include more than one independent variable, and we will also
discuss more comprehensive tests of the regression’s assumptions (using
plot(model)¶).

4.2. The significance of Kendall’s Tau

If you need a p-value for Kendall’s tau τ, you use the following proce-
dure:

Procedure
Formulating the hypotheses
Computing the observed correlation; inspecting a graph
Testing the assumption(s) of the test: the data are at least ordinal
Computing the test statistic z and the probability of error p

Again, we simply use the example from Section 3.2.3 above (even
though we know we can actually use the product-moment correlation; we
use this example again just for simplicity’s sake). How to formulate the
hypotheses should be obvious by now:

H0: The length of a word in letters does not correlate with the word’s
reaction time in a lexical decision task; τ = 0.
H1: The length of a word in letters correlates with the word’s reaction
time in a lexical decision task; τ ≠ 0.

As for the assumption: we already know the data are ordinal – after all,
we know they are even interval/ratio-scaled. You load the data again from
<C:/_sflwr/_inputfiles/03-2-3_reactiontimes.txt> and compute Kendall’s τ:

> ReactTime<-read.table(choose.files(),—header=T,—sep="\t")¶
> attach(ReactTime)¶
> tau<-cor(LENGTH,—MS_LEARNER,—method="kendall")—#—0.8189904¶

To test Kendall’s tau τ for significance, you compute a z-score of the


kind that is by now familiar. You insert τ and the number of value pairs n
into the formula in (52).

(52)     z = τ ÷ √( (2 ⋅ (2 ⋅ n + 5)) / (9 ⋅ n ⋅ (n − 1)) )

In R:

> numerator.root<-2*(2*length(LENGTH)+5)¶
> denominator.root<-9*length(LENGTH)*(length(LENGTH)-1)¶
> z<-abs(tau)/sqrt(numerator.root/denominator.root);—z¶
[1]—5.048596

This value can then be looked up in a z-table such as Table 40, or you can
generate the critical values yourself. The z-score for a significant two-tailed
test must cut
off at least 2.5% of the area under the standard normal distribution:

> qnorm(c(0.0005,—0.005,—0.025,—0.975,—0.995,—0.9995),—
lower.tail=T)¶
[1]—-3.290527—-2.575829—-1.959964——1.959964——2.575829——
3.290527

Table 40. Critical z-scores for ptwo-tailed = 0.05, 0.01, and 0.001
z-score p
1.96 0.05
2.576 0.01
3.291 0.001

For a result to be significant, the z-score must be larger than 1.96. Since
the observed z-score is actually larger than 5, this result is highly signifi-
cant:

> 2*pnorm(z,—lower.tail=F)¶
[1]—4.450685e-07

The function to get this result much faster is again cor.test. Since R
uses a slightly different method of calculation, you get a slightly different
z-score and p-value, but the results are for all intents and purposes identic-
al.

> cor.test(LENGTH,—MS_LEARNER,—method="kendall")¶
————————Kendall's—rank—correlation—tau
data:——LENGTH—and—MS_LEARNER
z—=—4.8836,—p-value—=—1.042e-06
alternative—hypothesis:—true—tau—is—not—equal—to—0
sample—estimates:
——————tau
0.8189904
Warning—message:
In—cor.test.default(LENGTH,—MS_LEARNER,—method—=—"kendall")—:
——Cannot—compute—exact—p-value—with—ties

(The warning refers to ties such as the fact that the length value 11 oc-
curs more than once). To sum up: “The lengths of the words in letters and
the reaction times in the experiment correlate highly positively with each
other: τ = 0.819, z = 5.05; p < 0.001.”

4.3. Correlation and causality

Especially in the area of correlations, but also more generally, you need to
bear in mind a few things even if the null hypothesis is rejected: First, one
can often hear a person A making a statement about a correlation (maybe
even a significant one) by saying “The more X, the more Y” and then hear
a person B objecting on the grounds that B knows of an exception. This
argument is flawed. The exception quoted by B would only invalidate A’s

statement if A considered the correlation to be perfect (r = 1 or r = -1) – but


if A did not mean that (and A never does!), then there may be a strong and
significant correlation although there is (at least) one exception. The excep-
tion or exceptions are the reason why the correlation is not 1 or -1 but ‘on-
ly’, say, 0.9337. Second, a correlation as such does not necessarily imply
causality. As is sometimes said, a correlation between X and Y is a neces-
sary condition for a causal relation between X and Y, but not a sufficient
one, as you can see from many examples:

− There is a positive correlation between the number of firefighters trying


to extinguish a fire and the amount of damage that is caused at the site
where the fire was fought. This does of course not mean that the fire-
fighters arrive at the site and destroy as much as they can – the correla-
tion results from a third variable, the size of the fire: the larger the fire,
the more firefighters are called to help extinguish it and the more dam-
age the fire causes.
− There is a negative correlation between the amount of hair men have
and their income which is unfortunately only due to the effect of a third
variable: the men’s age.
− There is a positive correlation such that the more likely a drug addict
was to go to therapy to get off his addiction, the more likely he was to
die. This is not because the therapy leads to death – the variable in the
background correlated with both is the severity of the addiction: the
more addicted addicts were, the more likely they were to go to therapy,
but also the more likely they were to die.

Thus, beware of jumping to conclusions …


Now you should do the exercise(s) for Chapter 4 …

Recommendation(s) for further study


− the functions ckappa and lkappa (from the library(psy)) to compute
the kappa coefficient and test how well two or more raters conform in
their judgments of stimuli
− the function cronbach (from the library(psy)) to compute Cron-
bach’s alpha and test how consistently several variables measure a con-
struct the variables are supposed to reflect
− Crawley (2007: Ch. 10), Baayen (2008: Section 4.3.2), Johnson (2008:
Section 2.4)
Chapter 5
Selected multifactorial methods

So far we have only been concerned with monofactorial methods, i.e., me-
thods in which we investigated how maximally one independent variable is
correlated with the behavior of one dependent variable. In many cases,
proceeding like this is the beginning of the empirical quantitative study of a
phenomenon. Nevertheless, such a view on phenomena is usually a simpli-
fication: we live in a multifactorial world in which probably no phenome-
non is really monofactorial – probably just about everything is influenced
by several things at the same time. This is especially true for language, one
of the most complex phenomena resulting from human evolution. In this
section, we will therefore discuss several multifactorial techniques, which
can handle this kind of complexity better than the monofactorial methods
discussed so far. You should know, however, each section’s method below
could easily fill courses for several quarters or semesters, which is why I
can impossibly discuss every aspect or technicality of the methods and why
I will have to give you a lot of references and recommendations for further
study. Also, given the complexity of the methods involved, there will be no
discussion of how to compute them manually. Sections 5.1, 5.2, and 5.3
introduce multifactorial extensions to the chi-square test of Section 4.1.2.2,
correlation and linear regression of Section 3.2.3, and the t-test for inde-
pendent samples of Section 4.3.2.1 respectively. Section 5.4 introduces a
method called binary logistic regression, and Section 5.5 introduces an
exploratory method, hierarchical agglomerative cluster analysis.
Before we begin to look at the methods in more detail, one comment
about multifactorial methods is in order. As the name indicates, you use
such methods to explore variation in a multi-variable dataset and this ex-
ploration involves formulating a statistical model – i.e., a statistical descrip-
tion of the structure in the data – that provides the best possible characteri-
zation of the data that does not violate Occam’s razor by including more
parameters than necessary and/or assuming more complex relations be-
tween variables than are necessary. In the examples in Chapter 4, there was
little to do in terms of Occam’s razor: we usually had only one independent
variable with only two levels so we did not have to test whether a simpler
approach to the data was in fact better (in the sense of explaining the data
just as well but being more parsimonious). In this chapter, the situation will

be different. The study of multifactorial approaches typically involves a


stepwise procedure where you

− start out from the so-called maximal model, i.e., the model that includes
all predictors (i.e., all variables and their levels and their interactions)
that you are considering;
− iteratively delete the least relevant predictors (starting with the highest-
order interactions) and fit a new model; until
− you arrive at the so-called minimal adequate model, which contains
only predictors that are either significant themselves or participate in
significant higher-order interactions.
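
In R, one step of this procedure might schematically look as follows; this is only a sketch with a made-up data frame dat, a response RESPONSE, and two predictors A and B (i.e., none of these names refers to a data set used in this book):

> model.max<-lm(RESPONSE~A*B, data=dat)   # maximal model: A, B, and their interaction A:B
> model.2<-update(model.max, .~.-A:B)     # delete the (least relevant) interaction A:B
> anova(model.max, model.2)               # does the simpler model fit significantly worse?
> AIC(model.max); AIC(model.2)            # alternatively, compare the models' AIC values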

Note that while this principle is in general well-known, it is often not so


well recognized for predictors that are neither interactions of variables nor
variables but variable levels. The above definition of predictors requires
that the final, minimal adequate model should not only contain only those
interactions or variables that participate in significant interactions, but also
only those variable levels that make some significant contribution (recall
exercise #3 from the assignments for Chapter 4). In other words, the inclu-
sion of different variable levels should ideally be scrutinized for whether
variable levels much be kept apart just as the inclusions of separate va-
riables and interactions should be. Note, though, that a conflation of varia-
ble levels must make sense conceptually: it is not useful to create a new
combination of variable levels that looks nicer statistically but is concep-
tually senseless – modeling is a means to an end (namely, understanding
the data), not an end in itself. The principle of parsimony is therefore a very
important methodological guideline and will surface in different forms in
this chapter. I like to think of this so-called model selection approach – but
also of statistical approaches in general – as a kind of detective work: you
are presented with a mystery (a data set), whose internal structure you need
to explore. In order to do that, you look at the data from different angles
(using different graphs or different transformations of variables), and just
like peeling an onion, you remove layers in the form of unnecessary va-
riables or variable levels until you get to the core.
On the one hand, this makes this kind of approach more difficult than
the simple monofactorial tests in Chapter 4, and unfortunately the recom-
mendations on how to proceed differ. Some sources argue in favor of an
approach where you rigorously delete only one insignificant parameter at
a time, others recommend proceeding not so much on the basis of p-
values, but on the basis of other statistics (most notably, AIC), yet others

begin with a maximal model and delete several insignificant parameters at


once, and sometimes you find even more than one of these approaches in
the same reference. On the other hand, this kind of approach is therefore
also much more interesting; there is little more rewarding than, after some
long evaluation and model selection process, being able to say, “oh, so this
is what’s actually going on here …”

Recommendation(s) for further study:


Crawley (2007: Ch. 9) is most instructive

1. The multifactorial analysis of frequency data

In Section 4.1.2.2, I introduced how you investigate the correlation be-


tween nominal or categorical variables, but we only considered two va-
riables at the same time – GIVENNESS and CONSTRUCTION. But of course,
you often observe more than one independent variable. There are many
different methods that can be used for such data, loglinear analysis,
count/Poisson regression, sometimes binary logistic regression, correspon-
dence analysis, association rules, hierarchical configural frequency analy-
sis, and probably others. Since the approach of hierarchical configural fre-
quency analysis is so similar to the already familiar chi-square test, this is
the approach to be discussed here, but I already want to recommend to you
to read up on count/Poisson regression once you’ve read Sections 5.2 and
5.4.

1.1. Configural frequency analysis

The simple configural frequency analysis (CFA) is an extension of the chi-


square test discussed in Section 4.1.2.2. I already mentioned above that the
contribution to chi-square is a useful heuristic in the interpretation of the
data because the larger the contribution to chi-square of a cell, the more
that cell contributes to the overall interaction effect. I also mentioned that it
is possible to test contributions to chi-square for significance, and that is
the basic idea of a CFA. In some sense, a CFA is both an exploratory and a
hypothesis-testing method. It is exploratory because every possible combi-
nation of variable levels gets tested for the presence or absence of an effect,
and it tests hypotheses because each combination is subjected to a signific-
ance test (with a small additional complication to be discussed presently). I

will focus here only on cases where there is not necessarily an a priori and
precisely formulated alternative hypothesis; but the recommendations for
further study will point you to readings where such issues are also dis-
cussed.
The general procedure of a CFA is this:

Procedure
Tabulating the observed frequencies
Computing the contributions to chi-square
Computing pcorrected-values for the contribution to chi-square for df = 1

We first return to the example from Section 4.1.2.2. You investigated


the constituent order alternation of particle placement using data from Pe-
ters (2001). You loaded the files from <C:/_sflwr/_inputfiles/04-1-2-
2_vpcs.txt>, which are now also in <C:/_sflwr/_inputfiles/05-1-
1_vpcs.txt>, you computed a chi-square test, and you inspected the contri-
bution to chi-square:

Table 41. Observed construction frequencies of Peters (2001)

                             GIVENNESS: GIVEN   GIVENNESS: NEW   Row totals
CONSTRUCTION: V DO PART              85                 65              150
CONSTRUCTION: V PART DO             100                147              247
Column totals                       185                212              397

> rm(list=ls(all=T))¶
> VPCs<-read.table(choose.files(),—header=T,—sep="\t")¶
> attach(VPCs)¶
> chisq.test(table(CONSTRUCTION,—REFERENT),—correct=F)¶
————————Pearson's—Chi-squared—test
data:——table(CONSTRUCTION,—REFERENT)
X-squared—=—9.8191,—df—=—1,—p-value—=—0.001727
> chisq.test(table(CONSTRUCTION,—REFERENT),—correct=F)$res^2¶
————————————REFERENT
CONSTRUCTION———given———————new
———V_DO_PRt—3.262307——2.846825
———V_PRt_DO—1.981158——1.728841

Now, how do you compute the probability not of the chi-square value of
the whole table, but of an individual contribution to chi-square? First, you

need a df-value, which we set to 1. Second, you must correct your signific-
ance level for multiple post-hoc tests. To explain what that means, we have
to briefly go off a tangent and return to Section 1.3.4.
In that section, I explained that the probability of error is the probability
to obtain the observed result when the null hypothesis is true. This means
that probability is also the probability to err in rejecting the null hypothesis.
Finally, the significance level was defined as the threshold level or proba-
bility that the probability of error must not exceed. Now a question: if you
reject two independent null hypotheses at each p = 0.05, what is the proba-
bility that you do so correctly both times?

THINK
BREAK

This probability is 0.9025, i.e. 90.25%. Why? Well, the probability you
are right in rejecting the first null hypothesis is 0.95. But the probability
that you are always right when you reject the null hypothesis on two inde-
pendent trials is 0.95^2 = 0.9025. This is the same logic as if you were asked
for the probability to get two sixes when you simultaneously roll two dice:
(1/6)^2 = 1/36. If you look at 13 null hypotheses, then the probability that you
do not err once if you reject all of them is in fact dangerously close to 0.5,
i.e., that of getting heads on a coin toss: 0.95^13 ≈ 0.5133, which is pretty far
away from 0.95. Thus, the probability of error you use to evaluate each of
the 13 null hypotheses should better not be 0.05 – it should be much small-
er so that when you perform all 13 tests, your overall probability to be al-
ways right is 0.95. It is easy to show which probability of error you should
use instead of 0.05. If you want to test 13 null hypotheses, you must use p =
1-0.95^(1/13) ≈ 0.00394. Then, the probability that you are right on any one
rejection is 1-0.00394 = 0.99606, and the probability that you are right with
all 13 rejections is 0.99606^13 ≈ 0.95. A shorter heuristic that is just as con-
servative (some say, too conservative) is the Bonferroni correction. It con-
sists of just dividing the desired significance level – i.e., usually 0.05 – by
the number of tests – here 13. You get 0.05/13 ≈ 0.003846154, which is close
(enough) to the exact probability of 0.00394 computed above. Thus, if you
do multiple post hoc tests on a dataset, you must adjust the significance
level, which makes it harder for you to get significant results just by fishing
around in your data. Note, this does not just apply to a (H)CFA – this is a
general rule! If you do many post hoc tests, this means that the adjustment

will make it very difficult for you to still get any significant result at all,
which should motivate you to formulate reasonable alternative hypotheses
beforehand rather than endlessly perform post hoc tests.
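
You can of course let R do this little arithmetic for you (a small sketch reproducing the values just mentioned):

> 0.95^2          # probability of being right on both of 2 rejections: 0.9025
> 0.95^13         # probability of being right on all of 13 rejections: ca. 0.5133
> 1-0.95^(1/13)   # exact per-test significance level for 13 tests: ca. 0.00394
> 0.05/13         # Bonferroni approximation: ca. 0.003846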
Let’s return to the data from Peters (2001). Table 41 has four cells
which means that the post hoc p-value you would need according to the
Bonferroni correction is 0.05/4 = 0.0125 (or, if you want to be as exact as
possible, 1-0.95(1/4) ≈ 0.01274146). What is therefore the contribution to
chi-square value you need to find in the table (for df = 1)? And what are the
similarly adjusted chi-square values for p = 0.01 and p = 0.001?

THINK
BREAK

> qchisq(c(0.0125,—0.0025,—0.00025),—1,—lower.tail=F)—#—or¶
> qchisq(c(0.05,—0.01,—0.001)/4,—1,—lower.tail=F)¶
[1]——6.238533——9.140593—13.412148

Now you can check which contribution to chi-square exceeds which of


these values. You find that none of them does, which means that the overall
interaction is of course still significant but that no single cell reaches a
standard level of significance.

> which(chisq.test(table(CONSTRUCTION,—REFERENT),—
correct=F)$res^2>6.239)¶
integer(0)

Let us now look at a more complex and thus more interesting example.
As you know, you can express relations such as possession in English in
several different ways, the following two of which we are interested in.

(53) a. the president’s speech (s-genitive) = NPPossessor’s NPPossessed
     b. the speech of the president (of-construction) = NPPossessed of NPPossessor

Since again often both constructions are possible,33 one might again be
interested in determining how speakers choose which construction to use.

33. These two constructions can of course express many different relations, not just those of
possessor and possessed. Since these two are very basic and probably the archetypal
relations of these two constructions, I use these two labels as convenient cover terms.


Previous studies have shown that, among other things, the degree of anima-
cy/concreteness of the referents of the possessor and the possessed influ-
ences the constructional choice. To evaluate such results, let’s assume you
extracted 150 examples of each construction from a corpus and coded them
with regard to the construction, but also with regard to whether the posses-
sor is an animate being, a concrete object, or an abstract entity (POSSESSOR:
ANIMATE vs. CONCRETE vs. ABSTRACT), and the same for the possessed
(POSSESSED: ANIMATE vs. CONCRETE vs. ABSTRACT). Table 42 crosstabu-
lates the observed frequencies in the form of a k (rows) × m (columns) × s
(slices) table.

Table 42. POSSESSOR × POSSESSED × GENITIVE


POSSESSED     ABSTRACT    CONCRETE    ANIMATE     Totals
POSSESSOR     OF    S     OF    S     OF    S     OF    S    Totals
ABSTRACT      80   37      9    8      3    2     92   47      139
CONCRETE      22    0     20    1      0    0     42    1       43
ANIMATE        9   58      1   35      6    9     16  102      118
Totals       111   95     30   44      9   11    150  150      300
             206          74          20

A CFA now tests whether the observed frequencies of the so-called con-
figurations – variable level combinations – are larger or smaller than ex-
pected by chance. If a configuration is more frequent than expected, it is
referred to as a type; if it is less frequent than expected, it is referred to as
an antitype. In this example, this means you test which configurations of a
construction with a particular possessor and a particular possessed are pre-
ferred and which are dispreferred.
First, for convenience’s sake, the data are transformed into the tabular
format of Table 43, whose four left columns contain the same data as Table
42. The column “expected frequency” contains the expected frequencies,
which were computed in exactly the same way as for the two-dimensional
chi-square test: you multiply all totals of a particular cell and divide by
n^(number of variables-1). For example, for the configuration POSSESSOR: ABSTRACT,
POSSESSED: ABSTRACT, GENITIVE: OF you multiply 139 (the overall fre-
quency of abstract possessors) with 206 (the overall frequency of abstract
possesseds) with 150 (the overall frequency of of-constructions), and then
you divide that by 300^(3-1), etc. The rightmost column contains the contribu-
tions to chi-square, which add up to the chi-square value of 181.47 for the
complete table.
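
To make the computation concrete, here is a minimal sketch that recomputes the first expected frequency and its contribution to chi-square from the totals in Table 42:

> e<-139*206*150/300^2; e   # expected frequency for ABSTRACT/ABSTRACT/OF: ca. 47.72
> (80-e)^2/e                # its contribution to chi-square: ca. 21.83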

Table 43. CFA: POSSESSOR × POSSESSED × GENITIVE


Configuration observed expected contrib. to
POSSESSOR POSSESSED GENITIVE frequency frequency chi-square
ABSTRACT ABSTRACT OF 80 47.72 21.83
ABSTRACT ABSTRACT S 37 47.72 2.41
ABSTRACT CONCRETE OF 9 17.14 3.87
ABSTRACT CONCRETE S 8 17.14 4.88
ABSTRACT ANIMATE OF 3 4.63 0.58
ABSTRACT ANIMATE S 2 4.63 1.5
CONCRETE ABSTRACT OF 22 14.76 3.55
CONCRETE ABSTRACT S 0 14.76 14.76
CONCRETE CONCRETE OF 20 5.3 40.73
CONCRETE CONCRETE S 1 5.3 3.49
CONCRETE ANIMATE OF 0 1.43 1.43
CONCRETE ANIMATE S 0 1.43 1.43
ANIMATE ABSTRACT OF 9 40.51 24.51
ANIMATE ABSTRACT S 58 40.51 7.55
ANIMATE CONCRETE OF 1 14.55 12.62
ANIMATE CONCRETE S 35 14.55 28.73
ANIMATE ANIMATE OF 6 3.93 1.09
ANIMATE ANIMATE S 9 3.93 6.53
Totals 300 300 181.47

For df = (k·m·s)-(k+m+s)+number of variables-1 = 12, this chi-square
value is highly significant:

> pchisq(181.47,—12,—lower.tail=F)¶
[1]—2.129666e-32

This means the global null hypothesis can be rejected: there is a signifi-
cant interaction between the animacy/concreteness of the possessor, of the
possessed, and the choice of construction (χ2 = 181.47; df = 12; p < 0.001).
But how do you now decide whether a particular contribution to chi-square
is significant and, therefore, indicative of a potentially interesting type or
antitype? You compute the adjusted significance levels, from those you
compute the adjusted critical chi-square values that need to be exceeded for
significant types and antitypes, and then you check which of the contribu-
tions to chi-square exceed these adjusted critical chi-square values:

> qchisq(c(0.05,—0.01,—0.001)/18,—1,—lower.tail=F)¶
[1]——8.947972—11.919293—16.248432

− types:
– POSSESSED: ABSTRACT of POSSESSOR: ABSTRACT (***);
– POSSESSED: CONCRETE of POSSESSOR: CONCRETE (***);
– POSSESSOR: ANIMATE ‘S POSSESSED: CONCRETE (***);
− antitypes:
– POSSESSOR: CONCRETE ‘s POSSESSED: ABSTRACT (**);
– POSSESSED: CONCRETE of POSSESSOR: ANIMATE (**);
– POSSESSED: ABSTRACT of POSSESSOR: ANIMATE (***).

Thus, in addition to the rejection of the global null hypothesis, there are
significant types and antitypes: animate entities are preferred as possessors
of concrete entities in s-genitives whereas abstract and concrete possessors
prefer of-constructions. More comprehensive analysis can reveal more, and
we will revisit this example shortly. There is no principled upper limit to
the number of variables and variable levels CFAs can handle (as long as
your sample is large enough, and there are also extensions to CFAs that
allow for slightly smaller samples) so this method can be used to study very
complex patterns, which are often not taken into consideration.
Before we refine and extend the above analysis, let us briefly look at
how such data can be tabulated easily. First, load the data from <C:/_sflwr/
_inputfiles/05-1-1_genitives.txt>:

> rm(list=ls(all=T))¶
> Genitive<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="")¶
> attach(Genitive)¶
> str(Genitive)¶
'data.frame':———300—obs.—of——4—variables:
—$—CASE—————:—int——1—2—3—4—5—6—7—8—9—10—...
—$—POSSESSOR:—Factor—w/—3—levels—"abstract","animate",..:—...
—$—POSSESSED:—Factor—w/—3—levels—"abstract","animate",..:—...
—$—GENITIVE—:—Factor—w/—2—levels—"of","s":—1—1—1—1—1—1—1—...

The simplest way to tabulate the data involves the functions table and
prop.table, which you already know. For example, you can use table
with more than two variables. Note how the order of the variable names
influences the structure of the tables that are returned.

> table(GENITIVE,—POSSESSOR,—POSSESSED)¶
> table(POSSESSOR,—POSSESSED,—GENITIVE)¶

Note also what this gives you; it will be important later:

> table(POSSESSOR,—POSSESSED,—GENITIVE)[,,1]¶
> table(POSSESSOR,—POSSESSED,—GENITIVE)[,,"of"]¶

The function ftable offers another interesting way to tabulate. For our
present purposes, this function takes three kinds of arguments:

− it can take a data frame and then cross-tabulates all variables so that the
levels of the left-most variable vary the slowest;
− it can take several variables as arguments and then cross-tabulates them
such that the levels of the left-most variable vary the slowest;
− it can take a formula in which the dependent variable and independent
variable(s) are again on the left and the right of the tilde respectively.

> ftable(POSSESSOR,—POSSESSED,—GENITIVE)¶
> ftable(GENITIVE~POSSESSOR+POSSESSED)¶
————————————————————GENITIVE—of——s
POSSESSOR—POSSESSED
abstract——abstract———————————80—37
——————————animate—————————————3——2
——————————concrete————————————9——8
animate———abstract————————————9—58
——————————animate—————————————6——9
——————————concrete————————————1—35
concrete——abstract———————————22——0
——————————animate—————————————0——0
——————————concrete———————————20——1

You can combine this approach with prop.table. In this case, it would
be useful to be able to have the row percentages (because these then show
the proportions of the constructions):

> prop.table(ftable(GENITIVE~POSSESSOR+POSSESSED),—1)¶
————————————————————GENITIVE—————————of——————————s
POSSESSOR—POSSESSED
abstract——abstract———————————0.68376068—0.31623932
——————————animate————————————0.60000000—0.40000000
——————————concrete———————————0.52941176—0.47058824
animate———abstract———————————0.13432836—0.86567164
——————————animate————————————0.40000000—0.60000000
——————————concrete———————————0.02777778—0.97222222
concrete——abstract———————————1.00000000—0.00000000
——————————animate———————————————————NaN————————NaN
——————————concrete———————————0.95238095—0.04761905

Many observations we will talk about below fall out from this already.
You can immediately see that, for POSSESSOR: ABSTRACT and POSSESSOR:
CONCRETE, the percentages of GENITIVE: OF are uniformly higher than
those of GENITIVE: S, while the opposite is true for POSSESSOR: ANIMATE.
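
If you prefer a graphical overview of this three-way table, here is a minimal sketch using base R's mosaicplot (assuming the genitive data are still attached; the plot's exact appearance is a matter of taste):

> mosaicplot(table(POSSESSOR, POSSESSED, GENITIVE), color=TRUE,
   main="POSSESSOR x POSSESSED x GENITIVE")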

1.2. Hierarchical configural frequency analysis

The kind of approach discussed in the last section is a method that can be
applied to high-dimensional interactions. However, the alert reader may have
noticed two problematic aspects of it. First, we looked at all configurations
of the three variables – but we never determined whether an analysis of
two-way interactions would actually have been sufficient; recall Occam’s
razor and the comments regarding model selection from above. Maybe it
would have been enough to only look at POSSESSOR × GENITIVE because
this interaction would have accounted for the constructional choices suffi-
ciently. Thus, a more comprehensive approach would test:

− the three-way interaction POSSESSOR × POSSESSED × GENITIVE (which is what we did);
− a two-way interaction POSSESSOR × POSSESSED;
− a two-way interaction POSSESSOR × GENITIVE;
− a two-way interaction POSSESSED × GENITIVE.34, 35

Second, the larger the numbers of variables and variable levels, the
larger the required sample size since (i) the chi-square approach requires
that most expected frequencies are greater than or equal to 5 and (ii) with
small samples, significant results are hard to come by.
Many of these issues can be addressed fairly unproblematically. With
regard to the former problem, you can of course compute CFAs of the
above kind for every possible subtable, but even in this small example this
is somewhat tedious. This is why the files from the companion website of this
book include an interactive script you can use to compute CFAs for all
possible interactions, a so-called hierarchical configural frequency analy-
sis. Let us apply this method to the genitive data.

34. Since we are interested in the constructional choice, the second of these interactions is
of course not particularly relevant.
35. Strictly speaking, you should also test whether all three levels of POSSESSOR and
POSSESSED are needed, and, less importantly, you can also look at each variable in isola-
tion.

Start R, maximize the console, and enter this line (check ?source):

> source("C:/_sflwr/_scripts/05-1_hcfa_3-2.r")¶

Then you enter hcfa()¶, which starts the function HCFA 3.2. Unlike
most other R functions, HCFA 3.2 is interactive so that the user is prompt-
ed to enter the information the script requires to perform the analysis. Apart
from two exceptions, the computations are analogous to those from Section
5.1.1 above.
After a brief introduction and some acknowledgments, the function ex-
plains which kinds of input it accepts. Either the data consist of a raw data
list of the familiar kind (without a first column containing case numbers,
however!), or they are in the format shown in the four left columns of Ta-
ble 43; the former should be the norm. Then, the function prompts you for
a working directory, and you should enter an existing directory (e.g.,
<C:/_sflwr/_inputfiles>) and put the raw data file <05-1-2_genitives.txt> in
there.
Then, you have to answer several questions. First, you must specify the
format of the data. Since the data come in the form of a raw data list, you
enter 1¶. Then, you must specify how you want to adjust the p-values for
multiple post hoc tests. Both options use exact binomial tests of the kind
discussed in Sections 1.3.4.1 and 4.3.1.2. The first option is the Bonferroni
correction from above, the second is the so-called Holm adjustment, which
is just as conservative – i.e., it also guarantees that you do not exceed an
overall probability of error of 0.05 – but can detect more significant confi-
gurations than the Bonferroni correction. The first option is only included
for the sake of completeness, you should basically always use the Holm
correction: 2¶.
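
If you want to get a feel for how the two adjustments differ outside of the interactive script (this is not necessarily how HCFA 3.2 computes them internally), base R's p.adjust offers both; a minimal sketch with three hypothetical p-values:

> p.values<-c(0.001, 0.01, 0.04)           # three hypothetical p-values
> p.adjust(p.values, method="bonferroni")  # each p-value multiplied by the number of tests (capped at 1)
> p.adjust(p.values, method="holm")        # step-down procedure; never more conservative than Bonferroni
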
As a next step, you must specify how the output is to be sorted. You can
choose the effect size measure Q (1), the observed frequencies (2), the p-
values (3), the contributions to chi-square (4), or simply nested tables (5). I
recommend option 1 (or sometimes 5): 1¶. Then, you choose the above
input file: R shows you the working directory you defined before, and you
select the relevant input file there. From that table, R determines the number of sub-
tables that can be generated, and in accordance with what we said above, in
this case these are 7. R generates files containing these subtables and saves
them into the working directory.
Then, you are prompted which of these tables you want to include in the
analysis. Strictly speaking, you would only have to include the following:
POSSESSOR × GENITIVE, POSSESSED × GENITIVE, POSSESSOR × POSSESSED × GENITIVE. Why?

THINK
BREAK

Again, the variables in isolation tell you little you don’t already know –
you already know that there are 150 s-genitives and 150 of-constructions –
or they do not even involve the constructional choice. Second, this is of
course also true of the interaction POSSESSOR × POSSESSED. Just for now,
you still choose all tables that are numbered from <0001*.txt> to
<0007*.txt> in the working directory.
Now you are nearly done: the function does the required computations
and finally asks you which of the working tables you want to delete (just
for housekeeping). Here you can specify, for example, the seven interim
tables since the information they contain is also part of the three output
files the script generated. That’s it.
Now, what is the output and how can you interpret and summarize it?
First, consider the file <HCFA_output_sum.txt> from the working directory (or the
file I prepared earlier, <C:/_sflwr/_outputfiles/05-1-2_genitives_HCFA_output_sum.txt>).
This file provides summary statistics for each of the sev-
en subtables, namely the results of a chi-square test (plus a G-square test
statistic that I am not going to discuss here). Focusing on the three relevant
tables, you can immediately see that the interactions POSSESSOR ×
GENITIVE and POSSESSOR × POSSESSED × GENITIVE are significant, but
POSSESSED × GENITIVE is not significant. This suggests that POSSESSED
does not seem to play an important role for the constructional choice direct-
ly but, if at all, only in the three-way interaction, which you will examine
presently. This small overview already provides some potentially useful
information.
Let us now turn to <05-1-2_genitives_HCFA_output_complete.txt>.
This file contains detailed statistics for each subtable. Each subtable is re-
ported with columns for all variables but the variable(s) not involved in a
statistical test simply have periods instead of their levels. Again, we focus
on the three relevant tables only.
First, POSSESSOR × GENITIVE (beginning in line 71). You again see that
this interaction is highly significant, but you also get to see the exact distri-
bution and its evaluation. The six columns on the left contain information
of the kind you know from Table 43. The column “Obs-exp” shows how
the observed value compares to the expected one. The column
“P.adj.Holm” provides the adjusted p-value with an additional indicator in
the column “Dec” (for decision). The final column labeled “Q” provides
the so-called coefficient of pronouncedness, which indicates the size of the
effect: the larger Q, the stronger the configuration. You can see that, in
spite of the correction for multiple post hoc tests, all six configurations are
at least very significant. The of-construction prefers abstract and concrete
possessors and disprefers animate possessors while the s-genitive prefers
animate possessors and disprefers abstract and concrete ones.
Second, POSSESSED × GENITIVE. The table as a whole is insignificant as
is each cell: the p-values are high, the Q-values are low.
Finally, POSSESSOR × POSSESSED × GENITIVE. You already know that
this table represents a highly significant interaction. Here you can also see,
however, that the Holm correction identifies one significant configuration
more than the more conservative Bonferroni correction above. Let us look
at the results in more detail. As above, there are two types involving of-
constructions (POSSESSED: ABSTRACT of POSSESSOR: ABSTRACT and
POSSESSED: CONCRETE of POSSESSOR: CONCRETE) and two antitypes
(POSSESSED: CONCRETE of POSSESSOR: ANIMATE and POSSESSED: ABSTRACT
of POSSESSOR: ANIMATE). However, the noteworthy point here is that these
types and antitypes of the three-way interactions do not tell you much that
you don’t already know from the two-way interaction POSSESSOR ×
GENITIVE. You already know from there that the of-construction prefers
POSSESSOR: ABSTRACT and POSSESSOR: CONCRETE and disprefers
POSSESSOR: ANIMATE. That is, the three-way interaction does not tell you
much new about the of-construction. What about the s-genitive? There are
again two types (POSSESSOR: ANIMATE ‘s POSSESSED: CONCRETE and
POSSESSOR: ANIMATE ‘s POSSESSED: ABSTRACT) and one antitype
(POSSESSOR: CONCRETE ‘s POSSESSED: ABSTRACT). But again, this is not big
news: the two-way interaction already revealed that the s-genitive is pre-
ferred with animate possessors and dispreferred with concrete possessors,
and there the effect sizes were even stronger.
Finally, what about the file <05-1-2_genitives_HCFA_output_
hierarchical.txt>? This file is organized in a way that you can easily import
it into spreadsheet software. As an example, cf. the file <05-1-2_genitives_
hierarchical.ods>. In the first sheet, you find all the data from <05-1-2_
genitives_HCFA_output_hierarchical.txt> without a header or footer. In the
second sheet, all configurations are sorted according to column J, and all
types and antitypes are highlighted in blue and red respectively. In the third
and final sheet, all non-significant configurations have been removed and
all configurations that contain a genitive are highlighted in bold. With this
kind of highlighting, even complex data sets can be analyzed relatively
straightforwardly.
To sum up: the variable POSSESSED does not have a significant influ-
ence on the choice of construction and even in the three-way interaction it
provides little information beyond what is already obvious from the two-
way interaction POSSESSOR × GENITIVE. This example nicely illustrates
how useful a multifactorial analysis can be compared to a simple chi-square
test.

Recommendation(s) for further study


− Krauth (1993), Lautsch and von Weber (1995) (both in German), and
von Eye (2002) on CFAs
− Agresti (2002) for an overview of many different kinds of analysis for
categorical data
− the functions pairs and mosaicplot as well as assoc (from the li-
brary(vcd)) to explore multidimensional frequency tables graphically
− the functions loglin and loglm (the latter from the library(MASS))
to compute loglinear analyses
− the functions ca and mjca (from the library(ca)) to compute corres-
pondence analyses

2. Multiple regression analysis

In Sections 3.2.3 and 4.4.1, we looked at how to compute and evaluate the
correlation between an independent ratio-scaled variable and a dependent
ratio-scaled variable using the Pearson product-moment correlation coeffi-
cient r and linear regression. In this section, we will extend this to the case
of multiple independent variables.36 Our case study is an exploration of
how to predict speakers’ reaction times to nouns in a lexical decision task
and involves the following ratio-scaled variables:37

36. In spite of their unquestionable relevance, I can unfortunately not discuss the issues of
repeated measures and fixed/random effects in this introductory textbook without rais-
ing the overall degree of difficulty considerably. For repeated-measures ANOVAs,
Johnson (2008: Section 4.3) provides a very readable introduction; for mixed effects, or
multi-level, models, cf. esp. Gelman and Hill (2006), but also Baayen (2008: Ch. 7) and
Johnson (2008: Sections 7.3 and 7.4).
37. The words (but not the reaction times) are borrowed from a data set from Baayen’s
excellent (2008) introduction, and all other characteristics of these words were taken
from the MRC Psycholinguistic Database; cf. <http://www.psy.uwa.edu.au/mrcdatabase/mrc2.html>
for more detailed explanations regarding the variables.

− a dependent variable, namely the reaction time to words in a lexical
decision task REACTTIME, whose correlation with the following independent
variables you are interested in;
− an independent variable NUMBERLETTERS, which corresponds to the
number of letters of each stimulus word;
− an independent variable KF-WRITTENFREQ, which corresponds to their
frequency (according to Kučera and Francis 1967);
− an independent variable FAMILIARITY, which is a familiarity index de-
rived from merging three different familiarity norms;
− an independent variable CONCRETENESS, which reflects the rated con-
creteness from merging three different studies;
− an independent variable IMAGEABILITY, which indicates the imageabili-
ty of the referent of the word from the same three studies;
− an independent variable MEANINGFULNESS, which indicates the mea-
ningfulness rating of the stimulus word.

This is the overall procedure of a multiple linear regression:

Procedure
Formulating the hypotheses
Computing the observed correlations and inspecting graphs
Testing the main assumption(s) of the test:
the variances of the residuals are homogeneous in the populations
from which the samples were taken or, at least, in the samples
themselves
the residuals are normally distributed (with a mean of 0) in the
populations from which the samples were taken or, at least,
in the samples themselves
Computing the multiple correlation R2 and the regression parameters
Computing the test statistic F, the degrees of freedom df, and the probabili-
ty of error p

We begin, as usual, with the hypotheses. Let us assume we wish to test
each independent variable’s effect in isolation as well as all interactions of
maximally two variables. To keep the hypotheses short, we use an extension
of the coefficient of determination r2, namely its multiple regression
equivalent multiple R2:

H0: There is no correlation between REACTTIME on the one hand and
the independent variables and their pairwise interactions on the
other hand: multiple R2 = 0.
H1: There is a correlation between REACTTIME on the one hand and at
least one of the independent variables and/or of their pairwise
interactions on the other hand: multiple R2 > 0.

You clear the memory (if you did not already start a new instance of R)
and load the data from the file <C:/_sflwr/_inputfiles/05-2_reactiontimes.txt>;
note how the stimulus words are used as row names:

> rm(list=ls(all=T))¶
> ReactTime<-read.table(choose.files(),—header=T,—sep="\t",—
row.names=1,—comment.char="",—quote="")¶
> summary(ReactTime)¶
——REACTTIME————————NO_LETT————————KF_WRITFREQ—
Min.———:523.0———Min.———:—3.000———Min.———:——0.00
1st—Qu.:589.4———1st—Qu.:—5.000———1st—Qu.:——1.00
Median—:617.7———Median—:—6.000———Median—:——3.00
Mean———:631.9———Mean———:—5.857———Mean———:——8.26
3rd—Qu.:663.6———3rd—Qu.:—7.000———3rd—Qu.:——9.00
Max.———:794.5———Max.———:10.000———Max.———:117.00
—FAMILIARITY—————CONCRETENESS————IMAGEABILITY———MEANINGFUL_CO
Min.———:393.0———Min.———:564.0———Min.———:446.0———Min.———:315.0
1st—Qu.:470.5———1st—Qu.:603.5———1st—Qu.:588.0———1st—Qu.:409.8
Median—:511.0———Median—:613.5———Median—:604.0———Median—:437.5
Mean———:507.4———Mean———:612.4———Mean———:600.5———Mean———:436.0
3rd—Qu.:538.5———3rd—Qu.:622.0———3rd—Qu.:623.0———3rd—Qu.:466.2
Max.———:612.0———Max.———:662.0———Max.———:644.0———Max.———:553.0
NA's———:—22.0———NA's———:—25.0———NA's———:—24.0———NA's———:—29.0

For numerical vectors, the function summary returns the summary statis-
tics you already saw above; for factors, it returns the frequencies of the
factor levels and also provides the number of cases of NA, of which there
are a few. Before running multifactorial analyses, it often makes sense to
spend some more time on exploring the data to avoid falling prey to out-
liers or other noteworthy datapoints (recall Section 3.2.3). There are several
useful ways to explore the data. One involves plotting pairwise scatterplots
between columns of a data frame using pairs (this time not from the
library(vcd)). You add the arguments labels=… and summarize the
overall trend with a smoother (panel=panel.smooth) in Figure 59:

> pairs(ReactTime,—labels=c("Reaction\ntime",—"Number\nof—
letters",—"Kuc-Francis\nwritten—freq",—"Familiarity",—
"Concreteness",—"Imageability",—"Meaningfulness"),—
panel=panel.smooth)¶

You immediately get a first feel for the data. For example, FAMILIARITY
exhibits a negative trend, and so does IMAGEABILITY. On the other hand,
NUMBERLETTERS shows a positive trend, and CONCRETENESS and KF-
WRITTENFREQ appear to show no clear patterns. However, since word
frequencies are usually skewed and we can see there are some outlier fre-
quencies in the data, it makes sense here to log the frequencies (which li-
nearizes them) and see whether that makes a difference in the correlation
plot (we add 1 before logging to take care of zeroes):

> ReactTime[,3]<-log(ReactTime[,3]+1)¶
> pairs(ReactTime,—labels=c("Reaction\ntime",—"Number\nof—
letters",—"Kuc-Francis\nwritten—freq",—"Familiarity",—
"Concreteness",—"Imageability",—"Meaningfulness"),—
panel=panel.smooth)¶

As you can see (I do not show this second scatterplot matrix here), there
is now a correlation between KF-WRITTENFREQ and REACTTIME of the
kind we would intuitively expect, namely a negative one: the more frequent
the word, the shorter the reaction time (on average). We therefore continue
with the logged values.
To quantify the relations, you can also generate a pairwise correlation
matrix with the already familiar function cor. Since you have missing data
here, you can instruct cor to disregard the missing data only from each
individual correlation. Since the output is rather voluminous, I only show
the function call here. You can see some correlations of the independent
variables with REACTTIME that look promising …

> round(cor(ReactTime,—method="pearson",—
use="pairwise.complete.obs"),—2)¶

It is also obvious, however, that there are some data points which de-
viate from the main bulk of data points. (Of course, that was already indi-
cated to some degree in the above summary output). For example, there is
one very low IMAGEABILITY value. It could therefore be worth the
effort to also look at how each variable is distributed. You can do that using
boxplots, but in a more comprehensive way than before. First, you will use
only one line to plot all boxplots; second, you will make use of the numerical
output of boxplots (which so far I haven’t even told you about):

Figure 59. Pairwise scatterplots

> par(mfrow=c(2,4))¶
> boxplot.output<-apply(ReactTime,—2,—boxplot)¶
> plot(c(0,—2),—c(1,—7),—xlab="",—ylab="",—main="'legend'",—
type="n",—axes=F);—text(1,—7:1,—labels=paste(1:7,—
names(ReactTime),—sep="="))¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

Two things happened here, one visibly, the other invisibly. The visible
thing is the graph: seven boxplots were created, each in one panel, and the
last of the eight panels contains a legend that tells you for the number of
each boxplot which variable it represents (note how paste is used to paste
the numbers from one to seven together with the column names, separated
by a “=”).

Figure 60. Boxplots for all columns in the data frame

The invisible thing is that the data structure boxplot.output now con-
tains a lot of statistics. This data structure is a list, a structure mentioned
before very briefly in Section 4.1.1.2. I will not discuss this data structure
here in great detail, suffice it here to say that many functions in R use lists
to store their results (as does boxplot) because this data structure is very
flexible in that it can contain many different data structures of all kinds.
In the present case, the list contains seven lists, one for each boxplot
(enter str(boxplot.output)¶ to see for yourself). Each of the lists con-
tains two matrices and four vectors, which contain information on the basis
of which R plotted the boxplots. What we are interested in here is the
fourth element of each list, which is called out and contains what R consi-
dered outliers. How do you get the fourth element of each of the seven lists
in the list boxplot.output? You can either do this manually for each list
with subsetting. The only thing you need to know is that parts of lists are
usually not extracted with single square brackets but double square brack-
ets. Thus, this is how you get the outliers from the first group:

> boxplot.output[[1]][[4]]¶
—gherkin————stork
776.6582—794.4522

The more elegant alternative is this, but don’t dwell long on how this
works for now – read up on sapply when you’re done with this chapter; I
only show the function call.

> sapply(boxplot.output,—"[",—4)¶

The data points you get here are each variable’s outliers that are plotted
separately in the boxplots and that, sometimes at least, stick out in the scat-
terplots. We will have to be careful to see whether or not these give rise to
problems in the linear modeling process later.
Before we begin with the regression, a few things need to be done. First,
you will need to tell R how the parameters of the linear regression are to be
computed (more on that later). Second, you will want to disregard the in-
complete cases (because R would otherwise do that later anyway) so you
downsize the data frame to one that contains only complete observations
(with the function complete.cases). Third, for reasons that will become
more obvious below, all predictor variables are centered (with scale from
Section 3.1.4). Then you can attach the new data frame and we can begin:

> options(contrasts=c("contr.treatment",—"contr.poly"))¶
> ReactTime<-ReactTime[complete.cases(ReactTime),]¶
> ReactTime.2<-ReactTime¶
> ReactTime.2[,-1]<-apply(ReactTime.2[,-1],—2,—scale,—
scale=F)¶
> attach(ReactTime.2)¶

As before we use the function lm, but this time we list several indepen-
dent variables. Recall, we want to test each independent variable, but also
each pairwise interaction. Thankfully, you don’t have to enter all interac-
tions manually because there is a shorthand notation for that: if you put all
variables for which you want interactions into parentheses and add a “^n”
(where n is an integer), then R will generate and test all interactions up to
the level n. Thus, you can write this to start your work on the regressions (I
add a data=… argument to the lm function, which is strictly speaking not
necessary since we used attach, but it makes some plotting etc. below
easier. I omit the call and all significance codes in the results):38

> model.1<-lm(REACTTIME—~—(CONCRETENESS—+—FAMILIARITY—+—
IMAGEABILITY—+—KF_WRITFREQ—+—MEANINGFUL_CO—+—NO_LETT)
^2,—data=ReactTime.2)¶
> summary(model.1)¶
[…]
Residuals:
————Min——————1Q——Median——————3Q—————Max
-56.378—-16.013——-1.352——13.313——49.753

Coefficients:
—————————————————————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)————————————————604.081415———7.258239——83.227——<—2e-16
CONCRETENESS—————————————————0.174377———0.384053———0.454——0.65356
FAMILIARITY—————————————————-0.318653———0.168723——-1.889——0.07015
IMAGEABILITY————————————————-0.111855———0.360045——-0.311——0.75853
KF_WRITFREQ—————————————————-6.627089———7.046023——-0.941——0.35560
MEANINGFUL_CO———————————————-0.177886———0.208846——-0.852——0.40213
NO_LETT—————————————————————12.371051———3.481681———3.553——0.00148
CONCRETENESS:FAMILIARITY————-0.008318———0.015268——-0.545——0.59052
CONCRETENESS:IMAGEABILITY————0.050307———0.021627———2.326——0.02807
CONCRETENESS:KF_WRITFREQ—————0.662994———0.458249———1.447——0.15990
CONCRETENESS:MEANINGFUL_CO———0.020024———0.017726———1.130——0.26896
CONCRETENESS:NO_LETT—————————0.163612———0.237059———0.690——0.49620
FAMILIARITY:IMAGEABILITY————-0.007331———0.007790——-0.941——0.35528
FAMILIARITY:KF_WRITFREQ——————0.493292———0.245883———2.006——0.05534
FAMILIARITY:MEANINGFUL_CO————0.004569———0.005688———0.803——0.42912
FAMILIARITY:NO_LETT——————————0.120892———0.118363———1.021——0.31649
IMAGEABILITY:KF_WRITFREQ————-0.114299———0.517466——-0.221——0.82691
IMAGEABILITY:MEANINGFUL_CO——-0.022628———0.009809——-2.307——0.02928
IMAGEABILITY:NO_LETT————————-0.429974———0.227784——-1.888——0.07028
KF_WRITFREQ:MEANINGFUL_CO————0.046693———0.274243———0.170——0.86612
KF_WRITFREQ:NO_LETT—————————-4.959232———3.472913——-1.428——0.16520
MEANINGFUL_CO:NO_LETT————————0.029051———0.185781———0.156——0.87695
---
Residual—standard—error:—33.55—on—26—degrees—of—freedom
Multiple—R-squared:—0.6942,—————Adjusted—R-squared:—0.4472
F-statistic:—2.811—on—21—and—26—DF,——p-value:—0.006743

38. So far, we always tested the assumptions of a test before we actually did it. However,
since testing the appropriateness of a linear regression requires values that you only get
from it, we compute the regression first and then evaluate its appropriateness.
39. We use the second, adjusted R2 value. The first one has the undesirable characteristic
that it can only get larger as you include additional independent variables. The adjusted
value, on the other hand, takes into consideration not only the amount of explained
variance, but also the number of independent variables used to explain this amount of
variance by subtracting a small amount from the R2-value, which effectively penalizes
the inclusion of many irrelevant variables.
40. The residual standard error is the square root of the quotient of the residual sums of
squares divided by the residual degrees of freedom (in R:
sqrt(sum(residuals(model.1)^2)/26)¶); I will not discuss this any further.

A lot of information … Let us begin at the bottom. There is a very significant
correlation of intermediate strength: multiple R2 = 0.4472; F = 2.811;
df = 21, 26; p = 0.0067.39 We ignore the residual standard error and turn to
the coefficients.40 In the middle of the output under “Coefficients”, you can
see the results for every variable and every interaction.
Some variables and interactions reach the standard level of significance,
but most do not. Many publications now provide this table in their results
section, but we will use a different strategy. Following Occam’s razor, we
will now in a stepwise fashion eliminate variables/interactions that are nei-
ther significant nor involved in significant interactions because, since they
are not significant, their influence on REACTTIME is probably only random
anyway and they must be omitted from the subsequent models.
As a first step, let us exclude the least significant interaction:
MEANINGFULNESS:NUMBERLETTERS (p ≈ 0.88). Thankfully, you don’t
have to now spell out the complete model – you can use update:

> model.2<-update(model.1,—~.—-—MEANINGFUL_CO:NO_LETT)¶

This tells R to create a new linear model, model.2, which is the same as
model.1 (that’s what model.1,—~. means), but does not contain (hence the
minus) the specified interaction. Let’s look at the new model (I now only
provide the coefficients, the R2-values, and the overall significance test.)

> summary(model.2)¶
[…]
(Intercept)————————————————604.427392———6.786756——89.060——<—2e-16
CONCRETENESS—————————————————0.162072———0.369051———0.439——0.66404
FAMILIARITY—————————————————-0.320052———0.165413——-1.935——0.06355
IMAGEABILITY————————————————-0.125755———0.342538——-0.367——0.71639
KF_WRITFREQ—————————————————-6.616024———6.917212——-0.956——0.34733
MEANINGFUL_CO———————————————-0.175571———0.204522——-0.858——0.39820
NO_LETT—————————————————————12.293915———3.383722———3.633——0.00116
CONCRETENESS:FAMILIARITY————-0.007639———0.014370——-0.532——0.59936
CONCRETENESS:IMAGEABILITY————0.049802———0.020995———2.372——0.02507
CONCRETENESS:KF_WRITFREQ—————0.689974———0.416785———1.655——0.10941
CONCRETENESS:MEANINGFUL_CO———0.018903———0.015918———1.188——0.24535
CONCRETENESS:NO_LETT—————————0.186387———0.183630———1.015——0.31911
FAMILIARITY:IMAGEABILITY————-0.007115———0.007526——-0.945——0.35284
FAMILIARITY:KF_WRITFREQ——————0.488236———0.239304———2.040——0.05122
FAMILIARITY:MEANINGFUL_CO————0.004644———0.005564———0.835——0.41126
FAMILIARITY:NO_LETT——————————0.131016———0.097279———1.347——0.18924
IMAGEABILITY:KF_WRITFREQ————-0.080299———0.461007——-0.174——0.86302
IMAGEABILITY:MEANINGFUL_CO——-0.022963———0.009398——-2.444——0.02136
IMAGEABILITY:NO_LETT————————-0.411003———0.189274——-2.171——0.03885
KF_WRITFREQ:MEANINGFUL_CO————0.022696———0.223142———0.102——0.91974
KF_WRITFREQ:NO_LETT—————————-4.936940———3.406722——-1.449——0.15880
[…]
Multiple—R-squared:—0.6939,—————Adjusted—R-squared:—0.4672
F-statistic:—3.061—on—20—and—27—DF,——p-value:—0.003688

What has happened now that a non-significant interaction has been re-
moved? First, multiple R2 is smaller, but only a tiny little bit – 0.0003 –
which is not surprising since we dropped only an insignificant interaction
from the model. Second and more interestingly, adjusted R2 is larger and
the p-value has decreased to nearly half the first value, again because we
dropped an interaction without losing much predictive power. Put different-
ly, we were rewarded for dropping useless variables/interactions, which
changes the degrees of freedom. Third, note that the p-values changed:
IMAGEABILITY:NUMBERLETTERS was marginally significant in model.1 (p
= 0.07028) but it is now significant (p = 0.03885). This is important be-
cause it shows that in such a multifactorial linear model, each predictor’s
effect is not evaluated in isolation but in the context/presence of the other
predictors in the model: when one predictor is removed or added, every-
thing else in the model can change.
Given that R2 of model.2 is nearly exactly as large as R2 of model.1, the
models don’t seem to differ significantly from each other, but let us also
test that. We can use the function anova for that, which for this application
just takes the two models as arguments (where one model is a subset of the
other): (I again omit the model definitions from the output.)

> anova(model.1,—model.2)¶
Analysis—of—Variance—Table
[…]
——Res.Df—————RSS—Df—Sum—of—Sq——————F—Pr(>F)
1—————26—29264.7
2—————27—29292.2—-1—————-27.5—0.0245——0.877

The p-value of 0.877 shows we were clearly justified in omitting the
interaction: there is no significant difference between the models so
Occam’s razor forces us to adopt the simpler one, model.2. And you may
even recognize that p-value: it’s the p-value of the interaction
MEANINGFULNESS:NUMBERLETTERS, which we deleted from the model.

Let’s move on: there are still many insignificant predictors to delete, and
you delete predictors in a stepwise fashion, from highest-order interac-
tions to lower-order interactions to main effects. Thus, we choose the next
most insignificant one, KF-WRITTENFREQ:MEANINGFULNESS. Since there
will be quite a few steps before we arrive at the minimal adequate model, I
now often provide only very little output; you will see the complete output
when you run the code.

> model.3<-update(model.2,—~.—-—KF_WRITFREQ:MEANINGFUL_CO)¶
> summary(model.3);—anova(model.2,—model.3)¶
[…]
Multiple—R-squared:—0.6938,—————Adjusted—R-squared:—0.486
F-statistic:—3.339—on—19—and—28—DF,——p-value:—0.001926
Analysis—of—Variance—Table
[…]
——Res.Df—————RSS—Df—Sum—of—Sq——————F—Pr(>F)
1—————27—29292.2
2—————28—29303.4—-1—————-11.2—0.0103—0.9197

Same story: multiple R2 decreased a little, adjusted R2 increased a little,
p decreased. The interaction of CONCRETENESS and KF-WRITTENFREQ
changed to ‘marginally significant’, NUMBERLETTERS changed to highly
significant, but not much else has happened. Again, anova shows that the
model simplification was justified: p ≈ 0.92.
And we go on like this, always eliminating the interaction with the larg-
est p-value with update and checking that with anova … Follow along on
the basis of the code file and see how adjusted R2 changes, how variables’
and interactions’ p-values change, and how the ratio of significant predic-
tors in our model increases:

> model.4<-update(model.3,—~.—-—IMAGEABILITY:KF_WRITFREQ)¶
> summary(model.4);—anova(model.3,—model.4)¶
> model.5<-update(model.4,—~.—-—CONCRETENESS:FAMILIARITY)¶
> summary(model.5);—anova(model.4,—model.5)¶
> model.6<-update(model.5,—~.—-—FAMILIARITY:MEANINGFUL_CO)¶
> summary(model.6);—anova(model.5,—model.6)¶
> model.7<-update(model.6,—~.—-—FAMILIARITY:IMAGEABILITY)¶
> summary(model.7);—anova(model.6,—model.7)¶
> model.8<-update(model.7,—~.—-—CONCRETENESS:NO_LETT)¶
> summary(model.8);—anova(model.7,—model.8)¶
> model.9<-update(model.8,—~.—-—FAMILIARITY:NO_LETT)¶
> summary(model.9);—anova(model.8,—model.9)¶
> model.10<-update(model.9,—~.—-—CONCRETENESS:KF_WRITFREQ)¶
> summary(model.10);—anova(model.9,—model.10)¶
> model.11<-update(model.10,—~.—-—KF_WRITFREQ:NO_LETT)¶
> summary(model.11);—anova(model.10,—model.11)¶
> model.12<-update(model.11,—~.—-—CONCRETENESS:IMAGEABILITY)¶
> summary(model.12);—anova(model.11,—model.12)¶
> model.13<-update(model.12,—~.—-—IMAGEABILITY:NO_LETT)¶
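
As a cross-check of this manual procedure (this is not how the selection above was done, just an additional way of looking at it), base R's drop1 lists, for every term that can be removed without violating the model hierarchy, a test of the model with and without that term:

> drop1(model.13, test="F")   # F-tests for each droppable term of model.13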

Now an interesting situation arises: This is the first time all interactions
that are still in the model are at least marginally significant:

> summary(model.13);—anova(model.12,—model.13)¶
[…]
—————————————————————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)————————————————607.675439———5.600854—108.497—<—2e-16
CONCRETENESS————————————————-0.080520———0.267324——-0.301—0.764898
FAMILIARITY—————————————————-0.265266———0.152372——-1.741—0.089792
IMAGEABILITY————————————————-0.232681———0.273245——-0.852—0.399800
KF_WRITFREQ—————————————————-5.349738———5.154223——-1.038—0.305861
MEANINGFUL_CO———————————————-0.008723———0.180080——-0.048—0.961617
NO_LETT—————————————————————11.575720———2.869773———4.034—0.000256
CONCRETENESS:MEANINGFUL_CO———0.020318———0.007844———2.590—0.013529
FAMILIARITY:KF_WRITFREQ——————0.312600———0.130241———2.400—0.021393
IMAGEABILITY:MEANINGFUL_CO——-0.008990———0.003435——-2.617—0.012657
---
[…]
Multiple—R-squared:—0.5596,—————Adjusted—R-squared:—0.4553
F-statistic:—5.364—on—9—and—38—DF,——p-value:—9.793e-05
Analysis—of—Variance—Table
[…]
——Res.Df———RSS—Df—Sum—of—Sq——————F—Pr(>F)
1—————37—40678
2—————38—42151—-1—————-1473—1.3396—0.2545

You may now think that we go on removing non-significant main effects
(such as MEANINGFULNESS) from the regression model until all pre-
dictors are significant. However, while one in general weeds out non-
significant predictors in the way we did, recall from above that one does
not weed out non-significant predictors that still participate in higher inte-
ractions. In this case, MEANINGFULNESS is the least significant predictor so
you may want to remove it – but you don’t because MEANINGFULNESS still
participates in a significant interaction. In fact, if you look at all non-
significant variables, you will find that this is true of all of them, which
means that we cannot simplify the model any further and that model.13 is
our final, minimally adequate model, which indicates that there is an over-
all correlation that is highly significant.
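
If you want to see at a glance which terms have survived the simplification, one quick way is this one-line sketch:

> formula(model.13)   # the terms of the minimal adequate model
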
Now, how well does this regression equation predict the observed reac-
tion times? You can assess the model on the basis of adjusted R2, but you
can also check the exact predictions. Of course, you can generate a regres-
sion equation as in the sections on correlations above – an interaction of
two ratio-scaled variables is by default modeled as the product of the two
variables’ values for each data point – but this becomes very tedious so you
use predict again:

> head(predict(model.13))—#—or—head(fitted(model.13))¶
——————ant—————apple—asparagus————banana———————bat————beaver
—581.4894——577.0885——645.0601——582.2992——598.7205——646.0908

And, as mentioned above in Section 3.2, you can also use predict to
make predictions for combinations of values that were not observed. If you
wanted to predict the value for the first word (this was of course observed,
this is just so that you can check you get the right output), you specify the
desired variable values in a list called newdata:

> predict(model.13,—newdata=list(NO_LETT=-2.625,—KF_WRITFREQ=
0.1089857,—FAMILIARITY=-5.208333,—CONCRETENESS=
-7.645833,—IMAGEABILITY=10.85417,—MEANINGFUL_CO=
-20.97917))¶
———————1
581.4894

Let us briefly have a look at which words’ reaction times are predicted
well and which are not. The first of the following two lines sets up an emp-
ty coordinate system. (I set the limits of the y-axis manually so that all resi-
duals can be shown and that the y-axis extends in both directions symmetr-
ically around 0.) The second line plots the words at the x-axis values 1 to 8
(which also means, the position of a word on the x-axis does not mean any-
thing: words are just spread out to avoid overplotting). Other and maybe
nicer ways to plot this are shown in the code file.

> plot(1:8,—xlim=c(0,—9),—ylim=c(-100,—100),—xlab="",—
ylab="Residuals—in—ms",—type="n");—grid()¶
> text(rep(1:8,—6),—residuals(model.13),—labels=
row.names(ReactTime.2),—cex=0.9)¶

Obviously, the reaction times for the words squirrel, potato, asparagus,
and tortoise are underestimated while the reaction times for the words
sheep, spider, apple, and orange, for instance, are strongly overestimated,
which could be explored further depending on the study’s objectives. One
thing worth mentioning, though, is that the words whose reaction times are
not predicted well are not all exactly ones that looked like outliers in the
variable-specific boxplots earlier. One of the words that appeared to be an
outlier that would potentially bias the results (horse) is predicted rather
well. Thus, the practice of simply excluding some high or low values (e.g.,
because they are two or three standard deviations away from the mean) can
exclude data from consideration that can be accounted for very well. We
will look at more appropriate ways to identify outliers below.

Figure 61. Residuals for all words

Some further explanations/comments: first, multiple R2 corresponds to
the squared correlation of the observed and predicted values, as you can
easily verify by entering cor(REACTTIME,—predict(model.13))^2¶.
Second, the coefficients in the linear regression are of course only esti-
mates, which is why it is always useful to also look at their confidence
intervals to get a better idea of their precision. You don’t have to compute
them manually as in Section 3.1.5, but can use the function confint, which
only requires the linear model as its only argument (for the default 95%
confidence interval, that is – for a 99%-confidence interval, you add
level=0.99).

> confint(model.13)¶
———————————————————————————————————2.5—%———————97.5—%
(Intercept)————————————————596.337102767—619.013775682
CONCRETENESS————————————————-0.621690201———0.460649732
FAMILIARITY—————————————————-0.573727338———0.043195748
IMAGEABILITY————————————————-0.785836257———0.320474310
KF_WRITFREQ————————————————-15.783916233———5.084440221
MEANINGFUL_CO———————————————-0.373276344———0.355829352
NO_LETT——————————————————————5.766167271——17.385272293
CONCRETENESS:MEANINGFUL_CO———0.004437865———0.036197360
FAMILIARITY:KF_WRITFREQ——————0.048939463———0.576259695
IMAGEABILITY:MEANINGFUL_CO——-0.015943219——-0.002036064

But now what do the coefficients (which were computed using centered
predictors, remember?) and their confidence intervals mean? In this case
here, the coefficients of the main effects correspond to the predictive dif-
ference a variable makes with the other predictors in the model at their
average values. Why is that so? This is so because we used centered predic-
tors in our regression, which makes sure that the mean of the previously
uncentered raw predictors is now 0. In fact, this is one of two reasons why
we centered them: if you do not center variables this way, the coefficients
are harder to interpret and are sometimes not particularly meaningful (cf.
below for an example). The second reason is that centering predictors pro-
tects a bit against what is called collinearity of predictors, the undesirable
phenomenon that some predictors may be correlated with each other, which
can affect the coefficients and the power of the analysis. While this is a
problem too large to be discussed here, the present data set in its raw form
suffers from high collinearity whereas the centered form does not.
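
A minimal sketch of this collinearity point (ReactTime holds the uncentered values, ReactTime.2 the centered ones; the exact figures depend on the data): the correlation between a predictor and a product term involving it drops considerably once the variables are centered.

> with(ReactTime,   cor(KF_WRITFREQ, FAMILIARITY*KF_WRITFREQ))  # raw: close to 1
> with(ReactTime.2, cor(KF_WRITFREQ, FAMILIARITY*KF_WRITFREQ))  # centered: much smaller
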
Thus, when all other variables are at their average, then a one-letter in-
crease of a word increases the predicted reaction time by 11.576 ms. When
all other variables are at their average, then an increase of one unit of
FAMILIARITY decreases the predicted reaction time by 0.265 ms. From that,
can you guess what the coefficient for the intercept actually is?

THINK
BREAK

Here’s the answer in R:

> predict(model.13,—newdata=list(NO_LETT=0,—KF_WRITFREQ=0,—
FAMILIARITY=0,—CONCRETENESS=0,—IMAGEABILITY=0,—
MEANINGFUL_CO=0))¶
———————1
607.6754

The coefficient for the intercept is the predicted reaction time when
each predictor is at its average. (And if we had not centered the predictors,
the coefficient for the intercept would be the predicted reaction time when
all variables are zero, which is completely meaningless here.)
For the coefficients of interactions, the logic is basically the same. Let’s
look at CONCRETENESS:MEANINGFULNESS, which had a positive coeffi-
cient, 0.020318. When both increase by 100, then, all other things being
equal, they change the predicted reaction time by the sum of

− 100 · -0.080520 = -8.0520 (the main effect of CONCRETENESS);
− 100 · -0.008723 = -0.8723 (the main effect of MEANINGFULNESS);
− 100 · 100 · 0.020318 = 203.18ms (their interaction).

Disregarding differences due to rounding, the sum of these values
(194.2557) is what R outputs, too:

> means.everywhere<-predict(model.13,—newdata=
list(NO_LETT=0,—KF_WRITFREQ=0,—FAMILIARITY=0,—
CONCRETENESS=0,—IMAGEABILITY=0,—MEANINGFUL_CO=0));—
means.everywhere¶
607.6754
> both.positive<-predict(model.13,—newdata=list(NO_LETT=0,—
KF_WRITFREQ=0,—FAMILIARITY=0,—CONCRETENESS=100,—
IMAGEABILITY=0,—MEANINGFUL_CO=100))¶
> both.positive-means.everywhere¶
———————1
194.2518

On the other hand, when both decrease by 100, then, all other things be-
ing equal, they change the prediction by the sum of

− -100 · -0.080520 = 8.0520 (the main effect of CONCRETENESS);
− -100 · -0.008723 = 0.8723 (the main effect of MEANINGFULNESS);
− -100 · -100 · 0.020318 = 203.18ms (their interaction).

> both.negative<-predict(model.13,—newdata=
list(NO_LETT=0,—KF_WRITFREQ=0,—FAMILIARITY=0,—
CONCRETENESS=-100,—IMAGEABILITY=0,—
MEANINGFUL_CO=-100))
> both.negative-means.everywhere
———————1
212.1005

Third, note that the sizes of the coefficients in the regression equation
do not reflect the strengths of the effects. These values have more to do
with the different scales on which the variables are measured than with
their importance. You must also not try to infer the effect sizes from the p-
values. Rather, what you do is you compute the linear model again, but this
time not on the centered values, but on the standardized values of both the
dependent variable and all predictors, i.e., the columnwise z-scores of the
involved variables and interactions (i.e., you will need scale again but this
time with the default setting scale=T). For that, you need to recall that
interactions in this linear regression model are products of the interacting
variables. However, you cannot simply write the product of two variables
into a linear model formula because the asterisk you would use for the
product already means something else, namely ‘all variables in isolation
and all their interactions’. You have to tell R something like ‘this time I
want the asterisk to mean mere multiplication’, and the way to do this is
by putting the multiplication in brackets and prefixing it with I. Thus:

> model.13.effsiz<-lm(scale(REACTTIME)—~—scale(CONCRETENESS)—+—
scale(FAMILIARITY)—+—scale(IMAGEABILITY)—+—scale(KF_WRITFREQ)—+—
scale(MEANINGFUL_CO)—+—scale(NO_LETT)—+—
I(scale(CONCRETENESS)*scale(MEANINGFUL_CO))—+—
I(scale(FAMILIARITY)*scale(KF_WRITFREQ))—+—
I(scale(IMAGEABILITY)*scale(MEANINGFUL_CO)))¶
> round(coef(model.13.effsiz),—2)¶
——————————————————————————————————(Intercept)
————————————————————————————————————————-0.13
——————————————————————————scale(CONCRETENESS)
————————————————————————————————————————-0.04
———————————————————————————scale(FAMILIARITY)
————————————————————————————————————————-0.28
——————————————————————————scale(IMAGEABILITY)
————————————————————————————————————————-0.17
———————————————————————————scale(KF_WRITFREQ)
————————————————————————————————————————-0.14
—————————————————————————scale(MEANINGFUL_CO)
————————————————————————————————————————-0.01
———————————————————————————————scale(NO_LETT)
—————————————————————————————————————————0.48
I(scale(CONCRETENESS)—*—scale(MEANINGFUL_CO))
—————————————————————————————————————————0.41
———I(scale(FAMILIARITY)—*—scale(KF_WRITFREQ))
—————————————————————————————————————————0.40
I(scale(IMAGEABILITY)—*—scale(MEANINGFUL_CO))
————————————————————————————————————————-0.28

These coefficients, which are sometimes called standardized regression
coefficients or beta weights, are one, though not completely unproblematic,
way of assessing the degree of importance of each predictor left in the
model. These standardized coefficients range from -1 to 1 and can be inter-
preted like correlation coefficients: positive and negative values reflect
positive and negative correlations respectively (cf. the code file for a graph-
ical representation of the effect sizes). The results from the above pairwise
scatterplot are confirmed: CONCRETENESS and MEANINGFULNESS exhi-
bited virtually no trends and have standardized coefficients very close to
zero, and NUMBERLETTERS exhibited a positive trend and has a positive
standardized coefficient. Note, however, that you cannot simply interpret
all beta weights in this straightforward way because the interactions show
that each of these variables participates in interactions that make it neces-
sary to qualify their effect.
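
One way to turn these beta weights into the kind of graphical representation mentioned above is a simple barplot of the absolute values, sorted by size (a sketch, not necessarily identical to the plot in the code file):

> beta.weights<-coef(model.13.effsiz)[-1]   # all coefficients except the intercept
> barplot(sort(abs(beta.weights)), horiz=TRUE, las=1, cex.names=0.5,
   xlab="absolute standardized coefficient")
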
Consider, for example, again the interaction CONCRETENESS:
MEANINGFULNESS. As you can see, as main effects, both variables contri-
bute very little to the predictive power of the model: they are in fact the two
weakest predictors as their beta weights are closest to zero. On the other
hand, their interaction is the second strongest predictor in the whole model.
One quick and dirty way to explore this interaction consists of dichotomiz-
ing the two variables (and since we centered all variables, we can dicho-
tomize with 0 as the boundary) and then determining the means of the four
possible combinations of the levels of the two dichotomized variables.
Again, since interactions are often easier to under-
stand from one perspective, we immediately generate both tables:

> CONC.MEAN.1<-tapply(predict(model.13),—
list(CONCRETENESS>0,—MEANINGFUL_CO>0),—mean);—
CONC.MEAN.1¶
—————————FALSE—————TRUE
FALSE—633.1282—601.5534
TRUE——610.3415—603.3445
> CONC.MEAN.2<-tapply(predict(model.13),—
list(MEANINGFUL_CO>0,—CONCRETENESS>0),—mean);—
CONC.MEAN.2¶
—————————FALSE—————TRUE
FALSE—633.1282—610.3415
TRUE——601.5534—603.3445

What does this show?


THINK
BREAK

It shows that when MEANINGFULNESS is larger than average,
CONCRETENESS being smaller or larger than average does not make much
of a difference: the predicted means differ by less than two ms. How-
ever, when MEANINGFULNESS is smaller than average, then values of
CONCRETENESS that are also smaller than average increase the predicted
reaction time much more; if you use the code in the code file to create bar-
plots you will see the effect immediately.
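A minimal version of such a barplot, not necessarily identical to the one in the code file, could look like this:

> barplot(CONC.MEAN.1,—beside=T,—ylim=c(0,—700),—
   names.arg=c("MEANINGF_CO<=0",—"MEANINGF_CO>0"),—
   legend.text=c("CONCRETENESS<=0",—"CONCRETENESS>0"),—
   ylab="mean—predicted—reaction—time")¶
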
Let us now finally look at the model assumptions. The standard check
of the model assumptions involves four plots so you instruct R to create a
two-by-two grid and then plot the model object (if you add the argument
which=1:6, you get two more plots and must adjust the grid accordingly):

> par(mfrow=c(2,—2))¶
> plot(model.13)¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

Consider Figure 62. What do these graphs mean? The two left graphs
test the assumption that the variance of the residuals is constant. Both
plot the fitted/predicted values on the x-axis against the residuals on the
y-axis (as the raw residuals or the square root of the absolute standardized
residuals). Ideally,
both graphs would show a scattercloud without much structure, especially
no structure such that the dispersion of the values increases or decreases
from left to right. Here, both graphs look good.41 Several words – potato,
squirrel, and apple/tortoise – are marked as potential outliers. Also, the
plot on the top left shows that the residuals are distributed well around the
desired mean of 0.
The assumption that the residuals are distributed normally also seems
met: The points in the top right graph should be rather close to the dashed
line, which they are; again, three words are marked as potential outliers.
But you can of course also do a Shapiro-Wilk test on the residuals, which
yields the result hoped for.

41. You can also use ncv.test from the library(car): library(car);—
ncv.test(model.13)¶, which returns the desired non-significant result.

Figure 62. Model-diagnostic plots

> shapiro.test(residuals(model.13))¶
————————Shapiro-Wilk—normality—test
data:——residuals(model.13)
W—=—0.9769,—p-value—=—0.458

Finally, the bottom right plot plots the standardized residuals against the
so-called leverage. Leverage is a measure of how much a point can influ-
ence a model. (Note: this is not the same as outliers, which are points that
are not explained well by a model such as, here, squirrel.) You can com-
pute these leverages most easily with the function hatvalues, which only
requires the fitted linear model as an argument. (In the code file, I also
show you how to generate a simple plot with leverages.)

> hatvalues(model.13)¶
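
A simple version of the leverage plot mentioned above could look like this; the cut-off shown, twice the mean leverage, is only one common rule of thumb:

> lev<-hatvalues(model.13)¶
> plot(lev,—type="h",—xlab="case—number",—ylab="leverage")¶
> abline(h=2*mean(lev),—lty=2)—#—a—common—rule-of-thumb—cut-off¶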

As you could see, there is only one word with a very large leverage,
clove, which is why we do not need to worry about this too much (recall
from above that clove was the word that turned up several times as an out-
lier in the boxplot). One thing you might also want to try is to fit the model
again without the word squirrel since each plot in the model diagnostics
has it marked as a point that has been fitted rather badly. Let’s see what
happens:

> model.13.wout.squirrel<-lm(formula(model.13),—
data=ReactTime.2[-43,])¶
> summary(model.13.wout.squirrel)¶
[…]
Coefficients:
—————————————————————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)————————————————607.107405———5.346363—113.555——<—2e-16
CONCRETENESS————————————————-0.107829———0.255182——-0.423—0.675065
FAMILIARITY—————————————————-0.257604———0.145320——-1.773—0.084517
IMAGEABILITY————————————————-0.402052———0.271746——-1.480—0.147467
KF_WRITFREQ—————————————————-4.541263———4.928068——-0.922—0.362754
MEANINGFUL_CO————————————————0.090203———0.177531———0.508—0.614401
NO_LETT—————————————————————10.765275———2.761037———3.899—0.000392
CONCRETENESS:MEANINGFUL_CO———0.018404———0.007530———2.444—0.019406
FAMILIARITY:KF_WRITFREQ——————0.273133———0.125477———2.177—0.035953
IMAGEABILITY:MEANINGFUL_CO——-0.008513———0.003282——-2.594—0.013523
[…]
Multiple—R-squared:—0.5655,—————Adjusted—R-squared:—0.4599
F-statistic:—5.352—on—9—and—37—DF,——p-value:—0.0001094

The coefficients do not change that much and both multiple R2 and,
more importantly, adjusted R2 go up a bit, but ideally you would also look
at the coefficients you get for the effect size model. If you do that (cf. the
code file), you find that the largest change arises for IMAGEABILITY. The
word squirrel was clearly responsible for some residual variance. Note that
you can of course not eliminate all values you don’t like – you must ana-
lyze the data carefully to justify the elimination of data points.
Let us now sum up the results:42 “A linear regression was used to study
the effects of NUMBERLETTERS, KF-WRITTENFREQ, FAMILIARITY,

42. A short comment is still necessary: the example above may not be ideal because the
range of the dependent variable is limited: reaction times can, for example, not be nega-
tive but the regression equation may well predict negative values. It is important to bear
in mind that the regression equation’s predictive power is best only for the range of ob-
served values. Other kinds of regression are sometimes recommended to deal with such
cases because their link functions restrict the range of predicted values. For example,
Poisson regressions only predict positive values (cf. Faraway 2006: Ch. 3 and Crawley
2007: Ch. 14, 16 for discussion of Poisson regression and regressions involving percen-
tages).

CONCRETENESS, IMAGEABILITY, and MEANINGFULNESS and their pairwise
interactions on the reaction times of 48 words in lexical decision tasks (29
incomplete cases were discarded). A first model with all independent va-
riables and their pairwise interactions was fit. In a stepwise procedure, then
the variable or interaction with the highest (insignificant) p-value was de-
leted until only variables and interactions participating in significant pre-
dictors remained. This model contained all six independent variables, as
well as the significant interactions CONCRETENESS:MEANINGFULNESS,
FAMILIARITY:KF-WRITTENFREQ, and IMAGEABILITY:MEANINGFULNESS.
Model diagnostics showed that the assumptions of linear regressions were
not violated, but that one word, squirrel, was marked as an outlier in sever-
al tests. The above model resulted in a highly significant overall correlation
(adjusted R2 = 0.4553; F = 5.364; df = 9, 38; p < 0.001) with the following
parameters: [show the table of coefficients from model.13 and graphs]. In
this model, the variable NUMBERLETTERS has the strongest effect (and
increases reaction times) whereas MEANINGFULNESS and CONCRETENESS
as main effects have the weakest effects [show the table of coefficients
from model.13.effsiz and maybe graphs with beta-weights]. The five
words that are predicted worst in this model are squirrel, potato, apple,
spider, and asparagus, and the word that has the strongest effect on the
model parameters is clove. [… plus your interpretation]”

Recommendation(s) for further study


− the function step, to have R perform model selection automatically
based on criteria other than the coefficients’ p-values rather than doing it
yourself with anova; cf. also the functions leaps and regsubsets
(from the library(leaps)), the function fastbw (from the li-
brary(Design)), and the function stepAIC (from the library(MASS)).
Warning: automatic model selection processes can be dangerous: they
cannot include the analyst’s expert knowledge and different algorithms
can result in very (!) different results; just apply step and fastbw to this
data set …
− the function collin.fnc from the library(languageR) to test for
collinearity
− the function gvlma from the library(gvlma) to test model assumptions
− the function calc.relimp (from the library(relaimpo)), to compute
measures of importance for variables in a linear model
− the functions influence.measures, rstudent, lm.influence, and
influence, to identify outliers
− the functions qplot (from the library(ggplot)) and coefplot (from
the library(arm)) for nice plots with confidence intervals


− R also allows you to compute so-called robust regressions
(library(robust), library(MASS)) as well as nonlinear regres-
sions (e.g. with the function nls), which we cannot deal with here; cf. esp. Craw-
ley (2002, 2005) and Faraway (2005, 2006) for these (more complex)
topics as well as Good and Hardin (2006: Ch. 11) for the
library(quantreg) and further alternatives
− It is worth exploring the issue of collinearity, which refers to the threat
that highly intercorrelated predictor variables pose for regression coeffi-
cients and their p-values; cf. Faraway (2005: Sections 5.3 and 9.3)
− Harrell (2001), Crawley (2002: Ch. 13, 14, 17), Backhaus et al. (2003:
Ch. 1), Maindonald and Braun (2003: Ch. 3-6), Bortz (2005: Ch. 13),
Crawley (2007: Ch. 10, 14, 16), Gelman and Hill (2007: Ch. 3-4),
Baayen (2008: Ch. 6-7), Johnson (2008: Section 2.4, 3.2)
− Good and Hardin (2006: Ch. 10) on risks of linear regressions

3. ANOVA (analysis of variance)

In Section 4.3.2.1, I explained how you test whether arithmetic means from
two independent samples are significantly different from each other. As an
example, we looked at different F1 frequencies of men and women. The t-
test for independent samples from that section, however, cannot be applied
to cases where you need to compare more than two means (because your
single independent variable has more than two levels or because you have
more than one independent variable). For both such situations, one often
uses a method called ANOVA, for analysis of variance. In Section 5.3.1, I
will explain how to perform a monofactorial ANOVA, and Section 5.3.2
deals with a multifactorial ANOVA, i.e., an ANOVA with more than one
independent variable; in the context of ANOVAs independent variables are
often referred to as factors (which have nothing to do with factors in factor
analysis).43

43 Let me also remind you that nominal/categorial variables are ideally coded (with
strings) such that R’s read.table can automatically recognize them (cf. Section 2.5.1).
The independent variables can in fact also include ratio-scaled variables; sometimes, the
method is then referred to as ANCOVA (analysis of covariance). The overall procedure
is practically the same as with ‘regular’ ANOVAs.

3.1. Monofactorial ANOVA

To get to know monofactorial ANOVAs, we will extend the example from
Section 4.3.2.3. Above, we studied subtractive word-formation processes
and tested whether the average similarity of words entering into blends is
different from that of words entering into complex clippings. If you want to
include a third group of source words, the t-test or the U-test are not appli-
cable anymore. In such a case, our question involves

− a dependent ratio-scaled variable, namely the similarity of the source
words measured in DICE coefficients, whose average you are looking at;
− an independent categorical variable, namely PROCESS: BLEND vs.
PROCESS: COMPLCLIP vs. PROCESS: COMPOUND.

The overall procedure of an ANOVA is this:

Procedure
Formulating the hypotheses
Computing the means; inspecting graphical representations
Testing the main assumption(s) of the test:
the variable values in the groups have homogeneous variances and
are normally distributed in the populations from which the
samples were taken or, at least, in the samples themselves
the residuals are normally distributed (with a mean of 0) in the
populations from which the samples were taken or, at least,
in the samples themselves
Computing the multiple correlation R2 and the differences of means
Computing the test statistic F, the degrees of freedom df, and the
probability of error p

As usual, you begin with the hypotheses, which will be simplified be-
low:

H0: The means of the Dice coefficients of the source words entering
into the three kinds of word-formation processes do not differ from
each other: mean(Dice coefficients of the blends) = mean(Dice coeffi-
cients of the complex clippings) = mean(Dice coefficients of the com-
pounds).
H1: There is at least one difference between the average Dice coeffi-
cients of the three word-formation processes: H0, with at least one
“≠” instead of a “=“.

You first clear R’s memory and then load the data from
<C:/_sflwr/_inputfiles/05-3-1_dices.txt>.

> rm(list=ls(all=T))¶
> Dices<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> attach(Dices);—str(Dices)¶
'data.frame':———120—obs.—of——3—variables:
—$—CASE———:—int——1—2—3—4—5—6—7—8—...
—$—PROCESS:—Factor—w/—3—levels—"Blend","ComplClip",..:—3—...
—$—DICE———:——num——0.0032—0.0716—0.037—0.0256—0.01—0.0415—...

Then you compute the means and represent the data in a boxplot. To
explore the graphical possibilities a bit more, you could also add the three
means and the grand mean into the plot; I only provide a shortened version
of the code here, but you will find the complete code in the code file for
this chapter.

> boxplot(DICE~PROCESS,—notch=T,—ylab="Dices");—grid()¶
> text(1:3,—c(0.05,—0.15,—0.15),—labels=paste("mean=\n",—
round(tapply(DICE,—PROCESS,—mean),—4),—sep=""))¶
> rug(DICE,—side=4)¶
> abline(mean(DICE),—0,—lty=2)¶
> text(1,—mean(DICE),—"Grand—mean",—pos=3,—cex=0.8)¶

The graph already suggests a highly significant result:44 The Dice coef-
ficients for each word-formation process appear to be normally distributed
(the boxes and the whiskers are rather symmetric around the medians and
means); the medians differ strongly from each other and are outside of the
ranges of each other’s notches. Theoretically, you could again be tempted
to end the statistical investigation here and interpret the results, but, again,
of course you can’t really do that … Thus, you proceed to test the assump-
tions of the ANOVA.
The first assumption is that the variances of the variables in the groups
in the population, or the samples, are homogeneous. This assumption can
be tested with an extension of the F-tests from Section 4.2.2, the Bartlett-
test. The hypotheses correspond to those of the F-test:

H0: The variances of the Dice coefficients of the three word-formation
processes are identical: variance(Dice coefficients of the blends) =
variance(Dice coefficients of the complex clippings) = variance(Dice
coefficients of the compounds).

44. Cf. the code file for this chapter for other graphs.

H1: The variances of the Dice coefficients of the three word-formation
processes are not all identical: H0, with at least one “≠” instead of a
“=”.

Figure 63. Boxplot for DICE~PROCESS

The standard deviations, i.e., the square roots of the variances, are very similar to each
other:

> round(tapply(DICE,—PROCESS,—sd),—2)¶
————Blend—ComplClip—Compound
—————0.02——————0.02—————0.02

In R, you can use the function bartlett.test. Just like with most tests
you have learned about above, you can use a formula and, unsurprisingly, the
variances do not differ significantly from each other:

> bartlett.test(DICE~PROCESS)¶
————————Bartlett—test—of—homogeneity—of—variances
data:——DICE—by—PROCESS
Bartlett's—K-squared—=—1.6438,—df—=—2,—p-value—=—0.4396

The other assumption will be tested once the linear model has been
created (just like for the regression).
A monofactorial ANOVA is based on an F-value which is the quotient
of the variability in the data that can be associated with the levels of the
independent variable divided by the variability in the data that remains
unexplained. One implication of this is that the formulation of the hypo-
theses can be simplified as follows:

H0: F = 1.
H1: F > 1.

I will not discuss the manual computation in detail but will immediately
turn to how you do this in R. For this, you will again use the functions lm
and anova, which you already know. Again, you first define a contrast
option that tells R how to compute the parameters of the linear model.45

> options(contrasts=c("contr.sum",—"contr.poly"))¶
> model<-lm(DICE~PROCESS)¶
> anova(model)¶
Analysis—of—Variance—Table
Response:—DICE
———————————Df———Sum—Sq——Mean—Sq—F—value————Pr(>F)
PROCESS—————2—0.225784—0.112892——332.06—<—2.2e-16—***
Residuals—117—0.039777—0.000340
[…]

In the column “Df”, you find the degrees of freedom for the indepen-
dent variable PROCESS and for the residual variance that remains unex-
plained. In the column “Sum Sq”, you find sums of squares, and the col-
umn “Mean Sq” contains the quotient Sum Sq/Df. The column “F value” con-
tains the F-value, which is the quotient 0.112892/0.00034. Finally, the column
“Pr( > F)” lists the p-value for the F-value at 2 and 117 df. This p-value
shows that the variable PROCESS accounts for a highly significant portion
of the variance of the Dice coefficients. You can even compute how much

45. The definition of the contrasts that I use is hotly debated in the literature and in statistics
newsgroups. (For experts: R’s standard setting are treatment contrasts, but ANOVA re-
sults reported in the literature are often based on sum contrasts.) I will not engage in the
discussion here which approach is better but, for reasons of comparability, will use the
latter option and refer you to the discussion in Crawley (2002: Ch. 18, 2007: 368ff.) as
well as lively repeated discussions on the R-help list.

of the variance PROCESS accounts for, namely the percentage of its sum of
squares (0.225784) out of the total sum of squares (0.225784+0.039777). The
result of this is the effect size eta-squared η2, an equivalent to the beta
weight of the regression. With one independent variable, this is also R2:

> 0.225784/(0.225784+0.039777)¶
[1]—0.8502152
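
Instead of typing the rounded sums of squares by hand, you can also compute this value directly from the ANOVA table (a sketch):

> ss<-anova(model)[["Sum—Sq"]]¶
> ss[1]/sum(ss)—#—eta-squared—for—PROCESS¶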

This output does not reveal, however, which levels of PROCESS are re-
sponsible for significant amounts of variance. Just because PROCESS as a
whole is significant, this does not mean that every single level of PROCESS
has a significant effect (even though, here, Figure 63 suggests just that). A
rather conservative way to approach this question involves the function
TukeyHSD. The first argument of this function is an object created by the
function aov (an alternative to anova), which in turn requires the relevant
linear model as an argument; as a second argument you can have the varia-
ble levels ordered by their means, which we will just do:

> TukeyHSD(aov(model),—ordered=T)¶
——Tukey—multiple—comparisons—of—means
————95%—family-wise—confidence—level
————factor—levels—have—been—ordered
Fit:—aov(formula—=—model)
$PROCESS
————————————————————————diff————————lwr————————upr—p—adj
ComplClip-Compound—0.0364675—0.02667995—0.04625505—————0
Blend-Compound—————0.1046600—0.09487245—0.11444755—————0
Blend-ComplClip————0.0681925—0.05840495—0.07798005—————0

The first two columns provide the observed differences of means; the
columns “lwr” and “upr” provide the lower and upper limits of the 95%
confidence intervals of the differences of means; the rightmost column lists
the p-values (corrected for multiple testing; cf. Section 5.1.1 above) for the
differences of means. You can immediately see that all means are highly
significantly different from each other (as the boxplot already suggested).46

46. If not all variable levels are significantly different from each other, then it is often useful
to lump together the levels that are not significantly different from each other and test
whether a new model with these combined variable levels is significantly different from
the model where the levels were still kept apart. If yes, you stick to and report the more
complex model – more complex because it has more different variable levels – other-
wise you adopt and report the simpler model. The logic is similar to the chi-square test
exercises #3 and #13 in <04_all_exercises_answerkey.r>. The code file for this section
discusses this briefly on the basis of the above data; cf. also Crawley (2007:364ff.).
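
To illustrate the logic described in note 46 with a sketch (in this data set all three levels differ significantly, so the comparison should of course favor the more complex model), you merge two levels, refit, and compare the two nested models with anova:

> PROCESS.2<-PROCESS¶
> levels(PROCESS.2)[levels(PROCESS.2)!="Compound"]<-"Blend/ComplClip"¶
> model.merged<-lm(DICE~PROCESS.2)¶
> anova(model.merged,—model)—#—significant:—keep—the—more—complex—model¶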

For additional information, you can now also look at the summary of the
linear model:

> summary(model)¶
[…]
Residuals:
———————Min—————————1Q—————Median—————————3Q————————Max
-4.194e-02—-1.329e-02——1.625e-05——1.257e-02——4.623e-02

Coefficients:
—————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)——0.080512———0.001683——47.833———<2e-16—***
PROCESS1—————0.057617———0.002380——24.205———<2e-16—***
PROCESS2————-0.010575———0.002380——-4.443—2.03e-05—***
---
[…]
Residual—standard error:—0.01844—on—117—degrees—of—freedom
Multiple—R-squared:—0.8502,—————Adjusted—R-squared:—0.8477
F-statistic:—332.1—on—2—and—117—DF,——p-value:—<—2.2e-16

At the bottom, you see the overall significance test (with F, dfPROCESS,
dfresidual, and p). Above that, you find multiple R2, which you already com-
puted, as well as the adjusted version, which takes the number of variables
and variable levels involved into consideration. We again ignore the resi-
dual standard error and turn to the coefficients. When you use sum con-
trasts, as we do here (recall the options setting), then the intercept estimate
is the overall mean of the dependent variable, and the rest of that row pro-
vides the test whether that grand mean is significantly different from 0. The
following two lines list the differences of the means for the alphabetically
first two levels of PROCESS. The value for blends – the alphabetically first
level of PROCESS – is 0.057617 larger than the overall mean. The value for
complex clippings – the alphabetically second level of PROCESS – is
0.010575 smaller than the overall mean. The value for compounds – the
alphabetically last and not listed level of PROCESS – differs from the overall
mean by -0.057617+0.010575=-0.047042. These results correspond to
those of Figure 63 and are also those returned by the function
model.tables, which again requires aov and provides the results in a more
accessible fashion:

> model.tables(aov(model))¶
Tables—of—effects
—PROCESS
PROCESS
————Blend—ComplClip—Compound
——0.05762——-0.01057—-0.04704

(Note in passing that, with model.tables(aov(model),—"means")¶,
you get not the differences between means but the means themselves – try
it out.) The additional columns of summary(model)¶ provide the standard
errors for these estimates (in the first row for the difference of the grand
mean from 0, then for the deviations of the blends’ and the complex clip-
pings’ means from the grand mean), and all these come with t-values and p-values.
Above these coefficients, you find information about the residuals, which
suggests that the residuals are distributed normally.
Let us now look at the model-diagnostic graphs. In this case, the four
graphs do not indicate big problems. The plot on the top left shows that the
residuals are just about perfectly distributed around the desired mean of 0.
Both plots on the left do not reveal any particular structure, and the resi-
duals in the upper right plot look as if they are normally distributed, which
is confirmed by a Shapiro-Wilk test:

> par(mfrow=c(2,—2))¶
> plot(lm(DICE~PROCESS))¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶
> shapiro.test(residuals(model))¶
————————Shapiro-Wilk—normality—test
data:——lm(DICE—~—PROCESS)$residuals
W—=—0.9924,—p-value—=—0.7607

Three data points appear to be outliers:

> Dices[c(14,—71,—105),]¶
————CASE———PROCESS———DICE
14————14——Compound—0.0797
71————71—ComplClip—0.0280
105——105—————Blend—0.1816

As you can see, these three cases represent (i) the maximal Dice coeffi-
cient for the compounds, (ii) the minimal Dice coefficient for complex
clippings, and (iii) the maximal Dice coefficient of all cases, which was
observed for a blend; these values get also identified as extreme when you
sort the residuals: sort(residuals(model))¶. One could now also check
which word formations these cases correspond to.
Before we summarize the results, let me briefly also show one example
of model-diagnostic plots pointing to violations of the model assumptions.
Figure 65 below shows the upper two model plots that I once found when
exploring the data of a student who had been advised by a stats consultant
to apply an ANOVA to her data. In the left panel, you can clearly see how
the range of residuals increases from left to right. In the right panel, you
can see how strongly the points deviate from the dashed line especially in
the upper right part of the coordinate system. Such plots are a clear warning
(and the function gvlma mentioned above showed that four out of five
tested assumptions were violated!). One possible follow-up would be to see
whether one can justifiably ignore the outliers indicated.

Figure 64. Model-diagnostic plots

After the evaluation, you can now summarize the analysis: “The simi-
larities of the source words of the three word-formation processes as meas-
ures in Dice coefficients are very different from each other. The average
Dice coefficient for blends is 0.1381 while that for complex clippings is
only 0.0699 and that for compounds is only 0.0335 (all standard deviations
= 0.02). [Then insert Figure 63.] According to a monofactorial ANOVA,
these differences are highly significant: F2, 117 = 332.06; p < 0.001 ***; the
variable PROCESS explains more than 80% of the overall variance: multiple
R2 = 0.85; adjusted R2 = 0.848. Pairwise post-hoc comparisons of the
means (Tukey’s HSD) show that all three means are significantly different
from each other; all ps < 0.001.” Again, it is worth emphasizing that Figure
63 already anticipated nearly all of these results; a graphical exploration is
not only useful but often in fact indispensable.

Figure 65. Problematic model-diagnostic plots

Recommendation(s) for further studies (cf. also after Section 5.3.2)


− the function aov already mentioned above is an alternative to anova
− the function model.tables, to get means (cf. above) and standard errors
of a linear model
− the function plot.design, to plot monofactorial effects
− the function oneway.test as an alternative to monofactorial ANOVAs,
which does not assume variance homogeneity
− the function kruskal.test as another non-parametric alternative to
monofactorial ANOVAs
− the function pairwise.t.test as an alternative to TukeyHSD (unlike
TukeyHSD, this function only works for monofactorial ANOVAs)
− Good and Hardin (2006: 71–73) on alternatives to ANOVAs
− (ANOVAs are often relatively robust when it comes to violations of
their assumptions.)

3.2. Two-/multifactorial ANOVA

To explain this method, I will return to the example from Section 1.5. In
that section, we developed an experimental design to test whether the nu-
merical estimates for the meaning of some depend on the sizes of the
quantified objects and/or on the sizes of the reference points used to locate
the quantified objects. We assume for now the experiment resulted in a data
set you now wish to analyze.47 Since the overall procedure of ANOVAs has
already been discussed in detail, we immediately turn to the hypotheses:

H0: The average estimates in the four experimental conditions do not
differ from each other: mean(estimates for small objects) =
mean(estimates for large objects) = mean(estimates for small refer-
ence points) = mean(estimates for large reference points) =
mean(estimates for small objects and small reference points) =
mean(estimates for small objects and large reference points) =
mean(estimates for large objects and small reference points) =
mean(estimates for large objects and large reference points).
H1: There is at least one difference between the averages: there is at
least one “≠” instead of an “=“.

This time, you also immediately formulate the short version of the sta-
tistical hypotheses:

H0: F = 1.
H1: F > 1.

As the hypotheses indicate, you look both for main effects (i.e., each in-
dependent variable’s effect in isolation) and an interaction (i.e., each in-
dependent variable’s effects in the context of the other independent varia-
ble; cf. Section 1.3.2.3). You clear the memory, load the library(car), and
load the data from <C:/_sflwr/_inputfiles/05-3-2_objectestimates.txt>. You
again use summary to check the structure of the data:

> rm(list=ls(all=T))¶
> library(car)¶
> ObjectEstimates<-read.table(choose.files(),—header=T,—

47. To make it easier to also check the results manually, I use a data set that does not con-
tain the complete experimental design from Section 1.5. The overall logic of the analysis
is of course the same as that of one based on a larger amount of data. Also, in order to
keep things simple, I again do not address the issues of repeated measures and item-
specific adjustments in the context of mixed effects / multi-level models but direct you
to the references mentioned above.

sep="\t",—comment.char="",—quote="")¶
> attach(ObjectEstimates);—summary(ObjectEstimates)¶
——————CASE—————————OBJECT———REFPOINT————ESTIMATE
—Min.———:—1.00———large:8———large:8———Min.———:—2.0
—1st—Qu.:—4.75———small:8———small:8———1st—Qu.:38.5
—Median—:—8.50———————————————————————Median—:44.0
—Mean———:—8.50———————————————————————Mean———:51.5
—3rd—Qu.:12.25———————————————————————3rd—Qu.:73.0
—Max.———:16.00———————————————————————Max.———:91.0

You can begin with the graphical exploration. In this case, where you
have two independent binary variables, you can begin with the standard
interaction plot, and as before you plot both graphs. The following two
lines generate the kind of graph we discussed above in Section 3.2.2.2:

> interaction.plot(OBJECT,—REFPOINT,—ESTIMATE,—
ylim=c(0,—90),—type="b")¶
> interaction.plot(REFPOINT,—OBJECT,—ESTIMATE,—
ylim=c(0,—90),—type="b")¶

Also, you compute all means (cf. below) and all standard deviations (cf.
the code file for this chapter):

> means.object<-tapply(ESTIMATE,—OBJECT,—mean)¶
> means.refpoint<-tapply(ESTIMATE,—REFPOINT,—mean)¶
> means.interact<-tapply(ESTIMATE,—list(OBJECT,—REFPOINT),—
mean)¶

Instead of the standard interaction plots shown in Section 3.2.2.2, I
show here two somewhat more sophisticated versions (in Figure 66). The
code file for this chapter shows you how I generated this graph. As usual, it
doesn’t matter if you do not understand the code right away – once you try
to understand the code and look up a few functions (such as arrows) you
will learn a lot about how to do such graphs yourself.
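As a starting point, here is a minimal sketch of how such error bars can be added with arrows; this is not the exact code that generated Figure 66:

> se<-function(x)—sd(x)/sqrt(length(x))—#—standard—error—of—a—mean¶
> ms<-tapply(ESTIMATE,—list(REFPOINT,—OBJECT),—mean)¶
> ses<-tapply(ESTIMATE,—list(REFPOINT,—OBJECT),—se)¶
> mids<-barplot(ms,—beside=T,—ylim=c(0,—100),—legend.text=rownames(ms),—
   xlab="OBJECT",—ylab="mean—ESTIMATE")¶
> arrows(mids,—ms-ses,—mids,—ms+ses,—angle=90,—code=3,—length=0.05)¶
> abline(h=mean(ESTIMATE),—lty=2)—#—grand—mean¶
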
Figure 66 shows both ways of looking at all means and their standard
errors. As is so often the case, the graph already anticipates virtually all
of the results:

− The grand mean – the overall mean of the dependent variable – is a bit
larger than 50;
− In the middle of the left panel, you see the means for the levels of the
variable REFPOINT when you disregard the levels of OBJECT while the
left and right parts of the left panel show you the means of the interac-
tion. Obviously, the means for the levels of REFPOINT will deviate sig-
nificantly from the grand mean since the middle error bars do not in-
clude the grand mean. On the whole, large reference points lead to larg-
er estimates and small reference points lead to smaller estimates.
− In the middle of the right panel, you see the means for the levels of the
variable OBJECT when you disregard the levels of REFPOINT while the
left and right parts of the left panel show you the means of the interac-
tion. Obviously, the means for the levels of OBJECT will not deviate
significantly from the grand mean since the middle error bars do include
the grand mean.
− Finally, Figure 66 strongly suggests that there is a significant interac-
tion: the tendency that large reference points result in higher estimates
seems to hold only for small objects.

Figure 66. Interaction plot for ESTIMATE~OBJECT*REFPOINT

The graph gives away nearly everything one might want to know about
the results, but you now compute the real statistical analysis: are the differ-
ences between the means significant or not? You begin by first testing the
assumption of variance homogeneity:

> bartlett.test(ESTIMATE~OBJECT*REFPOINT)¶
————————Bartlett—test—of—homogeneity—of—variances
data:——ESTIMATE—by—OBJECT—by—REFPOINT
Bartlett's—K-squared—=—1.9058,—df—=—1,—p-value—=—0.1674

This condition is met so you can proceed as planned. Again you first tell
R how to contrast the means with each other (sum contrasts), and then you com-
pute the linear model and the ANOVA. Since you want to test both the two
independent variables and their interaction you combine the two variables
with an asterisk (cf. Section 3.2.2.2 above).

> options(contrasts=c("contr.sum",—"contr.poly"))¶
> model<-lm(ESTIMATE—~—OBJECT*REFPOINT)¶
> summary(model)¶
[…]
Coefficients:
——————————————————Estimate—Std.—Error—t—value—Pr(>|t|)
(Intercept)—————————51.500——————4.796——10.738—1.65e-07
OBJECT1—————————————-2.250——————4.796——-0.469——0.64738
REFPOINT1———————————17.625——————4.796———3.675——0.00318
OBJECT1:REFPOINT1——-12.375——————4.796——-2.580——0.02409
[…]
Residual—standard—error:—19.18—on—12—degrees—of—freedom
Multiple—R-Squared:—0.6294,—————Adjusted—R-squared:—0.5368
F-statistic:—6.794—on—3—and—12—DF,——p-value:—0.00627

With sum contrasts, in the first column of the coefficients, the first row
(intercept) again provides the overall mean of the dependent variable: 51.5
is the mean of all estimates. The next two rows of the first column provide
the differences for the alphabetically first factor levels of the respective
variables: the mean of OBJECT: LARGE is 2.25 smaller than the overall mean
and the mean of REFPOINT: LARGE is 17.625 larger than the overall mean.
The third row of the first column shows the interaction adjustment for
OBJECT: LARGE and REFPOINT: LARGE: this cell’s mean is 12.375 smaller
than what the two main effects alone would predict.
Again, the output of model.tables shows this in a little easier way:

> model.tables(aov(ESTIMATE—~—OBJECT*REFPOINT))¶
Tables—of—effects
—OBJECT
OBJECT
large—small
-2.25——2.25
—REFPOINT
REFPOINT
——large———small
—17.625—-17.625
—OBJECT:REFPOINT
———————REFPOINT
OBJECT——large———small
——large—-12.375——12.375
——small——12.375—-12.375

Back to summary(model). The columns “Std. Error” and following pro-
vide the standard errors, t-values, and p-values. The coefficient of OBJECT
is smaller than its standard error and not significant, but the coefficients for
REFPOINT and the interaction of OBJECT and REFPOINT are significant. As
for the whole model, there is a very significant correlation, given the p-
value of 0.00627.
Let us now turn to the ANOVA table. For ANOVAs with more than one
independent variable, you use the function Anova from the library(car)
with the additional argument type="III". (You can also use that function
for monofactorial ANOVAs and would get the same results as discussed
above, but for multifactorial ANOVAs you must use it to get the results
you typically get with the default settings of other statistics software; cf. n.
45 again.)

> Anova(model,—type="III")¶
Anova—Table—(Type—III—tests)
Response:—ESTIMATE
————————————————Sum—Sq—Df——F—value————Pr(>F)
(Intercept)——————42436——1—115.3022—1.651e-07
OBJECT——————————————81——1———0.2201——0.647385
REFPOINT——————————4970——1——13.5046——0.003179
OBJECT:REFPOINT———2450——1———6.6575——0.024088
Residuals—————————4417—12

Of course the results do not change and the F-values are just the
squared t-values.48 The factor OBJECT is not significant, but REFPOINT
and the interaction are. Unlike in Section 5.2, this time we do not compute
a second analysis without OBJECT because even though OBJECT is not sig-
nificant itself, it does participate in the significant interaction.
How can this result be interpreted? According to what we said about in-
teractions above (e.g., Section 1.3.2.3) and according to Figure 66,
REFPOINT is a very significant main effect – on the whole, large reference
points increase the estimates – but this effect is actually dependent on the
levels of OBJECT such that it is really only pronounced with small objects.

48. If your model included factors with more than two levels, the results from Anova would
differ because you would get one F-value and one p-value for each predictor (factor / in-
teraction) as a whole rather than a t-value and a p-value for one level (cf. also Section
5.4). Thus, it is best to look at the ANOVA table from Anova(…,—type="III").

For large objects, the difference between large and small reference points is
negligible and, as we shall see in a bit, not significant.
To now also find out which of the independent variables has the largest
effect size, you can apply the same logic as above and compute η2. Ob-
viously, REFPOINT has the strongest effect:

> 81/(81+4970+2450+4417)—#—for—OBJECT¶
[1]—0.006796442
> 4970/(81+4970+2450+4417)—#—for—REFPOINT¶
[1]—0.4170163
> 2450/(81+4970+2450+4417)—#—for—the—interaction¶
[1]—0.2055714

You can again perform pairwise post-hoc comparisons of the means.
You find that, for the main effects, the conservative Tukey HSD test re-
turns the same p-values as the ANOVA table (because for binary factors
only one test is performed), and for the interaction, it returns only one sig-
nificant difference and one that some people call marginally significant.

> TukeyHSD(aov(model),—ordered—=—T)¶
——Tukey—multiple—comparisons—of—means
————95%—family-wise—confidence—level
————factor—levels—have—been—ordered
Fit:—aov(formula—=—model)
$OBJECT
————————————diff———————lwr——————upr—————p—adj
small-large——4.5—-16.39962—25.39962—0.6473847

$REFPOINT
—————————————diff——————lwr——————upr—————p—adj
large-small—35.25—14.35038—56.14962—0.0031786

$`OBJECT:REFPOINT`
—————————————————————————diff————————lwr———————upr—————p—adj
large:small-small:small—20.25—-20.024414——60.52441—0.4710945
large:large-small:small—30.75——-9.524414——71.02441—0.1607472
small:large-small:small—60.00——19.725586—100.27441—0.0039895
large:large-large:small—10.50—-29.774414——50.77441—0.8646552
small:large-large:small—39.75——-0.524414——80.02441—0.0534462
small:large-large:large—29.25—-11.024414——69.52441—0.1908416
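
If you prefer a graphical display of these pairwise comparisons, you can simply plot the TukeyHSD object (a sketch):

> par(mfrow=c(1,—3))¶
> plot(TukeyHSD(aov(model),—ordered=T))¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶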

Let us now turn to the diagnostic plots of the linear model.

> par(mfrow=c(2,—2))¶
> plot(model)¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

You can see that the results are not as good as one would have hoped
for (which is of course partially due to the fact that this is a very small and
invented data set). The Bartlett test above yielded the desired result, and the
plot on the top left shows that residuals are nicely distributed around 0, but
both plots on the left are not completely unstructured. In addition, the upper
right plot reveals some deviation from normality, which however turns out
to be not significant.

Figure 67. Model-diagnostic plots

> shapiro.test(residuals(model))¶
————————Shapiro-Wilk—normality—test
data:——residuals(model)
W—=—0.9878,—p-value—=—0.9973

The plot on the bottom right shows some differences between the resi-
duals both in the whole sample and between the four groups in the sample.
However, for expository reasons and since the data set is too small and
invented, and because library(gvlma);—gvlma(model)¶ does not return
significant violations, let us for now just summarize the results.
“There is an intermediately strong correlation between the size of the
average estimate and the sizes of the quantified objects and their reference
points; according to a two-factor ANOVA, this correlation is very signifi-
cant (F3, 12 = 6.79; p < 0.0063 **) and explains more than 50% of the over-
all variance (multiple adj. R2 = 0.537). However, the size of the quantified
object alone does not contribute significantly: F1, 12 < 1; p = 0.647; η2 <
0.01. The size of the reference point has a very significant influence on the
estimate (F1, 12 = 13.5; p < 0.0032; η2 = 0.417) and the strongest effect size
(η2 = 0.417). The direction of this effect is that large and small reference
points yield larger and smaller estimates respectively. However, there is
also a significant interaction showing that large reference points really
yield large estimates with small objects only (F1, 12 = 6.66; p = 0.024; η2 =
0.206). [Insert Figure 66 and the tables resulting from model.tables(
aov(ESTIMATE~OBJECT* REFPOINT),—"means")¶.] Pairwise post-hoc
comparisons with Tukey’s HSD tests show the most reliable differences
between means arise between OBJECT: small / REFPOINT: LARGE and
OBJECT: SMALL / REFPOINT: SMALL.”

Recommendation(s) for further study


− the function adonis from the library(vegan) to compute a multiva-
riate non-parametric alternative to ANOVAs
− the function all.effects and others from the library(effects) to
represent effects in linear models graphically
− the function rpart from the library(rpart), to compute classification
and regression trees as an alternative to linear models (cf. the exercise
files for examples)
− Backhaus et al. (2003: Ch. 2), Crawley (2002: Ch. 13, 15, 17), Crawley
2005: Ch. 7, 9, 12), Crawley (2007: Ch. 11, 12), Faraway (2005: Ch.
14), Baayen (2008: Section 4.4, Ch. 6-7), Johnson (2008: Ch. 4)

4. Binary logistic regression

In the last two sections, we dealt with methods in which the dependent
variable is ratio-scaled. However, in many situations the dependent variable

is binary/nominal (or categorical, which we are not going to be concerned
with). In such situations, one should not use a linear model of the kind dis-
cussed so far (with a so-called identity link) because the values of the de-
pendent variable can only take on two values (which should be coded as 0
and 1 or, preferably, as a two-level factor, as I mentioned above in Section
1.3.2.2, note 3) while the values predicted from the above kind of linear
model will be continuous and theoretically unbound. In addition, one
should also not compute a linear model in which the dependent variable
corresponds to the percentages of the levels of the 0/1-coded nominal vari-
able because a linear model of the above kind will predict negative values,
i.e., values outside of the meaningful range for percentages (cf. above note
42). Even some of the traditionally used transformations (such as the arc-
sine transformation) that have been applied to nominal dependent variables
can be problematic (cf. Jaeger 2008). Thus, for binary dependent variables,
binary logistic regression is often the best choice.
As an example, we consider another instance of a constructional alterna-
tion, this time the so-called dative alternation in English exemplified in
(54) and (55). (By the way, it seems to be nearly a running gag that every
introduction to statistics for linguists with R has at least one example in-
volving the dative alternation. I did not dare break that tradition …)

(54) a. He gave his father a book. (ditransitive)
b. He gave a book to his father. (prep. dativeto)
(55) a. He made his father a sandwich. (ditransitive)
b. He made a sandwich for his father. (prep. dativefor)

We concentrate only on the alternation in (54): ditransitives vs. preposi-
tional datives with to. While a study of the literature would again reveal a
large number of variables influencing speakers’ choices of constructions,
we focus on the following:

− one dependent nominal variable, namely the choice of construction:
CONSTRUCTION: 0/DITRANSITIVE for the ditransitive and
CONSTRUCTION: 1/PREP_DATIVE for the prepositional dative with to;
− one independent nominal variable, namely V_CHANGPOSS: NO_CHANGE
(for examples such as He gave me a hard time, which do not imply lit-
eral change of possession by transfer of an entity) and V_CHANGPOSS:
CHANGE (for examples such as He gave me a book, which do);
− three independent ratio-scaled variables, namely AGENT_ACT,
REC_ACT, and PAT_ACT, whose values indicate how familiar, or active,
the referents of the agent, the recipient, and the patient of the relevant
clause are from the preceding context: high values reflect high familiari-
ty because, say, the referent has been mentioned before often.

A binary logistic regression involves the following procedure:

Procedure
Formulating the hypotheses
Inspecting graphical representations (plus maybe computing some descrip-
tive statistics)
Testing the assumption(s) of the test: checking for overdispersion
Computing the multiple correlation Nagelkerke’s R2 and differences of
probabilities (and maybe odds)
Computing the test statistic likelihood ratio chi-square, the degrees of free-
dom df, and the probability of error p

First, the hypotheses:

H0: The frequencies of the two constructions are independent of the
independent variables and their pairwise interactions: R2 = 0.
H1: The frequencies of the two constructions are not independent of the
independent variables and their pairwise interactions: R2 > 0.

You clear the memory and load the data from <C:/_sflwr/_inputfiles/05-
4_dativealternation.txt>:
> rm(list=ls(all=T))¶
> DatAlt<-read.table(choose.files(),—header=T,—sep="\t",—
comment.char="",—quote="")¶
> summary(DatAlt)
——————CASE———————CONSTRUCTION1——————CONSTRUCTION2
—Min.———:——1.0———Min.———:0.0———ditransitive:200——
—1st—Qu.:100.8———1st—Qu.:0.0———prep_dative—:200——
—Median—:200.5———Median—:0.5—————————————————————
—Mean———:200.5———Mean———:0.5—————————————————————
—3rd—Qu.:300.2———3rd—Qu.:1.0—————————————————————
—Max.———:400.0———Max.———:1.0—————————————————————
————V_CHANGPOSS————AGENT_ACT———————REC_ACT————————PAT_ACT———
—change———:252———Min.———:0.00———Min.———:0.00———Min.———:0.000
—no_change:146———1st—Qu.:2.00———1st—Qu.:2.00———1st—Qu.:2.000
—NA's————————2———Median—:4.00———Median—:5.00———Median—:4.000
—————————————————Mean———:4.38———Mean———:4.63———Mean———:4.407
—————————————————3rd—Qu.:7.00———3rd—Qu.:7.00———3rd—Qu.:7.000
—————————————————Max.———:9.00———Max.———:9.00———Max.———:9.000

Note first that the dependent variable is (redundantly) included twice,
once as a vector of zeroes and ones (CONSTRUCTION1) and once as a factor
(CONSTRUCTION2) with the constructions as its levels. Second, there are
two cases with missing data, which you can eliminate by instructing R to
retain only the rows with complete cases. If you check the structure of the
new version of DatAlt, you see that only the 398 complete cases are used.

> DatAlt<-DatAlt[complete.cases(DatAlt),]¶
> attach(DatAlt)¶

You begin by looking at the observed frequencies in a purely descrip-
tive way using graphs: association or mosaic plots for the categorical varia-
ble V_CHANGPOSS and spineplots (and/or conditional density plots using
cdplot) for the three other independent variables.

> assocplot(table(V_CHANGPOSS,—CONSTRUCTION2))¶
> spineplot(CONSTRUCTION2—~—AGENT_ACT)¶
> spineplot(CONSTRUCTION2—~—REC_ACT)¶
> spineplot(CONSTRUCTION2—~—PAT_ACT)¶
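
The conditional density plots mentioned above are generated analogously (a sketch):

> cdplot(CONSTRUCTION2—~—AGENT_ACT)¶
> cdplot(CONSTRUCTION2—~—REC_ACT)¶
> cdplot(CONSTRUCTION2—~—PAT_ACT)¶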

The graphs suggest that V_CHANGPOSS and CONSTRUCTION interact:
when the clause implies a change of possession, then the ditransitive con-
struction is preferred, but when the clause does not, ditransitives are dispre-
ferred. AGENT_ACT does not seem to have a particularly systematic influ-
ence on the choice of construction, whereas PAT_ACT and REC_ACT appear
to have clear effects. For a more comprehensive analysis, you of course
now need to check the effect(s) of all variables at the same time. Here,
we assume you want to test (i) whether each variable has a significant ef-
fect on the choice of construction in isolation (i.e., whether what you see in
the graphs is significant) and (ii) whether the interactions of each ratio-
scaled independent variable with V_CHANGPOSS are significant.
Basically, you go through a similar model selection process as in Sec-
tion 5.2. You begin with the most complete model you are interested in and
successively remove the interactions and (later) variables with the highest
p-values, always checking with anova whether your simplifications are
justified. Two differences are important: first, you use not lm, but the func-
tion glm (for generalized linear model). Just like lm, glm is used with a
formula and, for now, the standard contrast settings, but we also need the
additional argument family=binomial, which tells R that you want a bi-
nary logistic regression (rather than the ‘normal’ linear regression from
above), which in turn makes sure that the values predicted by the model fall
into the range between 0 and 1.49 Second, the anova test is done with the
additional argument test="Chi".

> options(contrasts=c("contr.treatment",—"contr.poly"))¶
> model.glm<-glm(CONSTRUCTION1~V_CHANGPOSS*AGENT_ACT+
V_CHANGPOSS*REC_ACT+V_CHANGPOSS*PAT_ACT,—
family=binomial)¶
>—summary(model.glm)¶
[…]
Deviance—Residuals:
————Min———————1Q———Median———————3Q——————Max
-2.3148——-0.7524——-0.2891———0.7716———2.2402
Coefficients:
————————————————————————————————Estimate—Std.—Error—z—value—Pr(>|z|)
(Intercept)————————————————————-1.131682———0.430255——-2.630——0.00853
V_CHANGPOSSno_change————————————1.110805———0.779622———1.425——0.15422
AGENT_ACT——————————————————————-0.022317———0.051632——-0.432——0.66557
REC_ACT————————————————————————-0.179827———0.054679——-3.289——0.00101
PAT_ACT—————————————————————————0.414957———0.057154———7.260—3.86e-13
V_CHANGPOSSno_change:AGENT_ACT——0.001468———0.094597———0.016——0.98761
V_CHANGPOSSno_change:REC_ACT———-0.164878———0.102297——-1.612——0.10701
V_CHANGPOSSno_change:PAT_ACT————0.062054———0.105492———0.588——0.55637
[…]
(Dispersion—parameter—for—binomial—family—taken—to—be—1)
————Null—deviance:—551.74——on—397——degrees—of—freedom
Residual—deviance:—388.64——on—390——degrees—of—freedom
AIC:—404.64
Number—of—Fisher—Scoring—iterations:—5

The output is similar to what you know from lm. I will say something
about the two lines involving the notion of deviance below, but for now
you can just proceed with the model selection process as before:

> model.glm.2<-update(model.glm,—~.—-—V_CHANGPOSS:AGENT_ACT)¶
> summary(model.glm.2);—anova(model.glm,—model.glm.2,—
test="Chi")¶
> model.glm.3<-update(model.glm.2,—~.—-—V_CHANGPOSS:PAT_ACT)¶
> summary(model.glm.3);—anova(model.glm.2,—model.glm.3,—
test="Chi")¶
> model.glm.4<-update(model.glm.3,—~.—-—V_CHANGPOSS:REC_ACT)¶
> summary(model.glm.4);—anova(model.glm.3,—model.glm.4,—
test="Chi")¶
> model.glm.5<-update(model.glm.4,—~.—-—AGENT_ACT)¶
> anova(model.glm.4,—model.glm.5,—test="Chi");—
summary(model.glm.5)¶
[…]

49. I simplify here a lot and recommend Crawley (2007: Ch. 13) for more discussion of link
functions.

—————————————————————Estimate—Std.—Error—z—value—Pr(>|z|)
(Intercept)——————————-1.07698————0.32527——-3.311——0.00093
V_CHANGPOSSno_change——0.61102————0.26012———2.349——0.01882
REC_ACT——————————————-0.23466————0.04579——-5.125—2.98e-07
PAT_ACT———————————————0.43724————0.04813———9.084——<—2e-16

(Dispersion—parameter—for—binomial—family—taken—to—be—1)
————Null—deviance:—551.74——on—397——degrees—of—freedom
Residual—deviance:—391.72——on—394——degrees—of—freedom
AIC:—399.72

That’s the minimal adequate model: all remaining variables are signifi-
cant. To get a nice summary of the results that involves an R2-value and
some other useful statistics, I advise you to use another very useful func-
tion, lrm (from the library(Design)), which requires the already familiar
kind of formula. For using the library Design, it is useful to also define a
data distribution object which internally summarizes the distribution of
your data.

> library(Design)¶
> DatAlt.dd<-datadist(DatAlt);—options(datadist="DatAlt.dd")¶

Then you generate a model object (with either the vector
CONSTRUCTION1 or the factor CONSTRUCTION2 as dependent variable) and
look at the results (I omit the call in the results output):

> model.lrm<-lrm(CONSTRUCTION1—~—V_CHANGPOSS—+—REC_ACT—+—
PAT_ACT,—x=T,—y=T,—linear.predictors=T);—model.lrm¶
[…]
Frequencies—of—Responses
——0———1
200—198
——Obs——Max—Deriv—Model—L.R.———————d.f.——————————P———————C
——398——————3e-10—————160.01——————————3——————————0———0.842
————————Dxy——————Gamma——————Tau-a—————————R2————————Brier
——————0.685——————0.688——————0.343——————0.441——————— 0.161
——————————————————————Coef————S.E.————Wald—Z—P
Intercept—————————————-1.0770—0.32528—-3.31——0.0009
V_CHANGPOSS=no_change——0.6110—0.26012——2.35——0.0188
REC_ACT———————————————-0.2347—0.04579—-5.12——0.0000
PAT_ACT————————————————0.4372—0.04814——9.08——0.0000

The coefficients at the bottom are the same, but now you also get some
more information. There is a highly significant correlation between the
remaining three variables and the choice of construction: the model’s like-
lihood ratio chi-square is 160.01 (the difference between the two deviance
values from the glm output) with df = 3 (the difference between the df-
values of the two deviance values from the glm output) and p ≈ 0. Then,
there is a variety of classification accuracy measures C, Dxy, etc. These
measures answer the question of how good the model is at classifying the
chosen construction for each analyzed instance. C is a coefficient of con-
cordance, which can be considered good when it reaches or exceeds ap-
proximately 0.8, which it does here. Somers’ Dxy is a rank correlation be-
tween the predicted probabilities of the two constructions and the actually
observed constructions. Its value falls between 0 and 1 and since it follows
directly from C it is also good. Gamma, tau-a, and Brier are comparable
coefficients I will not discuss here, and the R2-value (here it is called Na-
gelkerke’s R2) and its general meaning as an indicator of correlational
strength you already know.
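Two of these figures are easy to verify yourself. The following sketch, which is not from the code file, recomputes the likelihood ratio chi-square and its p-value from the deviances stored in the glm object and adds a simple classification table based on a 0.5 cut-off; the latter is not one of the measures lrm reports, but it gives you a first feel for the classification accuracy:

> lr.chisq<-model.glm.5$null.deviance-model.glm.5$deviance—#—should—be—160.01¶
> pchisq(lr.chisq,—df=model.glm.5$df.null-model.glm.5$df.residual,—lower.tail=F)¶
> preds<-ifelse(predict(model.glm.5,—type="response")>0.5,—
   "prep_dative",—"ditransitive")¶
> table(CONSTRUCTION2,—preds)¶
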
As for the sizes of the effects, the more a coefficient in the above glm or
lrm output deviates from 0, the stronger – on the whole – the observed
effect. Why is that and what do the coefficients mean anyway? Put diffe-
rently, what do they try to predict? To understand that, we look at two
‘randomly’ chosen cases:

> DatAlt[c(1,—300),-c(1,—2,—5)]¶
————CONSTRUCTION2—V_CHANGPOSS—REC_ACT—PAT_ACT
1————ditransitive———no_change———————8———————0
313———prep_dative——————change———————3———————8

That means, for these two cases the regression equation returns the fol-
lowing predictions:

> c1<-sum(-1.0770,—1*0.6110,—8*-0.2347,—0*0.4372);—c1¶
[1]—-2.3436
> c313<-sum(-1.0770,—0*0.6110,—3*-0.2347,—8*0.4372);—c313¶
[1]—1.7165

These are obviously neither the values 0 or 1 and not even just values
between 0 and 1 – these are values between -∞ and +∞. The values are so-
called logits, i.e., the logarithms of the odds of the event to be predicted,
which is here – and this is very important! – the variable level coded with 1
or, in the case of a factor, the second level, i.e., here the prepositional da-
tive (recall from Section 4.1.2.2: the odds of an event E are defined as
pE/(1-pE)). However, you can compute what are called the inverse logits,
which lie between 0 and 1 and correspond to the probabilities of the pre-
dicted construction. These inverse logits are computed as follows:

> 1/(1+exp(-c1))—#—or—exp(c1)/(1+exp(c1))¶
[1]—0.087576
> 1/(1+exp(-c313))—#—or—exp(c313)/(1+exp(c313))¶
[1]—0.84768

For case #1, the model predicts only a probability of 8.7576% of a pre-
positional dative, given the behavior of the three independent variables
(which is good, because our data show that this was in fact a ditransitive).
For case #313, the model predicts a probability of 84.768% of a preposi-
tional dative (which is also good, because our data show that this was in-
deed a prepositional dative). These two examples are obviously cases of
firm and correct predictions, and the more the coefficient of a variable de-
viates from 0, the more this variable can potentially make the result of the
regression equation deviate from 0, too, and the larger the predictive power
of this variable. Look what happens when you compute the inverse logit of
0: the probability becomes 50%, which with two alternatives is of course
the probability with the lowest predictive power.

> 1/(1+exp(0))¶
[1]—0.5
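
Incidentally, you do not have to do these computations by hand: predict returns the logits and the predicted probabilities for all cases at once. For the two cases selected above with c(1, 300), the values should match the hand-computed ones up to rounding (a sketch):

> predict(model.glm.5,—type="link")[c(1,—300)]—————#—the—logits¶
> predict(model.glm.5,—type="response")[c(1,—300)]—#—the—probabilities¶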

Let’s return to the coefficients now. In this case, V_CHANGPOSS has a
strong effect and since its coefficient is positive, on the whole, when
V_CHANGPOSS is NO_CHANGE, then this increases the probability for
CONSTRUCTION: 1 (the prepositional dative). For example, the above case
of c313 is a data point with V_CHANGPOSS:CHANGE. If you changed case
313 into a hypothetical case 313’ with V_CHANGPOSS:NO_CHANGE and left
everything else as is, then the predicted probability of the prepositional
dative would increase from 84.768% to 91.113% (i.e., by 6.345%):

> c313.hyp<-sum(-1.0770,—1*0.6110,—3*-0.2347,—8*0.4372);—
c313.hyp¶
[1]—2.3275
> 1/(1+exp(-c313.hyp))¶
[1]—0.91113

Note: this does not mean that V_CHANGPOSS:NO_CHANGE uniformly increases the probability of CONSTRUCTION: 1 by 6.345%. How much does changing case 1 (which has V_CHANGPOSS:NO_CHANGE) to a hypothetical case 1’ with V_CHANGPOSS:CHANGE influence the probability of CONSTRUCTION: 1?

THINK
BREAK

> c1.hyp<-sum(-1.0770,—0*0.6110,—8*-0.2347,—0*0.4372);—
c1.hyp¶
[1]—-2.9546
> 1/(1+exp(-c1.hyp))¶
[1]—0.04952

The difference is only one of 3.8056%: the effects of the independent variables on the probability of the predicted prepositional dative are not
linear on the probability scale – they are linear on the logit scale as you see
when you compare c1-c1.hyp¶ to c313-c313.hyp¶.
Back to the other variables … As for PAT_ACT, high values of PAT_ACT
increase the probability of CONSTRUCTION: 1. On the other hand, REC_ACT
has a negative coefficient, which is why high values of REC_ACT decrease
the probability of CONSTRUCTION: 1. Lastly, note that you can obtain con-
fidence intervals for the coefficients by entering confint(model.glm.5)¶.
Sometimes you will find not the regression coefficients but the expo-
nentiated regression coefficients that tell you how each variable influences
the odds of the predicted outcome, the prepositional dative.

> exp(model.glm.5$coefficients)¶
—————————(Intercept)—V_CHANGPOSSno_change
———————————0.3406224————————————1.8423187
—————————————REC_ACT——————————————PAT_ACT
———————————0.7908433————————————1.5484279
> exp(confint(model.glm.5))¶
Waiting—for—profiling—to—be—done...
—————————————————————————2.5—%————97.5—%
(Intercept)——————————0.1774275—0.6372445
V_CHANGPOSSno_change—1.1100411—3.0844025
REC_ACT——————————————0.7213651—0.8635779
PAT_ACT——————————————1.4135976—1.7079128
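These odds ratios are nothing but exp() applied to the regression coefficients used in the hand computations above; a quick check with the rounded coefficients (the tiny discrepancies to the output above are due to that rounding):

> round(exp(c(-1.0770,—0.6110,—-0.2347,—0.4372)),—4)¶
[1]—0.3406—1.8423—0.7908—1.5484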

The overall logic of the interpretation of the directionality etc. remains the same. If PAT_ACT increases by one unit, this increases the odds of prepositional datives – the ratio p(prep. dative)/p(ditransitive) – by about 1.55. One nice way
of inspecting all main effects is available by plotting the lrm model object;
the argument fun=plogis instructs R to have the predicted probabilities
(rather than log odds) on the y-axes.

> par(mfrow=c(1,—3))¶
> plot(model.lrm.2,—fun=plogis,—ylim=c(0,—1),—
ylab="Probability—of—prepositional—dative")¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶

Figure 68. Partial effects for model.lrm

This basically confirms the results of the monofactorial exploration of the graphs at the beginning of this section, which is not really a big surprise
given that all interactions were filtered out during the model selection
process. Note, however, that the effects plotted here are not averages across
the whole data set, but each variable’s effect shown is computed with the
other variables set to a particular level (for the ratio-scaled variable, that is
the median); cf. the subtitles! If a variable participates in an interaction, it
can therefore happen that the plot’s result is at odds (no pun intended) with
the overall numerical result because the plotted effect is only true for the
variable level(s) for which the trend is computed, but not for others or in
general.
Instead of the above kind of model summary, you will also sometimes
find ANOVA tables reported for such analyses. There are two ways of
getting these, which here provide identical results. One is applying anova

to model.lrm, the other again requires the function Anova from the li-
brary(car) with sum contrasts (!); I only show the output of the latter:

> library(car)—#—for—the—function—Anova¶
> options(contrasts=c("contr.sum",—"contr.poly"))¶
> Anova(model.glm.5,—type="III",—test.statistic="Wald")¶
Anova—Table—(Type—III—tests)
Response:—CONSTRUCTION1
————————————Wald—Chisq—Df—Pr(>Chisq)
(Intercept)—————10.963——1——0.0009296—***
V_CHANGPOSS——————5.518——1——0.0188213—*
REC_ACT—————————26.264——1——2.977e-07—***
PAT_ACT—————————82.517——1——<—2.2e-16—***
> options(contrasts=c("contr.treatment",—"contr.poly"))¶

Here, where all variables are binary factors or ratio-scaled, we get the same p-values, but with categorical factors that have more than two levels, the ANOVA output provides one p-value for the whole factor (recall note 48 above). The Wald chi-square values correspond to the squared z-values of the glm output.
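If you want to verify that correspondence for the predictors yourself, you can square the z-values from the glm summary (a small sketch, assuming model.glm.5 from above; column 3 of the coefficient table contains the z-values):

> summary(model.glm.5)$coefficients[-1,—3]^2—#—compare—the—Wald—chi-squares—above¶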
Let us now briefly return to the interaction between REC_ACT and
V_CHANGPOSS. This interaction was deleted during the model selection
process, but I would still like to discuss it briefly for three reasons. First,
compared to many other deleted predictors, its p-value was still rather
small (around 0.12); second, an automatic model selection process with
step actually leaves it in the final model (although an anova test would
encourage you to delete it anyway); third, I would briefly like to mention
how you can look at such interactions (in a simple, graphical manner simi-
lar to that used for multiple regression). Let’s first look at the simple main
effect of V_CHANGPOSS in terms of the percentages of constructions:

> prop.table(table(CONSTRUCTION2,—V_CHANGPOSS),—2)¶
——————————————V_CHANGPOSS
CONSTRUCTION2—————change—no_change
——ditransitive—0.5555556—0.4109589
——prep_dative——0.4444444—0.5890411

On the whole – disregarding other variables, that is – this again tells you
V_CHANGPOSS:CHANGE increases the likelihood of ditransitives. But you
know that already and we now want to look at an interaction. As above,
one quick and dirty way to explore the interaction consists of splitting up
the ratio-scaled variable REC_ACT into groups esp. since its different values
are all rather equally frequent. Let us try this out and split up REC_ACT into
two groups (which can be done on the basis of the mean or the median,
which are here very close to each other anyway):

> REC_ACT.larger.than.its.mean<-REC_ACT>mean(REC_ACT)¶

(Note that by its very nature such dichotomization loses information, which is why you must be careful and not apply this prematurely; I will
mention below how you check the interaction without dichotomization,
too.) Then, you can generate a table that represents the relevant interaction:

> interact.table<-table(CONSTRUCTION2,—REC_ACT.larger.
than.its.mean,—V_CHANGPOSS);—interact.table¶
,—,—V_CHANGPOSS—=—change
——————————————————————REC_ACT.larger.than.its.mean
CONSTRUCTION2——————————FALSE—TRUE
——ditransitive————————————51———89
——prepositional—dative————67———45
,—,—V_CHANGPOSS—=—no_change
——————————————————————REC_ACT.larger.than.its.mean
CONSTRUCTION2——————————FALSE—TRUE
——ditransitive————————————18———42
——prepositional—dative————54———32

You get one table with two subtables: one for when change is implied
and one for where it is not. But ideally, we of course look at the construc-
tional percentages – not just their absolute values. You therefore generate
percentage tables for the two subtables of both levels of V_CHANGPOSS
(with prop.table and column percentages). Note how you use subsetting
with two commas (!) to access the first and the second sub-table: [,,1] and
[,,2].

> interact.table.1.perc<-prop.table(interact.table[,,1],
—2)—#—note—how—you—get—the—first—subtable:—[,,1]¶
> interact.table.2.perc<-prop.table(interact.table[,,2],—
2)—#—note—how—you—get—the—second—subtable:—[,,2]¶

Then you can simply do two bar plots. I only show the simplest possible
code here (but you will find the code for a more customized graph in the
code file):

> barplot(interact.table.1.perc,—beside=T,—legend=T)¶
> barplot(interact.table.2.perc,—beside=T,—legend=T)¶

Figure 69. Bar plots for CONSTRUCTION1~V_CHANGPOSS*REC_ACT

The graphs show what the (insignificant) interaction was about: for rather unfamiliar recipients (the left two bars in each panel), the preference for the prepositional dative is somewhat stronger when no change is implied (0.75; right panel) than when change is implied (0.568; left panel). For rather familiar recipients (the right two bars in each panel), the corresponding difference is smaller: 0.432 when no change is implied vs. 0.336 when change is implied. But then, this effect did not survive
the model selection process … (Cf. the code file for what happens when
you split up REC_ACT into three groups – after all, there is no particularly
good reason to assume only two groups – as well as further down in the

code file, Gelman and Hill 2007: Section 5.4, or Everitt and Hothorn 2006:
101–3 for how to plot interactions without grouping ratio-scaled variables.)
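If you want to try out the three-group version yourself, one possible way (only a sketch of one option, not necessarily what the code file does; the name REC_ACT.3.groups is ad hoc) is to use the tertiles of REC_ACT as cut-off points and then proceed as above:

> REC_ACT.3.groups<-cut(REC_ACT,—breaks=quantile(REC_ACT,—
probs=c(0,—1/3,—2/3,—1)),—include.lowest=TRUE)—#—assumes—distinct—tertiles¶
> table(CONSTRUCTION2,—REC_ACT.3.groups,—V_CHANGPOSS)¶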
Let us now turn to the assessment of the classification accuracy. The
classification scores in the lrm output already gave you an idea that the fit
of the model is pretty good, but it would be nice to assess that a little more
straightforwardly. One way to assess the classification accuracy is by com-
paring the observed constructions to the predicted ones. Thankfully, you
don’t have to compute every case’s constructional prediction like above.
You have seen above in Section 5.2 that the function predict returns pre-
dicted values for linear models, and the good news is that predict can also
be applied to objects produced by glm (or lrm). If you apply predict to the
object model.glm.5 without further arguments, you get the values follow-
ing from the regression equation (with slight differences due to rounding):

> predict(model.glm.5)[c(1,—313)]¶
——————1—————313
-2.3432——1.7170

However, we want easily interpretable probabilities. Thus, you add the argument type="response":

> predict(model.glm.5,—type="response")[c(1,—313)]¶
———————1——————313
0.087608—0.847739

The easiest way is to generate a vector classifications, which contains for each instance the predicted probability of the prepositional dative.

> classifications<-predict(model.glm.5,—type="response")¶

(You could also generate classifications with model.glm.5$fitted¶ or predict(model.lrm.2,—type="fitted")¶.) Then,
you can easily convert this into a categorical decision: since there are only
two constructional possibilities, you can simply recode classifications
into classifications.2 with “ditransitive” when the probability of a pre-
positional dative is smaller than 0.5 and as “prep_dative” when the proba-
bility of a prepositional dative is 0.5 or greater. Then, all you need to do is
crosstabulate these classifications with the actually observed constructions:

> classifications.2<-ifelse(classifications>=0.5,—
"prep_dative",—"ditransitive")¶
> evaluation<-table(classifications.2,—CONSTRUCTION2)¶

> addmargins(evaluation)¶
——————————————————————CONSTRUCTION2
classifications.2—ditransitive—prep_dative—Sum
——ditransitive—————————————155——————————46—201
——prep_dative———————————————45—————————152—197
——Sum——————————————————————200—————————198—398

The correct predictions are in the main diagonal of this table: 155+152 =
307 of the 398 constructional choices are correct, which corresponds to
77.14%:

> sum(diag(evaluation))/sum(evaluation)¶
[1]—0.7713568

It’s now also possible to look at the characteristics of the misclassified constructions. I leave that up to you for now, but we will return to this issue in the exercises.
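As a possible starting point, you could retrieve the misclassified cases and then inspect their variable values (just a sketch; the object name misclassified is ad hoc):

> misclassified<-which(classifications.2!=CONSTRUCTION2)¶
> head(DatAlt[misclassified,])—#—some—of—the—misclassified—cases¶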
Let us finally briefly look at model assumptions. Compared to, say, li-
near discriminant analysis, another method that can be used to predict no-
minal/categorical variables, binary logistic regression does not come with
very many distributional assumptions, and many textbooks do not discuss
any at all (but cf. Harrell 2001 and Faraway 2006: Section 6.4). For now,
we just assume that the data points are independent of each other (because,
for example, every constructional choice is from a different corpus file).
The criterion of overdispersion requires that you look at the ratio of the
residual deviance and the residual dfs in the glm output. The ratio of the
two should not be much larger than 1. In the present case, the ratio is 323.83/295 ≈ 1.1, which is a pretty good result. Several references just say that,
if you get a value that is much larger than 1, e.g. > 2, then you would run
the glm analysis again with the argument family=quasibinomial and then
take it from there. Baayen (2008: 199), quoting Harrell (2001: 231), re-
commends as a heuristic a chi-square test of the residual deviance at the
residual df, which here returns the desired insignificant result:

> pchisq(391.72,—394,—lower.tail=F)¶
[1]—0.5229725
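Instead of reading the residual deviance and the residual df off the summary output, you can also compute the overdispersion ratio directly from the model object (a minimal sketch, assuming model.glm.5 from above):

> deviance(model.glm.5)/df.residual(model.glm.5)—#—should—not—be—much—larger—than—1¶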

To sum up: “A binary logistic regression shows that there is a highly significant intermediately strong correlation between some of the indepen-
dent variables and the constructional choice: Log-likelihood ratio χ2 =
160.01; df = 1; p < 0.001. Nagelkerke’s R2 is 0.441, and the minimal ade-
quate model has a good classificatory power: C = 0.842, Dxy = 0.685, and

77.14% of the constructions are predicted correctly. The variable V_CHANGPOSS has a strong effect (odds ratio ≈ 1.84; 95%-CI: 1.11 and 3.08; p = 0.0188): change of possession prefers the ditransitive construction. The variable PAT_ACT also exhibits a significant correlation
with the constructional choice (odds ratio ≈ 1.55; 95%-CI: 1.41 and 1.71; p
< 0.001): the more the patient is known/given from the discourse context,
the more the prepositional dative is preferred. The final significant effect is
that of REC_ACT: the more the recipient is known/given, the more the di-
transitive is preferred (odds ratio ≈ 0.79; 95%-CI: 0.72 and 0.87; p <
0.0001); both latter effects are compatible with information-structural giv-
en-before-new analyses. [plus illustrate the effects with some graphs; cf.
the many examples in the code file]”

Recommendation(s) for further study


− just like in Section 5.2, it can also help interpreting the regression coef-
ficients when the input variables are centered
− the function polr from the library(MASS), to compute logistic regres-
sions with ordinal factors as dependent variable (lrm can do that too,
though)
− the function rpart from the library(rpart) to compute classification
and regression trees as an alternative to logistic regressions
− the function validate from the library(Design) to do a resampling
validation of a logistic regression computed with lrm
− Pampel (2000), Jaccard (2001), Crawley (2002: Ch. 30), Crawley (2005:
Ch. 16), Crawley (2007: Ch. 17), Faraway (2006: Ch. 2, 6), Maindonald
and Braun (2003: Ch. 8), Gelman and Hill (2007: Ch. 5), Baayen (2008:
Section 6.3), Johnson (2008: Section 5.4)

5. Hierarchical agglomerative cluster analysis

With the exception of maybe the HCFA we have so far only concerned
ourselves with methods in which independent and dependent variables
were clearly separated and where we already had at least an expectation
and a hypothesis prior to the data collection. Such methods are sometimes
referred to as hypothesis-testing statistics, and we used statistics and p-
values to decide whether or not to reject a null hypothesis. The method
called hierarchical agglomerative cluster analysis that we deal with in this
section is a so-called exploratory, or hypothesis-generating, method or,
more precisely, a family of methods. It is normally used to divide a set of

elements into clusters, or groups, such that the members of one group are
very similar to each other and at the same time very dissimilar to members
of other groups. An obvious reason to use cluster analyses to this end is that
this method can handle larger amounts of data and be at the same time
more objective than humans eyeballing huge tables.
To get a first understanding of what cluster analyses do, let us look at a
fictitious example of a cluster analysis based on similarity judgments of
English consonant phonemes. Let’s assume you wanted to determine how
English native speakers distinguish the following consonant phonemes: /b/,
/d/, /f/, /g/, /l/, /m/, /n/, /p/, /s/, /t/, and /v/. You asked 20 subjects to rate the
similarities of all (11·10)/2 = 55 pairs of consonants on a scale from 0 (‘com-
pletely different’) to 1 (‘completely identical’). As a result, you obtained 20
similarity ratings for each pair and could compute an average rating for
each pair. It would now be possible to compute a cluster analysis on the
basis of these average similarity judgments to determine (i) which conso-
nants and consonant groups the subjects appear to distinguish and (ii) how
these groups can perhaps be explained. Figure 70 shows the result that such
a cluster analysis might produce - how would you interpret it?

Figure 70. Fictitious results of a cluster analysis of English consonants

THINK
BREAK

The ‘result’ suggests that the subjects’ judgments were probably strong-
ly influenced by the consonants’ manner of articulation: on a very general
level, there are two clusters, one with /b/, /p/, /t/, /d/, and /g/, and one with

/l/, /n/, /m/, /v/, /f/, and /s/. It is immediately obvious that the first cluster
contains all and only all plosives (i.e., consonants whose production in-
volves a momentary complete obstruction of the airflow) that were in-
cluded whereas the second cluster contains all and only all nasals, liquids,
and fricatives (i.e., consonants whose production involves only a momenta-
ry partial obstruction of the airflow).
There is more to the results, however. The first of these two clusters has
a noteworthy internal structure of two ‘subclusters’. The first subcluster, as
it were, contains all and only all bilabial phonemes whereas the second
subcluster groups both alveolars together followed by a velar sound.
The second of the two big clusters also has some internal structure with
two subclusters. The first of these contains all and only all nasals and liq-
uids (i.e., phonemes that are sometimes classified as between clearcut vo-
wels and clearcut consonants), and again the phonemes with the same place
of articulation are grouped together first (the two alveolar sounds). The
same is true of the second subcluster, which contains all and only all frica-
tives and has the labiodental fricatives merged first.
The above comments were only concerned with which elements are
members of which clusters. Further attempts at interpretation may focus on
how many of the clusters in Figure 70 differ from each other strongly
enough to be considered clusters in their own right. Such discussion is
ideally based on follow-up tests which are too complex to be discussed
here, but as a quick and dirty heuristic you can look at the lengths of the
vertical lines in such a tree diagram, or dendrogram. Long vertical lines
indicate more autonomous subclusters. For example, the subcluster {/b/
/p/} is rather different from the remaining plosives since the vertical line
leading upwards from it to the merging with {{/t/ /d/} /g/} is rather long.50
Unfortunately, cluster analyses do not usually yield such a perfectly in-
terpretable output but such dendrograms are often surprisingly interesting
and revealing. Cluster analyses are often used in semantic, cognitive-
linguistic, psycholinguistic, and computational-linguistic studies (cf. Miller
1971, Sandra and Rice 1995, Rice 1996, and Manning and Schütze 1999:
Ch. 14 for some examples) and are often an ideal means to detect patterns
in large and seemingly noisy/chaotic data sets. You must realize, however,
that even if cluster analyses as such allow for an objective identification of
groups, the analyst must still make at least three potentially subjective deci-
sions. The first two of these influence how exactly the dendrogram will

50. For a similar but authentic example (based on data on vowel formants), cf. Kornai
(1998).

look like; the third you have already seen: one must decide what it is the
dendrogram reflects. In what follows, I will show you how to do such an
analysis with R yourself.
Hierarchical agglomerative cluster analyses typically involve the fol-
lowing steps:

Procedure
Tabulating the data
Computing a similarity/dissimilarity matrix on the basis of a user-defined
similarity/dissimilarity metric
Computing a cluster structure on the basis of a user-defined amalgamation
rule
Representing the cluster structure in a dendrogram and interpreting it

(There are many additional interesting post hoc tests, which we can un-
fortunately not discuss here.) The example we are going to discuss is from
the domain of corpus/computational linguistics. In both disciplines, the
degree of semantic similarity of two words is often approximated on the
basis of the number and frequency of shared collocates. A very loose definition of the ‘collocates of a word w’ is: the words that occur frequently in
w’s environment, where environment in turn is often defined as ‘in the
same sentence’ or within a 4- or 5-word window around w. For example: if
you find the word car in a text, then very often words such as driver, mo-
tor, gas, and/or accident are relatively nearby whereas words such as flour,
peace treaty, dictatorial, and cactus collection are probably not particularly
frequent. In other words, the more collocates two words x and y share, the
more likely there is a semantic relationship between the two (cf. Oakes
1998: Ch. 3, Manning and Schütze 2000: Section 14.1 and 15.2 as well as
Gries 2009 for how to obtain collocates in the first place).
In the present example, we look at the seven English words bronze,
gold, silver, bar, cafe, menu, and restaurant. Of course, I did not choose
these words at random – I chose them because they intuitively fall into two
clusters (and thus constitute a good test case). One cluster consists of three
co-hyponyms of the metal, the other consists of three co-hyponyms of ga-
stronomical establishment as well as a word from the same semantic field.
Let us assume you extracted from the British National Corpus (BNC) all
occurrences of these words and their content word collocates (i.e., nouns,
verbs, adjectives, and adverbs). For each collocate that occurred with at
least one of the seven words, you determined how often it occurred with
each of the seven words. Table 44 is a schematic representation of the first

six rows of such a table. The first collocate, here referred to as X, co-
occurred only with bar (three times); the second collocate, Y, co-occurred
11 times with gold and once with restaurant, etc.

Table 44. Schematic co-occurrence frequencies of seven English words in the BNC
Collocate bronze gold silver bar cafe menu restaurant
X 0 0 0 3 0 0 0
Y 0 11 0 0 0 0 1
Z 0 1 1 0 0 0 1
A 0 0 0 1 0 2 0
B 1 0 0 1 0 0 0
C 0 0 0 1 0 0 1
… … … … … … … …

We are now asking the question which words are more similar to each
other than to others. That is, just like in the example above, you want to
group elements – above, phonemes, here, words – on the basis of properties
– above, average similarity judgments, here, co-occurrence frequencies.
First you need a data set such as Table 44, which you can load from the file
<C:/_sflwr/_inputfiles/05-5_collocates.RData>, which contains a large
table of co-occurrence data – seven columns and approximately 31,000
rows.

> load(choose.files())—#—load—the—data—frame¶
> ls()—#—check—what—was—loaded¶
[1]—"collocates"
> attach(collocates)¶
> str(collocates)¶
'data.frame':———30936—obs.—of——7—variables:
—$—bronze————:—num——0—0—0—0—1—0—0—0—0—0—...
—$—gold——————:—num——0—11—1—0—0—0—0—1—0—0—...
—$—silver————:—num——0—0—1—0—0—0—0—0—0—0—...
—$—bar———————:—num——3—0—0—1—1—1—1—0—1—0—...
—$—cafe——————:—num——0—0—0—0—0—0—0—0—0—1—...
—$—menu——————:—num——0—0—0—2—0—0—0—0—0—0—...
—$—restaurant:—num——0—1—0—0—0—0—0—0—0—0—...

Alternatively, you could load those data with read.table(…) from the
file <C:/_sflwr/_inputfiles/05-5_collocates.txt>. If your data contain miss-
ing data, you should disregard those. There are no missing data, but the
function is still useful to know (cf. the recommendation at the end of Chap-
ter 2):

> collocates<-na.omit(collocates)¶

Next, you must generate a similarity/dissimilarity matrix for the seven words. Here, you have to make the first possibly subjective decision, decid-
ing on a similarity/dissimilarity measure. You need to consider two aspects:
the level of measurement of the variables in point and the definition of
similarity to be used. With regard to the former, we will only distinguish
between binary/nominal and ratio-scaled variables. I will discuss similari-
ty/dissimilarity measures for both kinds of variables, but will then focus on
ratio-scaled variables.
In the case of nominal variables, there are four possibilities how two
elements can be similar or dissimilar to each other, which are represented
in Table 45. On the basis of Table 45, the similarity of two elements is
typically quantified using formula (56), in which w1 and w2 are defined by
the analyst:

(56)    (a + w1 ⋅ d) / ((a + w1 ⋅ d) + (w2 ⋅ (b + c)))

Table 45. Feature combinations of two binary elements

                                              Element 2 exhibits    Element 2 does not
                                              characteristic x      exhibit characteristic x
Element 1 exhibits characteristic x                   a                      b
Element 1 does not exhibit characteristic x           c                      d

Three similarity measures are worth mentioning here:

− the Jaccard coefficient: w1 = 0 and w2 = 1;
− the Simple Matching coefficient: w1 = 1 and w2 = 1;
− the Dice coefficient: w1 = 0 and w2 = 0.5.

If you define the following three vectors, what are their pairwise simi-
larity coefficients?

> aa<-c(1,—1,—1,—1,—0,—0,—1,—0,—0,—0)¶
> bb<-c(1,—1,—0,—1,—0,—1,—0,—1,—0,—1)¶
> cc<-c(1,—0,—1,—1,—1,—1,—1,—1,—1,—0)¶

THINK
BREAK

− Jaccard coefficient: for aa and bb: 0.375, for aa and cc 0.444, for bb
and cc 0.4;
− Simple Matching coefficient: for aa and bb: 0.5, for aa and cc 0.5, for
bb and cc 0.4;
− Dice coefficient: for aa and bb: 0.545, for aa and cc 0.615, for bb and
cc 0.571.
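You can verify these values in R by counting the four feature combinations of Table 45 and plugging them into formula (56); a small sketch for aa and bb (the object names are ad hoc):

> both<-sum(aa==1—&—bb==1)—#—cell—a—in—Table—45¶
> only.aa<-sum(aa==1—&—bb==0)—#—cell—b¶
> only.bb<-sum(aa==0—&—bb==1)—#—cell—c¶
> neither<-sum(aa==0—&—bb==0)—#—cell—d¶
> both/(both+only.aa+only.bb)—#—Jaccard—coefficient¶
[1]—0.375
> (both+neither)/(both+neither+only.aa+only.bb)—#—Simple—Matching—coefficient¶
[1]—0.5
> both/(both+0.5*(only.aa+only.bb))—#—Dice—coefficient¶
[1]—0.5454545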

But when do you use which of the three? One rule of thumb is that
when the presence of a characteristic is as informative as its absence, then
you should use the Simple Matching coefficient, otherwise choose the Jac-
card coefficient or the Dice coefficient. The reason is that, as you can see in formula (56) and the measures’ definitions above, only the Simple Matching coefficient fully includes the cases where both elements exhibit or do not exhibit the characteristic in question.
For ratio-scaled variables, there are (many) other measures, not all of
which I can discuss here. I will focus on (i) a set of distance or dissimilarity
measures (i.e., measures where large values represent large degrees of dis-
similarity) and (ii) a set of similarity measures (i.e., measures where large
values represent large degrees of similarity). The distance measures are
again based on one formula and then differ in terms of parameter settings.
This basic formula is the so-called Minkowski metric represented in (57).

(57)    ( Σi=1…n |xqi − xri|^y )^(1/y)

When y is set to 2, you get the so-called Euclidean distance.51 If you in-
sert y = 2 into (57) to compute the Euclidean distance of the vectors aa and
bb, you obtain:

51. The Euclidean distance of two vectors of length n is the direct spatial distance between
two points within an n-dimensional space. This may sound complex, but for the simplest
case of a two-dimensional coordinate system this is merely the distance you would
measure with a ruler.

> sqrt(sum((aa-bb)^2))¶
[1]—2.236068

When y is set to 1, you get the so-called Manhattan or City-Block distance of the above vectors. For aa and bb, you obtain:

> sum(abs(aa-bb))¶
[1]—5

The similarity measures are correlational measures. One of these you know already: the Pearson product-moment correlation coefficient r. A
similar measure often used in computational linguistics is the cosine (cf.
Manning and Schütze 1999: 299–303). The cosine and all other measures
for ratio-scaled variables are available from the function Dist from the library amap.52 This function requires that (i) the data are available in the form of a
matrix or a data frame and that (ii) the elements whose similarities you
want are in the rows, not in the columns as usual. If the latter is not the
case, you can often just transpose a data structure (with t):

> library(amap)¶
> collocates.t<-t(collocates)¶

You can then apply the function Dist to the transposed data structure.
This function takes the following arguments:

− x: the matrix or the data frame for which you want your measures;
− method="euclidean" for the Euclidean distance; method="manhattan"
for the City-Block metric; method="correlation" for the product-
moment correlation r (but see below!); method="pearson" for the co-
sine (but see below!) (there are some more measures available which I
won’t discuss here);
− diag=F (the default) or diag=T, depending on whether the distance ma-
trix should contain its main diagonal or not;
− upper=F (the default) or upper=T, depending on whether the distance
matrix should contain only the lower left half or both halves.

Thus, if you want to generate a distance matrix based on Euclidean distances for our collocate dataset you simply enter this:

52. The function dist from the standard installation of R also allows you to compute sever-
al similarity/dissimilarity measures, but fewer than Dist from the library(amap).

> Dist(collocates.t,—method="euclidean",—diag=T,—upper=T)¶

As you can see, you get a (symmetric) distance matrix in which the dis-
tance of each word to itself is of course 0. This matrix now tells you which
word is most similar to which other word. For example, silver is most simi-
lar to cafe because the distance of silver to cafe (2385.566) is the smallest
distance that silver has to any word other than itself.
The following line computes a distance matrix based on the City-Block
metric:

> Dist(collocates.t,—method="manhattan",—diag=T,—upper=T)¶

To get a similarity matrix with product-moment correlations or cosines, you must compute the difference 1 minus the values in the matrix. To get a
similarity matrix with correlation coefficients, you therefore enter this:

> 1-Dist(collocates.t,—method="correlation",—diag=T,—
upper=T)¶
———————————bronze———gold—silver————bar———cafe———menu—restaurant
bronze—————0.0000—0.1342—0.1706—0.0537—0.0570—0.0462—————0.0531
gold———————0.1342—0.0000—0.3103—0.0565—0.0542—0.0458—————0.0522
silver—————0.1706—0.3103—0.0000—0.0642—0.0599—0.0511—————0.0578
bar————————0.0537—0.0565—0.0642—0.0000—0.1474—0.1197—————0.2254
cafe———————0.0570—0.0542—0.0599—0.1474—0.0000—0.0811—————0.1751
menu———————0.0462—0.0458—0.0511—0.1197—0.0811—0.0000—————0.1733
restaurant—0.0531—0.0522—0.0578—0.2254—0.1751—0.1733—————0.0000

You can check the results by comparing this output with the one you get
from cor(collocates)¶. For a similarity matrix with cosines, you enter:

> 1-Dist(collocates.t,—method="pearson",—diag=T,—upper=T)¶

There are also statistics programs that use 1-r as a distance measure.
They change the similarity measure r (values close to zero mean low simi-
larity) into a distance measure (values close to zero mean high similarity).
If you compare the matrix with Euclidean distances with the matrix with
r, you might notice something that strikes you as strange …

THINK
BREAK

In the distance matrix, small values indicate high similarity and the
smallest value in the column bronze is in the row for cafe (1734.509). In
the similarity matrix, large values indicate high similarity and the largest
value in the column bronze is in the row for silver (ca. 0.1706). How can
that be? This difference shows that even a cluster algorithmic approach is
influenced by subjective though hopefully motivated decisions. The choice
for a particular metric influences the results because there are different
ways in which vectors can be similar to each other.
Consider as an example the following data set, which is also represented
graphically in Figure 71.

> y1<-1:10;—y2<-11:20;—y3<-c(6,—6,—6,—5,—5,—5,—4,—4,—4,—3)¶
> y<-t(data.frame(y1,—y2,—y3))¶

The question is, how similar is y1 to y2 and to y3? There are two ob-
vious ways of considering similarity. On the one hand, y1 and y2 are per-
fectly parallel, but they are far away from each other (as much as one can
say that about a diagram whose dimensions are not defined). On the other
hand, y1 and y3 are not parallel to each other at all, but they are close to
each other. The two approaches I discussed above are based on these dif-
ferent perspectives. The distance measures I mentioned (such as the Eucli-
dean distance) are based on the spatial distance between vectors, which is
small between y1 and y3 but large between y1 and y2. The similarity
measures I discussed (such as the cosine) are based on the similarity of the
curvature of the vectors, which is small between y1 and y3, but large be-
tween y1 and y2. You can see this quickly from the actual numerical val-
ues:

> Dist(y,—method="euclidean",—diag=T,—upper=T)¶
—————————y1———————y2———————y3
y1——0.00000—31.62278—12.28821
y2—31.62278——0.00000—35.93049
y3—12.28821—35.93049——0.00000
> 1-Dist(y,—method="pearson",—diag=T,—upper=T)¶
——————————y1————————y2————————y3
y1—0.0000000—0.9559123—0.7796728
y2—0.9559123—0.0000000—0.9284325
y3—0.7796728—0.9284325—0.0000000

Figure 71. Three fictitious vectors

According to the Euclidean distance, y1 is more similar to y3 than to y2 – 12.288 < 31.623 – but the reverse is true for the cosine: y1 is more simi-
lar to y2 – 0.956 > 0.78. The two measures are based on different concepts
of similarity. The analyst must decide what is more relevant on a case-by-
case basis: low spatial distances or similar curvatures. For now, we assume
you want to adopt a curvature-based approach and use 1-r as a measure; in
your own studies, you of course must state which similarity/distance meas-
ure you used, too.53

> dist.matrix<-Dist(collocates.t,—method="correlation",—diag=T,—
upper=T)¶
> round(dist.matrix,—4)¶
———————————bronze———gold—silver————bar———cafe———menu—restaurant
bronze—————0.0000—0.8658—0.8294—0.9463—0.9430—0.9538—————0.9469
gold———————0.8658—0.0000—0.6897—0.9435—0.9458—0.9542—————0.9478
silver—————0.8294—0.6897—0.0000—0.9358—0.9401—0.9489—————0.9422
bar————————0.9463—0.9435—0.9358—0.0000—0.8526—0.8803—————0.7746
cafe———————0.9430—0.9458—0.9401—0.8526—0.0000—0.9189—————0.8249
menu———————0.9538—0.9542—0.9489—0.8803—0.9189—0.0000—————0.8267
restaurant—0.9469—0.9478—0.9422—0.7746—0.8249—0.8267—————0.0000

53. I am simplifying a lot here: the frequencies are neither normalized nor logged/dampened
etc. (cf. above, Manning and Schütze 1999: Section 15.2.2, or Jurafsky and Martin
2008: Ch. 20).

The next step is to compute a cluster structure from this similarity ma-
trix. You do this with the function hclust, which can take up to three ar-
guments of which I will discuss two. The first is a similarity or distance
matrix, and the second chooses an amalgamation rule that defines how the
elements in the distance matrix get merged into clusters. This choice is the
second potentially subjective decision and there are again several possibili-
ties.
The choice method="single" uses the so-called single-linkage- or
nearest-neighbor method. In this method, the similarity of elements x and y
– where x and y may be elements such as individual consonants or subclus-
ters such as {/b/, /p/} in Figure 70 – is defined as the minimal distance be-
tween any one element of x and any one element of y. In the present exam-
ple this means that in the first amalgamation step gold and silver would be
merged since their distance is the smallest in the whole matrix (1-r =
0.6897). Then, bar gets joined with restaurant (1-r = 0.7746). Then, and
now comes the interesting part, {bar restaurant} gets joined with cafe be-
cause the smallest remaining distance is that which restaurant exhibits to
cafe: 1-r = 0.8249. And so on. This amalgamation method is good at identi-
fying outliers in data, but tends to produce long chains of clusters and is,
therefore, often not particularly discriminatory.
The choice method="complete" uses the so-called complete-linkage- or
furthest-neighbor method. Contrary to the single-linkage method, here the
similarity of x and y is defined as the maximal distance between any one element of x and any one element of y. First, gold and silver
are joined as before, then bar and restaurant. In the third step, {bar restau-
rant} gets joined with cafe, but the difference to the single linkage method
is that the distance between the two is now 0.8526, not 0.8249, because this
time the algorithm considers the maximal distances, of which the smallest
is chosen for joining. This approach tends to form smaller homogeneous
groups and is a good method if you suspect there are many smaller groups
in your data.
Finally, the choice method="ward" uses a method whose logic is similar
to that of ANOVAs because it joins those elements whose joining increases
the error sum of squares least (which cannot be explained on the basis of
the above distance matrix). For every possible amalgamation, the method
computes the sums of squared differences/deviations from the mean of the
potential cluster, and then the clustering with the smallest sum of squared
deviations is chosen. This method is known to generate smaller clusters
that are often similar in size and has proven to be quite useful in many ap-
plications. We will use it here, too, and again in your own studies, you

must explicitly state which amalgamation rule you used. Now you can
compute the cluster structure and plot it.

> clust.ana<-hclust(dist.matrix,—method="ward")¶
> plot(clust.ana)¶
> rect.hclust(clust.ana,—2)—#—red—boxes—around—clusters¶

Figure 72. Dendrogram of seven English words

This is an uncharacteristically clearly interpretable result. As one would have hoped for, the seven words fall exactly into the two main expected clusters: one with the ‘metals’ and one with the gastronomy-related words. The former has a substructure in which bronze is somewhat less similar to the other two metals, and the latter has very little substructure but groups the three co-hyponyms together before menu is added. With the following line
you can have R show you for each element which cluster it belongs to
when you assume two clusters.

> cutree(clust.ana,—2)¶
bronze—————gold—————silver—————bar————cafe————menu—restaurant
—————1————————1——————————1———————2———————2———————2——————————2
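One last quick check that can be instructive: how much does the choice of the amalgamation rule matter for these data? A small sketch that also plots the dendrograms for the two other amalgamation methods discussed above (assuming dist.matrix from above):

> par(mfrow=c(1,—2))¶
> plot(hclust(dist.matrix,—method="single"),—main="single—linkage")¶
> plot(hclust(dist.matrix,—method="complete"),—main="complete—linkage")¶
> par(mfrow=c(1,—1))—#—restore—the—standard—plotting—setting¶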

Now you should do the exercises for Chapter 5 …

Recommendation(s) for further study


− the function daisy (from the library(cluster)) to compute distance
matrices for dataset with variables from different levels of measurement
− the function kmeans to do cluster analyses where you provide the num-
ber of clusters beforehand

− the function pvclust (from the library(pvclust)) to obtain p-values for clusters based on resampling methods; cf. also pvrect and pvpick
(from the same library)
− the function cluster.stats (from the library(fpc)), to facilitate the
interpretation and validation of cluster analyses
− Mclust (from the library(mclust)) to determine the number of clus-
ters using model-based clustering
− the function varclus (from the library(Hmisc)) to do variable cluster-
ing
− the function nj (from the library(ape)) to perform neighbor clustering
and phylogenetic cluster analyses
− Crawley (2007: Ch. 23), Baayen (2008: Ch. 5), Johnson (2008: Ch. 6)
Chapter 6
Epilog

Now that you have nearly made it through the whole book, let me give you
a little food for further thought and some additional ideas on the way. Iron-
ically, some of these will probably shake up a bit what you have learnt so
far, but I hope they will also stimulate some curiosity for what else is out
there to discover and explore.
One thing to point out again here is that especially the sections on (ge-
neralized) linear models (ANOVAs and regressions) are very short. For
example, we have not talked about count/Poisson regressions. Also, we
skipped the issue of repeated measures: we did make a difference between
a t-test for independent samples and a t-test for dependent samples, but
have not done the same for ANOVAs. We have not dealt with the differ-
ence between fixed effects and random effects. Methods such as mixed-
effects / multi-level models, which can handle such issues in fascinating
ways, are currently hot in linguistics and I pointed out some references for
further study above.
Another interesting topic to pursue is that of (cross) validation. Very of-
ten, results can be validated by splitting up the existing sample into two or
more parts and then applying the relevant statistical methods to these parts to
determine whether you obtain comparable results. Or, you could apply a
regression to one half of a sample and then check how well the regression
coefficients work when applied to the other half. Such methods can reveal a
lot about the internal structure of a data set and there are several functions
available in R for these methods. A related point is that, given the ever
increasing power of computers, resampling and permutation approaches
become more and more popular; examples include the bootstrap, the jack-
knife procedure, or exhaustive permutation procedures. These procedures
are non-parametric methods you can use to estimate means, variances, but
also correlations or regression parameters without major distributional as-
sumptions. Such methods are not the solution to all statistical problems, but
can still be interesting and powerful tools (cf. the libraries boot as well as
bootstrap).
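To give you an idea of what a simple split-half validation of a regression could look like, here is a very small sketch; DATA, DEP, and INDEP are made-up names for a data frame, its dependent variable, and a predictor, not objects from this book:

> half<-sample(nrow(DATA),—floor(nrow(DATA)/2))—#—a—random—half—of—the—cases¶
> model.half<-lm(DEP~INDEP,—data=DATA[half,])—#—fit—the—model—on—that—half¶
> predictions<-predict(model.half,—newdata=DATA[-half,])—#—predict—the—other—half¶
> cor(DATA$DEP[-half],—predictions)—#—how—well—do—the—predictions—fit?¶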

Recommendation(s) for further study


Good (2005), Rizzo (2008: Ch. 7, 8)

It is also worth pointing out that R has many many more possibilities of
graphical representation than I could mention here. I only used the tradi-
tional graphics system, but there are other more powerful tools, which are
available from the libraries lattice and ggplot. The website
<http://addictedtor.free.fr/graphiques/> provides many very interesting and
impressive examples for R plots, and several good books illustrate many of
the exciting possibilities for exploration (cf. Unwin, Theus, and Hofmann
2006, Cook and Swayne 2007, and Sarkar 2008)
Finally, note that the null hypothesis testing paradigm that is underlying
most of the methods discussed here is not as uncontroversial as this text-
book (and most others) may make you believe. While the computation of p-
values is certainly still the standard approach, there are researchers who
argue for a different perspective. Some of these argue that p-values are
problematic because they do in fact not represent the conditional probabili-
ty that one is really interested in. Recall, the above p-values answer the
question “How likely is it to get the observed data when H0 is true?”, but what one actually wants to know is “How likely is H1 given the data I have?”
Suggestions for improvement include:

− one should focus not on p-values but on effect sizes and/or confidence
intervals (which is why I mentioned these above again and again);
− one should report so-called prep-values, which according to Killeen
(2005) provide the probability to replicate an observed effect (but are
not uncontroversial themselves);
− one should test reasonable null hypotheses rather than hypotheses that
could never be true in the first place (there will always be some effect or
difference).

In addition, some argue against different significance steps of the kind “significant” vs. “very significant” vs. “highly significant” by saying that
either a result is significant or not but that, once the null hypothesis gets
rejected, no further distinctions are necessary. I find it hard to get excited
about this latter kind of debate. Many R functions provide different num-
bers of asterisks so that one can use them, and as long as one is aware that
“very significant” is part of a particular conventionalized terminology, I see
no reason to make this a bone of contention.
Another interesting approach is the so-called Bayesian approach to sta-
tistics, which allows you to include subjective prior knowledge or previous
results with one’s own data. All of these things are worth exploring.

Recommendation(s) for further study


− Cohen (1994), Loftus (1996), and Denis (2003) for discussion of the
null hypothesis testing paradigm
− Killeen (2005) on prep-values
− Iversen (1984) on Bayes statistics

I hope you can use the techniques covered in this book for many differ-
ent questions, and if this little epilog also makes you try and extend
your knowledge and familiarize yourself with additional tools and methods
– for example, there are many great web resources, one of my favorites is
<http://www.statmethods.net/index.html> – then this book has achieved
one of its main objectives.
References

Agresti, Alan
2002 Categorical Data Analysis. 2nd ed. Hoboken, NJ: John Wiley and
Sons.
Anscombe, Francis J.
1973 Graphs in statistical analysis. American Statistician 27: 17–21.
Baayen, R. Harald
2008 Analyzing Linguistic Data: A Practical Introduction to Statistics Us-
ing R. Cambridge: Cambridge University Press.
Backhaus, Klaus, Bernd Erichson, Wulff Plinke, and Rolf Weiber
2003. Multivariate Analysemethoden: eine anwendungsorientierte
Einführung. 10th ed. Berlin: Springer.
Bencini, Giulia, and Adele E. Goldberg
2000 The contribution of argument structure constructions to sentence
meaning. Journal of Memory and Language 43 (3): 640–651.
Berez, Andrea L., and Stefan Th. Gries
2010 Correlates to middle marking in Dena’ina iterative verbs. Internation-
al Journal of American Linguistics.
Bortz, Jürgen
2005 Statistik für Human- und Sozialwissenschaftler. 6th ed. Heidelberg:
Springer Medizin Verlag.
Bortz, Jürgen, and Nicola Döring
1995 Forschungsmethoden und Evaluation. 2nd ed. Berlin, Heidelberg,
New York: Springer.
Bortz, Jürgen, Gustav A. Lienert, and Klaus Boehnke
1990 Verteilungsfreie Methoden in der Biostatistik. Berlin, Heidelberg,
New York: Springer.
Braun, W. John, and Duncan J. Murdoch
2008 A First Course in Statistical Programming with R. Cambridge: Cam-
bridge University Press.
Brew, Chris, and David McKelvie
1996 Word-pair extraction for lexicography. In Proceedings of the 2nd In-
ternational Conference on New Methods in Language Processing,
Kemal O. Oflazer and Harold Somers (eds.), 45–55. Ankara: Bilkent
University.
Chambers, John M.
2008 Software for Data Analysis: Programmming with R. New York:
Springer.

Chen, Ping
1986 Discourse and Particle Movement in English. Studies in Language 10
(1): 79–95.
Clauß, Günter, Falk Rüdiger Finze, and Lothar Partzsch
1995 Statistik für Soziologen, Pädagogen, Psychologen und Mediziner. Vol.
1. 2nd ed. Thun: Verlag Harri Deutsch
Cohen, Jacob
1994 The earth is round (p < 0.05). American Psychologist 49 (12): 997–
1003.
Cook, Dianne, and Deborah F. Swayne
2007 Interactive and Dynamic Graphics for Data Analysis. New York:
Springer.
Cowart, Wayne
1997 Experimental Syntax: Applying Objective Methods to Sentence Judg-
ments. Thousand Oaks, CA: Sage.
Crawley, Michael J.
2002 Statistical Computing: An Introduction to Data Analysis using S-Plus.
Chichester: John Wiley.
Crawley, Michael J.
2005 Statistics: An Introduction Using R. Chichester: John Wiley.
Crawley, Michael J.
2007 The R Book. Chichester: John Wiley.
Dalgaard, Peter
2002 Introductory Statistics with R. New York: Springer.
Denis, Daniel J.
2003 Alternatives to Null Hypothesis Significance Testing. Theory and
Science 4.1. URL <http://theoryandscience.icaap.org/content/vol4.1/
02_denis.html>
Divjak, Dagmar S., and Stefan Th. Gries
2006 Ways of trying in Russian: clustering behavioral profiles. Corpus
Linguistics and Linguistic Theory 2 (1): 23–60.
Divjak, Dagmar S., and Stefan Th. Gries
2008 Clusters in the mind? Converging evidence from near synonymy in
Russian. The Mental Lexicon 3 (2):188–213.
Everitt, Brian S., and Torsten Hothorn
2006 A handbook of statistical analyses using R. Boca Raton, FL: Chapman
and Hall/CRC.
von Eye, Alexander
2002 Configural frequency analysis: methods, models, and applications.
Mahwah, NJ: Lawrence Erlbaum.
Faraway, Julian J.
2005 Linear models with R. Boca Raton: Chapman and Hall/CRC.

Faraway, Julian J.
2006 Extending the Linear Model with R: Generalized Linear, Mixed Ef-
fects and Nonparametric Regression models. Boca Raton: Chapman
and Hall/CRC.
Frankenberg-Garcia, Ana
2004 Are translations longer than source texts? A corpus-based study of
explicitation. Paper presented at Third International CULT (Corpus
Use and Learning to Translate) Conference, Barcelona, 22–24. Januar
2004.
Fraser, Bruce
1966 Some remarks on the VPC in English. In Problems in Semantics,
History of Linguistics, Linguistics and English, Francis P. Dinneen
(ed.), p. 45–61. Washington, DC: Georgetown University Press.
Gaudio, Rudolf P.
1994 Sounding gay: pitch properties in the speech of gay and straight men.
American Speech 69 (1): 30–57.
Gelman, Andrew, and Jennifer Hill
2007 Data Analysis Using Regression and Multilevel/Hierarchical Models.
Cambridge: Cambridge University Press.
Gentleman, Robert
2009 R Programming for Bioinformatics. Boca Raton, FL: Chapman and
Hall/CRC.
Good, Philip I.
2005 Introduction to Statistics through Resampling Methods and R/S-Plus.
Hoboken, NJ: John Wiley and Sons.
Good, Philip I., and James W. Hardin
2006 Common Errors in Statistics (and How to Avoid Them). 2nd ed. Ho-
boken, NJ: John Wiley and Sons.
Gries, Stefan Th.
2003a Multifactorial Analysis in Corpus Linguistics: A Study of Particle
Placement. London, New York: Continuum.
Gries, Stefan Th.
2003b Towards a corpus-based identification of prototypical instances of
constructions. Annual Review of Cognitive Linguistics 1: 181–200.
Gries, Stefan Th.
2006 Cognitive determinants of subtractive word-formation processes: a
corpus-based perspective. Cognitive Linguistics 17 (4): 535–558.
Gries, Stefan Th.
2009 Quantitative Corpus Linguistics with R: A Practical Introduction.
London, New York: Taylor and Francis.
Gries, Stefan Th.
forthc. Frequency tables: tests, effect sizes, and explorations.

Gries, Stefan Th., and Stefanie Wulff


2005 Do foreign language learners also have constructions? Evidence from
priming, sorting, and corpora. Annual Review of Cognitive Linguistics
3: 182–200.
Harrell, Frank E. Jr.
2001 Regression Modeling Strategies. With Applications to Linear Models,
Logistic Regression, and Survival Analysis. New York: Springer.
Hawkins, John A.
1994 A Performance Theory of Order and Constituency. Cambridge: Cam-
bridge University Press.
Iversen, Gudmund R.
1984 Bayesian Statistical Inference. Beverly Hills, CA: Sage.
Jaeger, T. Florian
2008 Categorical data analysis: away from ANOVAs (transformation or
not) and towards logit mixed models. Journal of Memory and Lan-
guage 59 (4): 434–446
Jaccard, James
2001 Interaction Effects in Logistic Regression. Thousand Oaks, CA: Sage.
Johnson, Keith
2008 Quantitative Methods in Linguistics. Malden, MA: Blackwell.
Jurafsky, Daniel, and James H. Martin
2008 Speech and Language Processing. 2nd ed.. Upper Saddle River, NJ:
Pearson Prentice Hall.
Killeen, Peter R.
2005 An alternative to null-hypothesis significance tests. Psychological
Science 16 (5): 345–353.
Kornai, Andras
1998 Analytic models in phonology. In The Organization of Phonology:
Constraints, Levels and Representations, Jaques Durand and Bernard
Laks (eds.), 395–418. Oxford: Oxford University Press.
Krauth, Joachim
1993 Einführung in die Konfigurationsfrequenzanalyse. Weinheim: Beltz.
Kučera, Henry, and W. Nelson Francis
1967 Computational analysis of Present-Day American English. Provi-
dence, RI: Brown University Press.
Lautsch, Erwin, and Stefan von Weber
1995 Methoden und Anwendungen der Konfigurationsfrequenzanalyse.
Weinheim: Beltz.
Ligges, Uwe
2005 Programmieren mit R. Berlin, Heidelberg, New York: Springer.
Loftus, Geoffrey R.
1996 Psychology will be a much better science when we change the way we
analyze data. Current Directions in Psychological Science 5 (6): 161–
171.

Maindonald, W. John, and John Braun


2003 Data Analysis and Graphics Using R: An Example-based Approach.
Cambridge: Cambridge University Press.
Manning, Christopher D., and Hinrich K. Schütze
2000 Foundations of Statistical Natural Language Processing. Cambridge,
MA: The MIT Press.
Marascuilo, Leonard A., and Maryellen McSweeney
1977 Nonparametric and Distribution-free Methods for the Social Sciences.
Monterey, CA: Brooks/Cole.
Matt, Georg E., and Thomas D. Cook
1994 Threats to the validity of research synthesis. In The Handbook of Re-
search Synthesis, H. Cooper and L.V. Hedges (eds.), 503–520. New
York: Russell Sage Foundation.
Miller, George A.
1971 Empirical methods in the study of semantics. In Semantics: An Inter-
disciplinary Reader, Danny D. Steinberg and Leon A. Jakobovits
(eds.), 569–585. London, New York: Cambridge University Press.
Murrell, Paul
2005 R graphics. Boca Raton, FL: Chapman and Hall/CRC.
Nagata, Hiroshi
1987 Long-term effect of repetition on judgments of grammaticality. Per-
ceptual and Motor Skills 65 (5): 295–299.
Nagata, Hiroshi
1989 Effect of repetition on grammaticality judgments under objective and
subjective self-awareness conditions. Journal of Psycholinguistic Re-
search 18 (3): 255–269.
Oakes, Michael P.
1998 Statistics for Corpus Linguistics. Edinburgh: Edinburgh University
Press.
Pampel, Fred C.
2000 Logistic Regression: A Primer. Thousand Oaks, CA: Sage.
Peters, Julia
2001 Given vs. new information influencing constituent ordering in the
VPC. In LACUS Forum XXVII: Speaking and Comprehending, Ruth
Brend, Alan K. Melby, and Arle Lommel (eds.), 133–140. Fullerton,
CA: LACUS.
Rice, Sally
1996 Prepositional prototypes. In The Construal of Space in Language and
Thought, Martin Pütz and René Dirven (eds.), 35–65, Berlin, New
York: Mouton de Gruyter.
Rizzo, Maria L.
2008 Statistical Computing with R. Boca Raton, FL: Chapman and
Hall/CRC.

Sandra, Dominiek, and Sally Rice


1995 Network analyses of prepositional meaning: Mirroring whose mind –
the linguist’s or the language user’s? Cognitive Linguistics 6 (1): 89–
130.
Sarkar, Deepayan
2008 Lattice: Multivariate Data Visualization with R. New York: Springer.
Shirai, Yasuhiro, and Roger W. Andersen
1995 The acquisition of tense-aspect morphology: A prototype account.
Language 71 (4): 743–762.
Spector, Phil
2008 Data Manipulation with R. New York: Springer.
Spencer, Nancy J.
1973 Differences between linguists and nonlinguists in intuitions of gram-
maticality-acceptability. Journal of Psycholinguistic Research 2 (2):
83–98.
Steinberg, Danny D.
1993 An Introduction to Psycholinguistics. London: Longman.
Stoll, Sabine, and Stefan Th. Gries
forthc. How to characterize development in corpora: an association strength
approach. Journal of Child Language 36 (5): 1075-1090.
Unwin, Anthony, Martin Theus, and Heike Hofmann
2006 Graphics of Large Datasets: Visualizing a Million. New York: Sprin-
ger.
Van Dongen, W. A. Sr.
1919 He Puts on His Hat & He Puts His Hat on. Neophilologus 4: 322–353.
Wright, Daniel B., and Kamala London
2009 Modern Regression Techniques Using R. Los Angeles, London: Sage.
Zar, Jerrold H.
1999 Biostatistical Analysis. 4th ed. Upper Saddle River, NJ: Prentice Hall.
Function index

%in%, 79
?/help, 66, 104
abline, 143, 163, 276
abs, 82, 115, 162, 196, 209, 211, 217, 223, 229, 235, 313
add, 100
addmargins, 128, 305
ade4, 146, 177
adonis, 291
after, 68
all, 64, 82–83, 86, 90, 138, 241, 246, 254, 276, 284, 293
all.effects, 291
alternative, 165, 198, 203, 210, 217, 224, 233
amap, 59, 313
anova, 261–263, 273, 278, 283, 294–295, 300–301
Anova, 288, 301
any, 82
aov, 279–281, 283, 287, 289, 291
ape, 319
append, 68, 71, 74, 89
apply, 232, 256, 258
arm, 274
arrows, 285
as.character, 71
as.numeric, 71
as.vector, 84
assoc, 252
assocplot, 176, 178–179, 294
assocstats, 177
attach, 91–92, 95–96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 235, 246, 258, 276, 285, 294, 310
axes, 256

barplot, 102–103, 153, 302
bartlett.test, 277, 286
basicStats, 117
beside, 302
binom.test, 158
boot, 320
bootstrap, 320
boxplot, 118, 133, 137, 189, 207, 214, 219, 227, 256–257, 276
bpplot, 119
breaks, 106, 115, 131
by, 70
byrow, 155, 178, 193, 215

c, 60, 64, 68–71, 75–76, 79–82, 85–86, 98, 103–104, 107, 109, 111–112, 114, 118, 153, 160, 178–179, 193, 197, 220, 256, 270, 281, 285, 289, 297, 300, 304, 311, 315
ca, 252
calc.relimp, 273
car, 59, 106, 270, 284, 288, 301
cat, 74, 84
cbind, 95
cdplot, 294
ceiling, 75
center, 122
cex, 104, 264, 276
chisq.test, 156–157, 172–173, 177, 179, 241, 243
choose.files, 73–74, 88–90, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 235, 241, 246, 254, 276, 284, 293, 310
ckappa, 237
cluster, 318
cluster.stats, 319
coef, 268
coefplot, 273
coin, 204
col, 99, 102–106, 163
col.h, 164
col.names, 89
col.v, 164
colClasses, 95
collin.fnc, 273
colnames, 178
colSums, 128
combinat, 212
combn, 212
comment.char, 88–90, 96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 246, 254, 276, 285, 293
complete.cases, 95, 258, 294
conf.level, 125, 204, 210, 225
confint, 265–266, 299
contrasts, 258, 278, 287, 295, 301
cor, 141, 145–146, 217, 232–233, 235, 255, 265, 314
cor.test, 211, 233, 236
corpora, 61
correct, 157, 173, 179, 183, 203–204, 225, 241, 243
cov, 140–141
cronbach, 237
CrossTable, 177
cumsum, 97, 162, 163
cut, 115
cutree, 318

dagTest, 151
daisy, 318
data, 259, 272
data.frame, 86–87, 315
datadist, 296
dbinom, 41–42, 158, 202–203
dec, 72, 88
decreasing, 81–82, 93
default, 73, 127
density, 106
Design, 59, 273, 296, 306
detach, 91
df, 125, 137, 155
df1, 191
df2, 191
diag, 305, 313–316
diff, 113
dim, 94–95, 174
dir, 73
dist, 313
Dist, 313–316
dmultinom, 158
do.points, 164
dotchart, 106, 177

e1071, 117
each, 70, 215
ecdf, 164
effects, 291
eol, 89
exact, 203–204, 225, 230
exactRankTests, 204
exclude, 77
exp, 110, 297, 298–299

F, 43, 66, 76–77, 79, 81, 123, 126, 155, 157–158, 172–173, 179–180, 183, 187, 191–193, 197–198, 203–204, 209–210, 215, 224–225, 230, 232–233, 236, 241, 243, 245–246, 305, 313
factor, 83–85, 87
family, 294–295, 305
fastbw, 273
fBasics, 117, 151
file, 72–74, 76, 84, 88
file.choose, 73, 96
fisher.test, 177
fitted, 264
fivenum, 118
fix, 93
fligner.test, 194, 221
floor, 75
font, 104
formals, 66, 157
formula, 272
fpc, 319
freq, 105, 106
from, 70
ftable, 129, 247
full.names, 73
fun, 299–300
function, 113

ggplot, 273, 321
gl, 85
glm, 294–297, 301, 304–305
gmodels, 177
gplots, 134
grid, 99, 101, 135, 139, 143, 163, 207, 264, 276
gvlma, 273, 282, 291

h, 98
hatvalues, 271
help/?, 66, 104
hclust, 317–318
hdr.boxplot, 119
hdrcde, 119
head, 227, 264
header, 88–90, 96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 235, 241, 246, 254, 276, 284, 293
hist, 105, 106, 131, 150, 160, 201, 220
Hmisc, 59, 119, 319

I, 268
ifelse, 113, 211, 304
influence, 273
influence.measures, 273
install.packages, 58, 59
interaction.plot, 135, 138, 285
intersect, 80
IQR, 119, 133, 201, 222, 229
is.factor, 83
is.na, 95
is.nan, 95
is.vector, 67, 78

jarqueberaTest, 151
jitter, 160, 219, 227

kmeans, 318
kruskal.test, 283
ks.test, 151, 164, 165

labels, 102–104, 153, 163, 255–256, 264, 276
languageR, 273
lattice, 59, 321
leaps, 273
legend, 302
length, 67, 78, 97, 107, 109, 112–113, 116, 120, 124, 133, 140, 154, 187, 196, 209, 211, 215, 223, 232, 235
level, 265
levels, 83
library, 61
lines, 100, 106, 163–164
list, 64, 83, 86, 90, 134, 137–138, 142, 241, 246, 254, 264, 266–267, 269, 276, 284, 293
lkappa, 237
lm, 141, 233, 259, 268, 272, 278, 287, 294–295
lm.influence, 273
load, 310
log, 110, 113, 255
loglin, 252
loglm, 252
lower.tail, 43, 45–47, 125–126, 137, 155–156, 158, 172, 180, 183, 187, 191–193, 197–198, 209–210, 215–216, 224, 232–233, 235–236, 243, 245–246, 305
lrm, 296–297, 299, 304, 306
ls, 64, 83, 86, 90, 138, 241, 246, 254, 276, 284, 293, 310
lty, 163, 276

mad, 117
main, 105–106, 150, 160, 164, 220, 256
margin, 128, 162
MASS, 59, 252, 273, 274, 306
match, 79
matrix, 155, 178, 193, 215
max, 101, 107, 113, 162
mclust, 319
Mclust, 319
mcnemar.test, 183
mean, 60, 107–111, 115–117, 120–122, 125, 132–134, 137, 140, 151, 196, 206–207, 209, 213, 215, 219–220, 227, 269, 276, 285, 302
median, 107–108, 133, 201, 222, 229
merge, 95
method, 106, 141, 145–146, 160, 232, 235–236, 255, 313–317
mfrow, 160, 178–179, 220, 256, 270, 281, 289, 300
min, 101, 113, 174
mjca, 252
model.tables, 280–281, 283, 287, 291
mosaicplot, 130, 252
mshapiro.test, 151
mu, 198, 203, 204
multiple, 73
mvnormtest, 151

na.action, 95
na.fail, 95
na.omit, 95, 311
names, 71, 109, 256
names.arg, 102–103, 153
nchar, 199
ncol, 95, 155, 178, 193, 215
ncv.test, 270
newdata, 264, 266–267
nj, 319
nls, 274
nmax, 73
notch, 118, 133, 137, 189, 207, 214, 219, 227, 276
nrow, 95

oneway.test, 283
options, 258, 278, 287, 295–296, 301
order, 82, 93–95, 162
ordered, 279, 289

p, 43, 155–156
paired, 210, 217, 224, 230
pairs, 252, 254–255
pairwise.test, 283
panel, 255
par, 104, 160, 178–179, 220, 256, 270, 281, 289, 300
pareto.chart, 104–105
paste, 257, 276
pchisq, 44, 156, 158, 172, 180, 183, 245, 305
pf, 44, 193
pie, 102, 153
plot, 98–101, 103, 129, 130, 134, 139, 163–165, 168, 234, 256, 264, 270, 300, 318
plot.design, 283
plot.stepfun, 165
plotmeans, 134
plotxlim, 143
pmax, 101
pmin, 101
pnorm, 44, 46–47, 224, 236
points, 100–101, 131
polr, 306
pos, 276
predict, 142, 144, 264–267, 269, 304
print, 63
prob, 43, 64–65
probs, 114
prod, 109
prop.table, 97, 120, 128, 131, 162, 232, 246–247, 301–302
prop.test, 126, 158
psy, 237
pt, 44, 197–198, 210, 216, 233
pvclust, 319
pvpick, 319
pvrect, 319

q, 66
qbinom, 42–44
qcc, 59, 104–105
qchisq, 44, 155–156, 158, 172, 187, 243, 246
qf, 44, 191, 193
qnorm, 44–45, 126, 137, 224, 235
qplot, 273
qt, 44, 124–125, 197, 209–210, 215, 232
quantile, 113–115
quantreg, 274
quiet, 73
quote, 88–90, 96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 254, 276, 285, 293

range, 113, 138
rank, 94–95, 222, 229
rbind, 95
read.csv, 95
read.csv2, 95
read.delim, 95
read.spss, 95
read.table, 88–90, 95–96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 235, 241, 246, 254, 274, 276, 284, 293, 310
rect.hclust, 318
regsubsets, 273
relaimpo, 273
relevel, 85
reorder, 85
rep, 69–71, 83–85, 121, 154–155, 199, 215, 264
replace, 64–65
residuals, 143–144, 234, 260, 264, 271, 281, 290
response, 135
rev, 82
rm, 64, 83, 86, 90, 138, 241, 246, 254, 276, 284, 293
robust, 274
round, 74, 142–144, 255, 268, 276–277, 316
row.names, 87–89, 254, 264
rownames, 178
rowSums, 128
rpart, 59, 291, 306
rstudent, 273
rug, 160, 207, 214, 219, 227, 276
runLast, 66
runs.test, 183

s.hist, 146
sample, 64–66, 94
sapply, 258
save, 66, 95
scale, 122–123, 258, 268
scan, 71–74
scatterplot, 106
scatterplot3d, 106
sd, 116–117, 120, 122, 133, 137, 141, 151, 196, 215, 277
segments, 104
select.list, 73
sep, 72–74, 84, 88–90, 96, 127, 139, 150, 160, 167, 181, 186, 189, 196, 200, 206, 213, 219, 227, 231, 235, 241, 246, 254, 256, 276, 285, 293
seq, 69–71, 74, 121
setdiff, 80
shapiro.test, 150, 186, 189–190, 196, 201, 207, 214, 221, 228, 231–232, 271, 281, 290
side, 160, 207, 214, 219, 227, 276
sieve, 177
simulate.p.value, 177
size, 43, 64, 65
skewness, 117
slog, 112
sort, 81, 281
source, 249
space, 102
spineplot, 130, 294
sqrt, 63–64, 67, 119–120, 124, 126, 137, 174, 209, 211, 215, 217, 223, 232, 235, 260, 313
srt, 104
status, 66
step, 273, 301
stepAIC, 273
str, 69, 87, 89–90, 96, 139, 160, 167, 181, 189, 200, 206, 213, 219, 231, 246, 257, 276, 310
stripchart, 106, 160
subset, 93
substr, 199
sum, 42, 77, 107, 112–113, 116, 140, 154, 174, 179, 202–203, 222, 229, 260, 297–299, 305, 313
sumary, 262
summary, 118, 173, 233, 254, 259, 260, 263, 272, 280, 281, 285, 287–288, 293, 295

t, 176, 313, 315
T, 43, 64, 66, 76–79, 83, 86, 88, 90, 96, 100, 118, 122, 138, 150, 155, 157, 173, 177, 178, 181, 183, 186, 189, 193, 196, 200, 203–204, 206, 210, 213–216, 219, 225, 227, 230–231, 235, 241, 246, 254, 276, 279, 284, 289, 293, 302
t.test, 125, 198, 210–211, 216–217
table, 77, 96–97, 102–103, 105, 107, 115, 120, 127–128, 130–131, 162, 168, 183, 241, 243, 246–247, 301–302, 304
table.cont, 177
tapply, 132–134, 136–137, 189–190, 207, 213, 219, 220–222, 227, 229, 269, 276–277, 285
test, 295
test.statistic, 301
text, 102–104, 163, 219, 256, 264, 276
to, 70
trace.factor, 135
trim, 108
trunc, 75
tseries, 183
TukeyHSD, 279, 283, 289
type, 98, 101, 103, 114, 131, 163, 165, 256, 264, 285, 288, 301, 304

union, 80
unique, 80, 83
update, 260, 262–263, 295
upper, 313–316
use, 255

v, 163
validate, 306
values, 68
var, 116, 120, 124, 133, 186–187, 189, 191, 209
var.equal, 210
var.test, 193–194, 208
varclus, 319
vcd, 59, 177, 252, 254
vegan, 291
verticals, 164
vioplot, 119

what, 71–74
which, 76–79, 91–92, 243, 270
which.max, 107
wilcox.exact, 204, 226, 230
wilcox.test, 203, 204, 224–225, 230
with, 95
write, 74
write.table, 89

x.factor, 135
xlab, 99, 101, 103, 105–106, 131, 139, 143, 150, 160, 163, 179, 220, 256, 264
xlim, 99, 101, 103, 105–106, 139, 150, 160, 163, 220, 264
xtabs, 129

ylab, 99, 101, 103, 105–106, 131, 139, 143, 150, 160, 163, 179, 220, 256, 264, 300
ylim, 99–101, 103, 105–106, 131, 133, 138–139, 143, 160, 207, 214, 219, 220, 264, 285, 300