Using R For Data Analysis and Graphics
Introduction, Code and Commentary
J H Maindonald
©J. H. Maindonald 2000, 2004. A licence is granted for personal study and classroom use. Redistribution in
any other form is prohibited.
Languages shape the way we think, and determine what we can think about (Benjamin Whorf).
14 November 2004
[Cover figure: lattice plot of tail length, foot length and ear conch length, for female and male possums, for each of the sites Cambarville, Whian Whian, Conondale, Bulburin, Bellbird, Byrangery and Allyn River. Source:]
Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological variation among populations of the mountain brushtail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-458.
possum n. 1 Any of many chiefly herbivorous, long-tailed, tree-dwelling, mainly Australian marsupials, some
of which are gliding animals (e.g. brush-tailed possum, flying possum). 2 a mildly scornful term for a person. 3
an affectionate mode of address.
From the Australian Oxford Paperback Dictionary, 2nd ed, 1996.
Contents
Introduction
1. Starting Up
1.1 Getting started under Windows
1.2 Use of an Editor Script Window
1.3 A Short R Session
1.4 Further Notational Details
1.5 On-line Help
1.6 The Loading or Attaching of Datasets
1.7 Exercises
2. An Overview of R
2.1 The Uses of R
2.2 R Objects
*2.3 Looping
2.4 Vectors
2.5 Data Frames
2.6 Common Useful Functions
2.7 Making Tables
2.8 The Search List
2.9 Functions in R
2.10 More Detailed Information
2.11 Exercises
3. Plotting
3.1 plot() and allied functions
3.2 Fine control – Parameter settings
3.3 Adding points, lines and text
3.4 Identification and Location on the Figure Region
3.5 Plots that show the distribution of data values
3.6 Other Useful Plotting Functions
3.7 Plotting Mathematical Symbols
3.8 Guidelines for Graphs
3.9 Exercises
3.10 References
8. Functions
8.1 Functions for Confidence Intervals and Tests
8.2 Matching and Ordering
8.3 String Functions
8.4 Application of a Function to the Columns of an Array or Data Frame
*8.5 aggregate() and tapply()
*8.7 Merging Data Frames
8.8 Dates
8.9 Writing Functions and other Code
8.10 Exercises
Introduction
These notes are designed to allow individuals who have a basic grounding in statistical methodology to work
through examples that demonstrate the use of R for a range of types of data manipulation, graphical presentation
and statistical analysis. Books that provide a more extended commentary on the methods illustrated in these
examples include Maindonald and Braun (2003).
The R System
R implements a dialect of the S language that was developed at AT&T Bell Laboratories by Rick Becker, John
Chambers and Allan Wilks. Versions of R are available, at no cost, for 32-bit versions of Microsoft Windows, for Linux, for Unix and for Macintosh OS X. (There are older versions of R that support Macintosh OS 8.6 and 9.) It is
available through the Comprehensive R Archive Network (CRAN). Web addresses are given below.
The citation for John Chambers’ 1998 Association for Computing Machinery Software award stated that S has
“forever altered how people analyze, visualize and manipulate data.” The R project enlarges on the ideas and
insights that generated the S language.
Here are points relating to the use of R that potential users might note:
R has extensive and powerful graphics abilities that are tightly linked with its analytic abilities.
The R system is developing rapidly. New features and abilities appear every few months.
Simple calculations and analyses can be handled straightforwardly, albeit (in the current version) using a
command line interface. Chapters 1 and 2 indicate the range of abilities that are immediately available to novice
users. If simple methods prove inadequate, there can be recourse to the huge range of more advanced abilities
that R offers. Adaptation of available abilities allows even greater flexibility.
The R community is widely drawn, from application area specialists as well as statistical specialists. It is a
community that is sensitive to the potential for misuse of statistical techniques and suspicious of what might
appear to be mindless use. Expect scepticism of the use of models that are not susceptible to some minimal form
of data-based validation.
Because R is free, users have no right to expect that queries, on the R-help list or elsewhere, will receive attention. Be grateful for whatever help is given.
Point and click interfaces are at an early stage of development.
While R is as reliable as any statistical software that is available, and exposed to higher standards of scrutiny
than most other systems, there are traps that call for special care. Many of the model fitting routines are leading
edge. There is a limited tradition of experience of the limitations and pitfalls of some of the newer abilities.
Whatever the statistical system, and especially when there is some element of complication, check each step
with care.
There is no substitute for experience and expert knowledge, even when the statistical analysis task may seem
straightforward. Neither R nor any other statistical system will give the statistical expertise that is needed to use
sophisticated abilities, or to know when naïve methods are not enough. Experience with the use of R is, however, more than with most systems, likely to be an educational experience.
Hurrah for the R development team!
1 The structure of an R program has similarities with programs that are written in C or in its successors C++ and Java. Important differences are that R has no header files, most declarations are implicit, there are no pointers, and vectors of text strings can be defined and manipulated directly. The implementation of R uses a computing model that is based on the Scheme dialect of the LISP language.
The R Project
The initial version of R was developed by Ross Ihaka and Robert Gentleman, both from the University of
Auckland. Development of R is now overseen by a `core team’ of about a dozen people, widely drawn from
different institutions worldwide. The development model is similar to that of the Linux operating system.
Like Linux, R is an “open source” system. Source-code is available for inspection or for adaptation to other
systems. In principle, if it is unclear what a routine does, one can check the source code. Exposing code to the
critical scrutiny of highly expert users has proved an extremely effective way to identify bugs and other
inadequacies, and to elicit ideas for enhancement. Reported bugs are commonly fixed in the next minor-minor
release, which will usually appear within a matter of weeks.
Novice users will notice small but occasionally important differences between the S dialect that R implements
and the commercial S-PLUS implementation of S. Those who write their own substantial functions and (more
importantly) packages will find large differences. Packages that have been written for R offer abilities that are
broadly comparable with, or in some instances go beyond, those in S-PLUS libraries. These give access to up-to-date methodology from leading statistical researchers. R has strong graphics abilities. The lattice graphics
package gives many of the abilities that are in the S-PLUS trellis library.
R provides a language environment that is attractive for the development of new scientific computational tools.
Computer-intensive components can, if computational efficiency demands, be handled by a call to a function
that is written in the C language.
The R system may struggle to handle very large data sets. Depending on available computer memory, the
processing of a data set containing one hundred thousand observations and perhaps twenty variables may press
the limits of what R can easily handle.
_________________________________________________________________________
Jeff Wood (CMIS, CSIRO), Andreas Ruckstuhl (Technikum Winterthur Ingenieurschule, Switzerland) and John
Braun (University of Western Ontario) gave me exemplary help in getting the earlier S-PLUS version of this
document somewhere near shipshape form. John Braun gave valuable help with proofreading, and provided
several of the data sets and a number of the exercises. I take full responsibility for the errors that remain. I am
grateful, also, to various scientists named in the notes who have allowed me to use their data.
1. Starting Up
R must be installed on your system! If it is not, follow the installation instructions appropriate to the operating
system. Installation is now especially straightforward for Windows users. Copy down the latest SetupR.exe
from the relevant base directory on the nearest CRAN site, click on its icon to start installation, and follow
instructions. Packages that do not come with the base distribution must be downloaded and installed separately.
It pays to have a separate working directory for each major project. For more details, see the README file that is included with the R distribution. Users of Microsoft Windows may wish to create a separate icon for each such working directory. First create the directory. Then right click|copy2 to copy an existing R icon, right click|paste to place a copy on the desktop, right click|rename on the copy to rename it3, and then finally go to right click|properties to set the Start in directory to be the working directory that was set up earlier.
The command line prompt, i.e. the >, is an invitation to start typing in your commands. For example, type 2+2
and press the Enter key. Here is what appears on the screen:
> 2+2
[1] 4
>
Here the result is 4. The [1] says, a little strangely, “first requested element will follow”. Here, there is just one
element. The > indicates that R is ready for another command.
2 This is a shortcut for “right click, then left click on the copy menu item”.
3 Enter the name of your choice into the name field. For ease of remembering, choose a name that closely matches the name of the workspace directory, perhaps the name itself.
Fig. 2: The focus is on an R display file window, with the console window in the background.
Fig. 3: This shows the five icons that appear when the focus is on a script file window. The icons are, starting from the left: Open script, Save script, Run line or selection, Return focus to console, and Print. The text in a script file window can be edited, or new text added. Display file windows, which have a somewhat similar set of icons but do not allow editing, are another possibility.
Under Unix, the standard form of input is the command line interface. Under both Microsoft Windows and
Linux (or Unix), a further possibility is to run R from within the emacs editor 4. Under Microsoft Windows, an
attractive option is to use the R-WinEdt utility that is designed for use with the shareware WinEdt editor5.
4 This requires emacs, and ESS which runs under emacs. Both are free. Look under Software|Other on the CRAN page.
The following reads in the data from the file austpop.txt on a disk in drive a:
> austpop <- read.table("a:/austpop.txt", header=T)
The <- is a left angle bracket (<) followed by a minus sign (-). It means “is assigned to”. Use of header=T causes R to use the first line to get header information for the columns. If column headings are not included in the file, the argument can be omitted.
Now type in austpop at the command line prompt, displaying the object on the screen:
> austpop
Year NSW Vic. Qld SA WA Tas. NT ACT Aust.
1 1917 1904 1409 683 440 306 193 5 3 4941
2 1927 2402 1727 873 565 392 211 4 8 6182
. . .
The object austpop is, in R parlance, a data frame. Data frames that consist entirely of numeric data have the
same form of rectangular layout as numeric matrices. Here is a plot of the ACT population between 1917 and
1997 (Figure 4).
[Figure 4: ACT population, plotted against Year, 1917-1997.]
5 The R-WinEdt utility, which is free, is a “plugin” for WinEdt. For links to the relevant web pages, for WinEdt, R-WinEdt and various other editors that work with R, look under Software|Other on the CRAN web page.
The option pch=16 sets the plotting character to solid black dots. Figure 4 shows the graph. This plot can be improved greatly. We can specify more informative axis labels, change the size of the text and of the plotting symbol, and so on.
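The command that produced Figure 4 is not shown in this extract; a minimal call consistent with the text would be:
> plot(ACT ~ Year, data=austpop, pch=16)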
In these notes, we often continue commands over more than one line, but omit the + that will appear on the
commands window if the command is typed in as we show it.
For the names of R objects or commands, case is significant. Thus Austpop is different from austpop. For
file names however, the Microsoft Windows conventions apply, and case does not distinguish file names. On
Unix systems letters that have a different case are treated as different.
Anything that follows a # on the command line is taken as comment and ignored by R.
Note: Recall that, in order to quit from the R session we had to type q(). This is because q is a function.
Typing q on its own, without the parentheses, displays the text of the function on the screen. Try it!
In R for Windows, an alternative is to click on the help menu item, and then use key words to do a search. To
get help on a specific R function, e.g. plot(), type in
> help(plot)
The two search functions help.search() and apropos() can be a huge help in finding what one wants.
Examples of their use are:
> help.search("matrix")
(This lists all functions whose help pages have a title or alias in which the text string
“matrix” appears.)
> apropos("matrix")
(This lists all function names that include the text “matrix”.)
The function help.start() opens a browser window that gives access to the full range of documentation for
syntax, packages and functions.
Experimentation often helps clarify the precise action of an R function.
Files that are mentioned in these notes, and that are not supplied with R (e.g., from the datasets or
MASS packages) should then be available without need for any further action.
6 Multiple commands may appear on the one line, with the semicolon (;) as the separator.
Users can also load (use load()) or attach (use attach()) specific files. These have a similar
effect, the difference being that with attach() datasets are loaded into memory only when required
for use.
Distinguish between the attaching of image files and the attaching of data frames. The attaching of
data frames will be discussed later in these notes.
1.7 Exercises
1. In the data frame elasticband from section 1.3.1, plot distance against stretch.
2. The following ten observations, taken during the years 1970-79, are on October snow cover for Eurasia.
(Snow cover is in millions of square kilometers):
year snow.cover
1970 6.5
1971 12.0
1972 14.9
1973 10.0
1974 10.7
1975 7.9
1976 21.9
1977 12.5
1978 14.5
1979 9.2
i. Enter the data into R. [Section 1.3.1 showed one way to do this. To save keystrokes, enter the successive
years as 1970:1979]
ii. Plot snow.cover versus year.
iii. Use the hist() command to plot a histogram of the snow cover values.
iv. Repeat ii and iii after taking logarithms of snow cover.
3. Input the following data, on damage that had occurred in space shuttle launches prior to the disastrous launch
of Jan 28 1986. These are the data, for 6 launches out of 24, that were included in the pre-launch charts that
were used in deciding whether to proceed with the launch. (Data for the 23 launches where information is
available is in the data set orings that accompanies these notes.)
Temperature Erosion Blowby Total
(F) incidents incidents incidents
53 3 2 5
57 1 0 1
63 1 0 1
70 1 0 1
70 1 0 1
75 0 2 2
Enter these data into a data frame, with (for example) column names temperature, erosion, blowby and
total. (Refer back to Section 1.3.1). Plot total incidents against temperature.
2. An Overview of R
We may, for example, require information on ranges of variables. Thus the range of distances (first column) is from 2 miles to 28 miles, while the range of times (third column) is from 15.95 minutes to 204.6 minutes.
We will discuss graphical summaries in the next section.
7 There is also a version in the Venables and Ripley MASS library.
Suppose we wish to calculate logarithms, and then calculate correlations. We can do all this in one step, thus:
> cor(log(hills))
distance climb time
distance 1.00 0.700 0.890
climb 0.70 1.000 0.724
time 0.89 0.724 1.000
Unfortunately R was not clever enough to relabel distance as log(distance), climb as log(climb), and time as
log(time). Notice that the correlations between time and distance, and between time and climb, have reduced.
Why has this happened?
Straight Line Regression:
Here is a straight line regression calculation. The data are stored in the data frame elasticband that
accompanies these notes. The variable names are the names of columns in that data frame. The formula that is
supplied to the lm() command asks for the regression of distance travelled by the elastic band (distance) on the amount by which it is stretched (stretch).
> plot(distance ~ stretch, data=elasticband, pch=16)
> elastic.lm <- lm(distance ~ stretch, data=elasticband)
> lm(distance ~ stretch, data=elasticband)
Call:
lm(formula = distance ~ stretch, data = elasticband)
Coefficients:
(Intercept) stretch
-63.571 4.554
Try it!
2.2 R Objects
All R entities, including functions and data structures, exist as objects. They can all be operated on as data.
Type in ls() to see the names of all objects in your workspace. An alternative to ls() is objects(). In both
cases there is provision to specify a particular pattern, e.g. starting with the letter `p’ 8.
Typing the name of an object causes the printing of its contents. Try typing q, mean, etc.
In a long session, it makes sense to save the contents of the working directory from time to time. It is also
possible to save individual objects, or collections of objects into a named image file. Some possibilities are:
save.image() # Save contents of workspace, into the file .RData
save.image(file="archive.RData") # Save into the file archive.RData
save(celsius, fahrenheit, file="tempscales.RData")
Image files, from the working directory or (with the path specified) from another directory, can be attached, thus
making objects in the file available on request. For example
attach("tempscales.RData")
ls(pos=2) # Check the contents of the file that has been attached
The parameter pos gives the position on the search list. (The search list is discussed later in this chapter, in
Section 2.8.)
8 Type in help(ls) and help(grep) to get details. The pattern matching conventions are those used for grep(), which is modelled on the Unix grep command.
Important: On quitting, R offers the option of saving the workspace image, by default in the file .RData in the working directory. This allows the retention, for use in the next session in the same workspace, of any objects that were created in the current session. Careful housekeeping may be needed to distinguish between objects that are to be kept and objects that will not be used again. Before typing q() to quit, use rm() to remove objects that are no longer required. Saving the workspace image will then save everything that remains. The workspace image will be automatically loaded upon starting another session in that directory.
*2.3 Looping
A simple example of a for loop is10
for (i in 1:10) print(i)
Here is another example of a for loop, to do in a complicated way what we did very simply in section 2.1.5:
> # Celsius to Fahrenheit
> for (celsius in 25:30)
+ print(c(celsius, 9/5*celsius + 32))
[1] 25 77
[1] 26.0 78.8
[1] 27.0 80.6
[1] 28.0 82.4
[1] 29.0 84.2
[1] 30 86
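The code for the calculation that is described next does not appear in this extract; it would be along these lines:
> answer <- 0
> for (j in c(31,51,91)){ answer <- j + answer }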
The calculation iteratively builds up the object answer, using the successive values of j listed in the vector (31,51,91). Initially, j=31, and answer is assigned the value 31 + 0 = 31. Then j=51, and answer is assigned the value 51 + 31 = 82. Finally, j=91, and answer is assigned the value 91 + 82 = 173. Then the procedure ends, and the contents of answer can be examined by typing in answer and pressing the Enter key.
There is a more straightforward way to do this calculation:
> sum(c(31,51,91))
[1] 173
Skilled R users have limited recourse to loops. There are often, as in this and earlier examples, better
alternatives.
2.4 Vectors
Examples of vectors are
c(2,3,5,2,7,1)
3:10 # The numbers 3, 4, .., 10
c(T,F,F,F,T,T,F)
c("Canberra","Sydney","Newcastle","Darwin")
9 Asterisks (*) identify sections that are more technical and might be omitted at a first reading.
10 Other looping constructs are:
repeat <expression> ## break must appear somewhere inside the loop
while (x>0) <expression>
Here <expression> is an R statement, or a sequence of statements that are enclosed within braces.
Vectors may have mode logical, numeric or character 11. The first two vectors above are numeric, the third is
logical (i.e. a vector with elements of mode logical), and the fourth is a string vector (i.e. a vector with elements
of mode character).
The missing value symbol, which is NA, can be included as an element of a vector.
2. Specify a vector of logical values. The elements that are extracted are those for which the logical value is T.
Thus suppose we want to extract values of x that are greater than 10.
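(The vector x is not defined in this extract. Any values consistent with the output below would serve, e.g.:)
> x <- c(3, 11, 8, 15, 12) # hypothetical values, chosen to match the output shown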
> x>10 # This generates a vector of logical (T or F)
[1] F T F T T
> x[x>10]
[1] 11 15 12
Arithmetic relations that may be used in the extraction of subsets of vectors are <, <=, >, >=, ==, and !=. The
first four compare magnitudes, == tests for equality, and != tests for inequality.
11 It will, later in these notes, be important to know the “class” of such objects. The class determines the method that is used by such generic functions as print(), plot() and summary(). Use the function class() to determine the class of an object.
12 A third, more subtle, method is available when vectors have named elements. One can then use a vector of names to extract the elements, thus:
> c(Andreas=178, John=185, Jeff=183)[c("John","Jeff")]
John Jeff
185 183
2.4.4 Factors
A factor is a special type of vector, stored internally as a numeric vector with values 1, 2, 3, …, k. The value k is the number of levels. An attributes table gives the ‘level’ for each integer value13. Factors provide a compact way to store character strings. They are crucial in the representation of categorical effects in model and graphics formulae. The class attribute of a factor has, not surprisingly, the value “factor”.
Consider a survey that has data on 691 females and 692 males. If the first 691 are females and the next 692 males, we can create a vector of strings that holds the values thus:
gender <- c(rep("female",691), rep("male",692))
(The usage is that rep("female", 691) creates 691 copies of the character string “female”, and similarly for the creation of 692 copies of “male”.)
We can change the vector to a factor, by entering:
gender <- factor(gender)
Internally the factor gender is stored as 691 1’s, followed by 692 2’s. It has stored with it the table:
1 female
2 male
Once stored as a factor, the space required for storage is reduced.
In most cases where the context seems to demand a character string, the 1 is translated into “female” and the 2
into “male”. The values “female” and “male” are the levels of the factor. By default, the levels are in
alphanumeric order, so that “female” precedes “male”. Hence:
> levels(gender) # Assumes gender is a factor, created as above
[1] "female" "male"
The order of the levels in a factor determines the order in which the levels appear in graphs that use this
information, and in tables. To cause “male” to come before “female”, use
gender <- relevel(gender, ref="male")
An alternative is
gender <- factor(gender, levels=c("male", "female"))
This last syntax is available both when the factor is first created, and later when one wishes to change the order of levels in an existing factor. Incorrect spelling of the level names will generate an error message. Try
gender <- factor(c(rep("female",691), rep("male",692)))
table(gender)
gender <- factor(gender, levels=c("male", "female"))
table(gender)
gender <- factor(gender, levels=c("Male", "female"))
# Erroneous - "male" rows now hold missing values
table(gender)
rm(gender) # Remove gender
13 The attributes() function makes it possible to inspect attributes. For example
attributes(factor(1:3))
The data frame has row labels (access with row.names(Cars93.summary)) Compact, Large, …. The column names (access with names(Cars93.summary)) are Min.passengers (i.e. the minimum number of passengers for cars in this category), Max.passengers, No.of.cars, and abbrev. The first three columns have mode numeric, and the fourth has mode character. Columns can be vectors of any mode. The column abbrev could equally well be stored as a factor.
Any of the following14 will pick out the fourth column of the data frame Cars93.summary, storing it in the vector type.
type <- Cars93.summary$abbrev
type <- Cars93.summary[,4]
type <- Cars93.summary[,”abbrev”]
type <- Cars93.summary[[4]] # Take the object that is stored
# in the fourth list element.
14 Also legal is Cars93.summary[2]. This gives a data frame with the single column Type.
15 In general forms of list, elements can be of arbitrary type. They may be any mixture of scalars, vectors, functions, etc.
Type data() to get a list of built-in data sets in the packages that have been loaded 16.
The functions mean(), median(), range(), and a number of other functions, take the argument na.rm=T;
i.e. remove NAs, then proceed with the calculation.
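For example, with a small made-up vector:
> mean(c(1, NA, 3), na.rm=T) # the NA is dropped; mean of 1 and 3
[1] 2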
By default, sort() omits any NAs. The function order() places NAs last. Hence:
> x <- c(1, 20, 2, NA, 22)
> order(x)
[1] 1 3 2 5 4
> x[order(x)]
[1] 1 2 20 22 NA
> sort(x)
[1] 1 2 20 22
16 The list includes all packages that are in the current environment.
17 Source: Ash, J. and Southern, W. 1982: Forest biomass at Butler’s Creek, Edith & Joy London Foundation, New South Wales. Unpublished manuscript. See also Ash, J. and Helman, C. 1990: Floristics and vegetation biomass of a forest catchment, Kioloa, south coastal N.S.W. Cunninghamia 2(2): 167-182.
The functions mean() and range(), and a number of other functions, take the parameter na.rm. For example
> range(rainforest$branch, na.rm=T) # Omit NAs, then determine the range
[1] 4 120
One can specify na.rm=T as a third argument to the function sapply. This argument is then automatically
passed to the function that is specified in the second argument position. For example:
> sapply(rainforest[,-7], range, na.rm=T)
dbh wood bark root rootsk branch
[1,] 4 3 8 2 0.3 4
[2,] 56 1530 105 135 24.0 120
Chapter 8 has further details on the use of sapply(). There is an example that shows how to use it to count the
number of missing values in each column of data.
WARNING: NAs are by default ignored. The action needed to get NAs tabulated under a separate NA category
depends, annoyingly, on whether or not the vector is a factor. If the vector is not a factor, specify
exclude=NULL. If the vector is a factor then it is necessary to generate a new factor that includes “NA” as a
level. Specify x <- factor(x,exclude=NULL)
> x <- c(1,5,NA,8)
> x <- factor(x)
> x
[1] 1 5 NA 8
Levels: 1 5 8
> factor(x,exclude=NULL)
[1] 1 5 NA 8
Levels: 1 5 8 NA
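The command that produced the table below is not shown in this extract; a call along the following lines (column names as in the rainforest data set) would give a table of this form, with the FALSE column counting the NAs:
> with(rainforest, table(species, !is.na(branch)))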
FALSE TRUE
Acacia mabellae 6 10
C. fraseri 0 12
Acmena smithii 15 11
B. myrtifolia 1 10
Thus for Acacia mabellae there are 6 NAs for the variable branch (i.e. number of branches over 2cm in
diameter), out of a total of 16 data values.
Notice that the loading of a new package extends the search list.
> library(MASS)
> search()
[1] ".GlobalEnv" "package:MASS" "package:methods"
[4] "package:stats" "package:graphics" "package:grDevices"
[7] "package:utils" "package:datasets" "Autoloads"
[10] "package:base"
Use of attach() likewise extends the search list. This function can be used to attach data frames or lists (use the
name, without quotes) or image (.RData) files (the file name is placed in quotes).
The following demonstrates the attaching of the data frame primates:
> names(primates)
[1] "Bodywt" "Brainwt"
> Bodywt
Error: Object "Bodywt" not found
> attach(primates) # R will now know where to find Bodywt
> Bodywt
[1] 10.0 207.0 62.0 6.8 52.2
Once the data frame primates has been attached, its columns can be accessed by giving their names, without
further reference to the name of the data frame. In technical terms, the data frame becomes a database, which is
searched as required for objects that the user may specify.
2.9 Functions in R
We give two simple examples of R functions.
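The first of these, a function that converts from miles to kilometers, does not appear in this extract; a one-line version consistent with the output below would be:
miles.to.km <- function(miles) miles*8/5 # 8/5 km per mile, approximately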
The return value is the value of the final (and in this instance only) expression that appears in the function
body18. Use the function thus
> miles.to.km(175) # Approximate distance to Sydney, in miles
[1] 280
The function will do the conversion for several distances all at once. To convert a vector of the three distances
100, 200 and 300 miles to distances in kilometers, specify:
> miles.to.km(c(100,200,300))
[1] 160 320 480
Here is a function that makes it possible to plot the figures for any pair of candidates.
plot.florida <- function(xvar="BUSH", yvar="BUCHANAN"){
x <- florida[,xvar]
y <- florida[,yvar]
plot(x, y, xlab=xvar, ylab=yvar)
mtext(side=3, line=1.75,
"Votes in Florida, by county, in \nthe 2000 US Presidential election")
}
18 Alternatively a return value may be given using an explicit return() statement. This is however an uncommon construction.
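A call such as plot.florida(xvar="GORE", yvar="BUCHANAN") (assuming that florida has a column named GORE) would then plot the Buchanan vote against the Gore vote.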
2.11 Exercises
1. For each of the following code sequences, predict the result. Then do the computation:
a) answer <- 0
for (j in 3:5){ answer <- j+answer }
b) answer<- 10
for (j in 3:5){ answer <- j+answer }
c) answer <- 10
for (j in 3:5){ answer <- j*answer }
2. Look up the help for the function prod(), and use prod() to do the calculation in 1(c) above. Alternatively,
how would you expect prod() to work? Try it!
3. Add up all the numbers from 1 to 100 in two different ways: using for and using sum. Now apply the
function to the sequence 1:100. What is its action?
4. Multiply all the numbers from 1 to 50 in two different ways: using for and using prod.
5. The volume of a sphere of radius r is given by 4πr³/3. For spheres having radii 3, 4, 5, …, 20 find the corresponding volumes and print the results out in a table. Use the technique of section 2.1.5 to construct a data frame with columns radius and volume.
6. Use sapply() to apply the function is.factor to each column of the supplied data frame tinting. For
each of the columns that are identified as factors, determine the levels. Which columns are ordered factors?
[Use is.ordered()].
3. Plotting
The functions plot(), points(), lines(), text(), mtext(), axis(), identify() etc. form a suite
that plots points, lines and text. To see some of the possibilities that R offers, enter
demo(graphics)
Comment on the appearance that these graphs present. Is it obvious that these points lie on a sine curve? How can one make it obvious? (Place the cursor over the lower border of the graph sheet, until it becomes a double-sided arrow. Drag the border in towards the top border, making the graph sheet short and wide.)
Here are two further examples.
attach(elasticband) # R now knows where to find distance & stretch
plot(distance ~ stretch)
plot(ACT ~ Year, data=austpop, type="l")
plot(ACT ~ Year, data=austpop, type="b")
The points() function adds points to a plot. The lines() function adds lines to a plot19. The text()
function adds text at specified locations. The mtext() function places text in one of the margins. The axis()
function gives fine control over axis ticks and labels.
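As an illustration of axis() (a sketch only, re-using the elasticband data):
plot(distance ~ stretch, data=elasticband, axes=F) # suppress the default axes
axis(1) # x-axis, default tick positions
axis(2, at=seq(100, 200, by=50)) # y-axis, ticks at 100, 150, 200
box() # draw the surrounding box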
Here is a further possibility
attach(austpop)
plot(spline(Year, ACT), type="l") # Fit smooth curve through points
detach(austpop) # In S-PLUS, specify detach(“austpop”)
19 Actually these functions differ only in the default setting for the parameter type. The default setting for points() is type = "p", and for lines() is type = "l". Explicitly setting type = "p" causes either function to plot points, type = "l" gives lines.
The setting cex=1.25 increases the text and plot symbol size 25% above the default. The addition of mex=1.25 makes room in the margin to accommodate the increased text size.
On the first use of par() to make changes to the current device, it is often useful to store existing settings, so
that they can be restored later. For this, specify
oldpar <- par(cex=1.25, mex=1.25)
This stores the existing settings in oldpar, then changes parameters (here cex and mex) as requested. To
restore the original parameter settings at some later time, enter par(oldpar). Here is an example:
attach(elasticband)
oldpar <- par(cex=1.5, mex=1.5)
plot(distance ~ stretch)
par(oldpar) # Restores the earlier settings
detach(elasticband)
Type in help(par) to get details of all the parameter settings that are available with par().
Observe that the row names store labels for each row 20.
attach(primates) # Needed if primates is not already attached.
plot(Bodywt, Brainwt)
text(x=Bodywt, y=Brainwt, labels=row.names(primates), adj=0)
# adj=0 implies left adjusted text
Figure 8: Plot of the primates data, with labels on points. Figure 8B is an improved version of
Figure 8A.
Figure 8A would be adequate for identifying points, but is not a presentation quality graph. We now show how
to improve it.
Figure 8B uses the xlab (x-axis) and ylab (y-axis) parameters to specify meaningful axis titles. It uses the
parameter setting pos=4 to move the labelling to the right of the points. It sets pch=16 to make the plot
character a heavy black dot. This helps make the points stand out against the labelling.
Here is the R code for Figure 8B:
plot(x=Bodywt, y=Brainwt, pch=16,
xlab="Body weight (kg)", ylab="Brain weight (g)",
xlim=c(0,310), ylim=c(0,1100))
# Specify xlim so that there is room for the labels
text(x=Bodywt, y=Brainwt, labels=row.names(primates), pos=4)
detach(primates)
The following, added to the plot that results from the above three statements, demonstrates other choices of pch.
points(1:7,rep(2,7), pch=(0:6)+7) # Plot symbols 7 to 13
20 Row names can be created in several different ways. They can be assigned directly, e.g.
row.names(primates) <- c("Potar monkey","Gorilla","Human", "Rhesus monkey","Chimp")
When using read.table() to input data, the parameter row.names is available to specify, by number or name, a column that holds the row names.
[Figure: the plotting symbols pch = 0 to 6, 7 to 13, and 14 to 20.]
A variety of color palettes are available. Here is a function that displays some of the possibilities:
view.colours <- function(){
plot(1, 1, xlim=c(0,14), ylim=c(0,3), type="n", axes=F,
xlab="",ylab="")
text(1:6, rep(2.5,6), paste(1:6), col=palette()[1:6], cex=2.5)
text(10, 2.5, "Default palette", adj=0)
rainchars <- c("R","O","Y","G","B","I","V")
text(1:7, rep(1.5,7), rainchars, col=rainbow(7), cex=2.5)
text(10, 1.5, "rainbow(7)", adj=0)
cmtxt <- substring("cm.colors", 1:9,1:9)
# Split “cm.colors” into its 9 characters
text(1:9, rep(0.5,9), cmtxt, col=cm.colors(9), cex=3)
text(10, 0.5, "cm.colors(9)", adj=0)
}
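To display the palettes and colours, type in
view.colours()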
A click with the right mouse button signifies that the identification or location task is complete, unless the
setting of the parameter n is reached first. For identify() the default setting of n is the number of data
points, while for locator() the default setting is n = 500.
3.4.1 identify()
This function requires specification of a vector x, a vector y, and a vector of text strings that are available for
use a labels. The data set florida has the votes for the various Presidential candidates, county by county in
the state of Florida. We plot the vote for Buchanan against the vote for Bush, then invoking identify() so
that we can label selected points on the plot.
attach(florida)
plot(BUSH, BUCHANAN, xlab="Bush", ylab="Buchanan")
identify(BUSH, BUCHANAN, County)
detach(florida)
Click to the left or right, and slightly above or below a point, depending on the preferred positioning of the
label. When labelling is terminated (click with the right mouse button), the row numbers of the observations that
have been labelled are printed on the screen, in order.
3.4.2 locator()
Left click at the locations whose coordinates are required.
attach(florida) # if not already attached
plot(BUSH, BUCHANAN, xlab="Bush", ylab="Buchanan")
locator()
detach(florida)
The function can be used to mark new points (specify type="p") or lines (specify type="l") or both points and lines (specify type="b").
3.5.1 Histograms
The shapes of histograms depend on the placement of the breaks, as Figure 10 illustrates:
Figure 10: The two graphs show the same data, but with a different choice of breakpoints.
Figure 11: On each of the histograms from Figure 10 a density plot has been overlaid.
Density plots do not depend on a choice of breakpoints. The choice of width and type of window, controlling
the nature and amount of smoothing, does affect the appearance of the plot. The main effect is to make it more
or less smooth.
The following will give a density plot:
attach(possum)
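here <- sex == "f" # not defined at this point in the extract; assumed, as in the example below, to select females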
plot(density(totlngth[here]),type="l")
detach(possum)
Note that in Fig. 12 the y-axis for the histogram is labelled so that the area of a rectangle is the frequency for
that rectangle. To get the plot on the left, specify:
attach(possum)
here <- sex == "f"
dens <- density(totlngth[here])
xlim <- range(dens$x)
ylim <- range(dens$y)
hist(totlngth[here], breaks = 72.5 + (0:5) * 5, probability = T,
xlim = xlim, ylim = ylim, xlab="Total length", main="")
lines(dens)
detach(possum)
3.5.3 Boxplots
We now make a boxplot of the above data:
attach(possum)
boxplot(totlngth[here])
detach(possum)
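The code that produced Figure 13 does not appear in this extract. A sketch along the following lines (the panel layout is an assumption) would give a similar display:
attach(possum)
here <- sex == "f"
par(mfrow=c(2,4)) # 2 rows by 4 columns of panels
qqnorm(totlngth[here], main="Possums") # the 43 female lengths
for(i in 1:7) qqnorm(rnorm(43), main="Simulated") # normal random samples of size 43
par(mfrow=c(1,1))
detach(possum)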
Figure 13 shows the plots. There is one unusually small value. Otherwise the points for the female possum
lengths are as close to a straight line as in many of the plots for random normal data.
Figure 13: Normal probability plots. If data are from a normal distribution then points should
fall, approximately, along a line. The plot in the top left hand corner shows the 43 lengths of
female possums. The other plots are for independent normal random samples of size 43.
The idea is an important one. In order to judge whether data are normally distributed, examine a number of
randomly generated samples of the same size from a normal distribution. It is a way to train the eye.
By default, rnorm() generates random samples from a distribution with mean 0 and standard deviation 1.
21 Data relate to the paper: Telford, R.D. and Cunningham, R.B. 1991: Sex, sport and body-size dependency of hematology in highly trained athletes. Medicine and Science in Sports and Exercise 23: 788-794.
3.6.3 Rugplots
By default rug(x) adds, along the x-axis of the current plot, vertical bars showing the distribution of values of
x. It can however be particularly useful for showing the actual values along the side of a boxplot. Figure 14
shows a boxplot of the distribution of height of female athletes, with a rugplot added on the y-axis.
3.6.5 Dotcharts
These can be a good alternative to barcharts. They have a much higher information to ink ratio! Try
data(islands) # Use for versions <=1.9.1; base package
dotchart(islands) # vector of named numeric values
Unfortunately there are many names, and there is substantial overlap. The following is better, but shrinks the
sizes of the points so that they almost disappear:
dotchart(islands, cex=0.5)
[Figure: the curve Area = πr², plotted against Radius (0 to 50), with the y-axis label set as a mathematical expression.]
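The code for this figure is not shown in this extract; a minimal call that uses a mathematical expression as an axis label would be:
radius <- 0:50
plot(radius, pi*radius^2, type="l", xlab="Radius",
ylab=expression(Area == pi*r^2))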
Notice that in expression(Area == pi*r^2), there is a double equals sign (“==”), although what will
appear on the plot is Area = pi*r^2, with a single equals sign. The reason for this is
that Area == pi*r^2 is a valid mathematical expression, while Area = pi*r^2 is not.
See help(plotmath) for detailed information on the plotting of mathematical
expressions. There is a further example in chapter 12.
[Figure: the final plot from demo(graphics).]
Use graphs from which information can be read directly and easily in preference to those that rely on visual impression and perspective. Thus in scientific papers contour plots are much preferable to surface plots or two-dimensional bar graphs.
Draw graphs so that reduction and reproduction will not interfere with visual clarity.
Explain clearly how error bars should be interpreted — SE limits, 95% confidence interval, SD limits, or
whatever. Explain what source of `error(s)’ is represented. It is pointless to present information on a source of
error that is of little or no interest, for example analytical error when the relevant source of `error’ for
comparison of treatments is between fruit.
Use colour or different plotting symbols to distinguish different groups. Take care to use colours that contrast.
The list of references at the end of this chapter has further comments on graphical and other presentation issues.
3.9 Exercises
1. Plot the graph of brain weight (brain) versus body weight (body) for the data set Animals from the MASS
package. Label the axes appropriately.
[To access this data frame, specify library(MASS); data(Animals)]
2. Repeat plot 1, but this time plotting log(brain weight) versus log(body weight). Use the row labels to
label the points with the three largest body weight values. Label the axes in untransformed units.
3. Repeat plots 1 and 2, but this time place the plots side by side on the one page.
4. The data set huron that accompanies these notes has mean July average water surface elevations, in feet, IGLD (1955) for Harbor Beach, Michigan, on Lake Huron, Station 5014, for 1860-1986²². (Alternatively you can work with the vector LakeHuron from the datasets package, which has mean heights for 1875-1972 only.)
a) Plot mean.height against year.
b) Use the identify function to determine which years correspond to the lowest and highest mean levels. That is,
type
identify(huron$year,huron$mean.height,labels=huron$year)
and use the left mouse button to click on the lowest point and highest point on the plot. To quit, press both
mouse buttons simultaneously.
c) As in the case of many time series, the mean levels are correlated from year to year. To see how each year's
mean level is related to the previous year's mean level, use
lag.plot(huron$mean.height)
This plots the mean level at year i against the mean level at year i-1.
5. Check the distributions of head lengths (hdlngth) in the possum²³ data set that accompanies these notes. Compare the following forms of display:
a) a histogram (hist(possum$hdlngth));
b) a stem and leaf plot (stem(possum$hdlngth));
c) a normal probability plot (qqnorm(possum$hdlngth)); and
d) a density plot (plot(density(possum$hdlngth))).
What are the advantages and disadvantages of these different forms of display?
6. Try x <- rnorm(10). Print out the numbers that you get. Look up the help for rnorm. Now generate a
sample of size 10 from a normal distribution with mean 170 and standard deviation 4.
7. Use mfrow() to set up the layout for a 3 by 4 array of plots. In the top 4 panels, show normal probability plots (section 3.4.2) for four separate `random’ samples of size 10, all from a normal distribution. In the middle 4
22 Source: Great Lakes Water Levels, 1860-1986. U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey.
23 Data relate to the paper: Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological variation among populations of the mountain brush tail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-458.
panels, display plots for samples of size 100. In the bottom four panels, display plots for samples of size 1000. Comment on how the appearance of the plots changes as the sample size changes.
8. The function runif() can be used to generate a sample from a uniform distribution, by default on the
interval 0 to 1. Try x <- runif(10), and print out the numbers you get. Then repeat exercise 6 above, but
taking samples from a uniform distribution rather than from a normal distribution. What shape do the points
follow?
*9. If you find exercise 8 interesting, you might like to try it for some further distributions. For example x <-
rchisq(10,1) will generate 10 random values from a chi-squared distribution with degrees of freedom 1. The
statement x <- rt(10,1) will generate 10 random values from a t distribution with degrees of freedom one.
Make normal probability plots for samples of various sizes from these distributions.
10. For the first two columns of the data frame hills, examine the distribution using:
(a) histograms
(b) density plots
(c) normal probability plots.
Repeat (a), (b) and (c), now working with the logarithms of the data values.
3.10 References
Bell Labs' Trellis Page: http://cm.bell-labs.com/cm/ms/departments/sia/project/trellis/
Becker, R.A., Cleveland, W.S. and Shyu, M. 1996. The Visual Design and Control of Trellis Display. Journal of Computational and Graphical Statistics 5: 123-155.
Cleveland, W. S. 1993. Visualizing Data. Hobart Press, Summit, New Jersey.
Cleveland, W. S. 1985. The Elements of Graphing Data. Wadsworth, Monterey, California.
Maindonald J H 1992. Statistical design, analysis and presentation issues. New Zealand Journal of Agricultural
Research 35: 121-141.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
Tufte, E. R. 1983. The Visual Display of Quantitative Information. Graphics Press, Cheshire, Connecticut,
U.S.A.
Tufte, E. R. 1990. Envisioning Information. Graphics Press, Cheshire, Connecticut, U.S.A.
Tufte, E. R. 1997. Visual Explanations. Graphics Press, Cheshire, Connecticut, U.S.A.
Wainer, H. 1997. Visual Revelations. Springer-Verlag, New York.
4. Lattice graphics
Lattice plots allow the use of the layout on the page to reflect meaningful aspects of data structure. They offer
abilities similar to those in the S-PLUS trellis library.
The lattice package sits on top of the grid package. To use lattice graphics, both these packages must be
installed. Providing it is installed, the grid package will be loaded automatically when lattice is loaded.
The older coplot() function that is in the base package has some of the same abilities as xyplot(), but with a limitation to two conditioning factors or variables only.
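For example, a coplot() call that conditions on the same two factors as in Figure 16 below would be (a sketch, using the tinting data that is introduced shortly):
coplot(csoa ~ it | sex * agegp, data=tinting)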
Figure 16: csoa versus it, for each combination of females/males and elderly/young.
The two targets (low, + = high contrast) are shown with different symbols.
In a simplified version of Figure 16 above, we might plot csoa against it for each combination of sex and
agegp. For this simplified version, it would be enough to type:
xyplot(csoa ~ it | sex * agegp, data=tinting) # Simple use of xyplot()
24 Data relate to the paper: Burns, N. R., Nettlebeck, T., White, M. and Willson, J. 1999. Effects of car window tinting on visual performance: a comparison of elderly and young drivers. Ergonomics 42: 428-443.
Here is the statement used to get Figure 16. The two different symbols distinguish between low contrast and
high contrast targets.
xyplot(csoa~it|sex*agegp, data=tinting, panel=panel.superpose,
groups=target, auto.key=list(columns=2))
If colour is available, different colours will be used for the different groups.
A striking feature is that the very high values, for both csoa and it, occur only for elderly males. It is apparent
that the long response times for some of the elderly males occur, as we might have expected, with the low
contrast target. The following puts smooth curves through the data, separately for the two target types:
xyplot(csoa~it|sex*agegp, data=tinting, panel=panel.superpose,
groups=target, type=c("p","smooth"))
The relationship between csoa and it seems much the same for both levels of contrast.
Finally, we do a plot (Figure 17) that uses different symbols (in black and white) for different levels of tinting.
The longest times are for the high level of tinting.
xyplot(csoa~it|sex*agegp, data=tinting, groups=tint,
auto.key=list(columns=3))
Figure 17: csoa versus it, for each combination of females/males and elderly/young.
The different levels of tinting (no, +=low, >=high) are shown with different symbols.
4.3 Exercises
1. The following data gives milk volume (g/day) for smoking and nonsmoking mothers25:
Smoking Mothers: 621, 793, 593, 545, 753, 655, 895, 767, 714, 598, 693
Nonsmoking Mothers: 947, 945, 1086, 1202, 973, 981, 930, 745, 903, 899, 961
Present the data (i) in side by side boxplots; (ii) using a dotplot form of display.
2. Repeat the plot as in exercise 1, but this time including a scatterplot smooth on each panel.
3. For the possum data set, generate the following plots:
a) histograms of hdlngth – use hist();
b) normal probability plots of hdlngth – use qqnorm();
c) density plots of hdlngth – use plot(density()). Investigate the effect of varying the density bandwidth
(bw).
4. The following exercises relate to the data frame possum that accompanies these notes:
(a) Using the coplot function, explore the relation between hdlngth and totlngth, taking into account sex
and Pop.
(b) Construct a contour plot of chest versus belly and totlngth.
(c) Construct box and whisker plots for hdlngth, using site as a factor.
(d) Construct normal probability plots for hdlngth, for each separate level of sex and Pop. Is there evidence that the distribution of hdlngth varies with the level of these other factors?
6. The frame airquality that is in the datasets package has columns Ozone, Solar.R, Wind, Temp, Month
and Day. Plot Ozone against Solar.R for each of three temperature ranges, and each of three wind ranges.
25 Data are from the paper “Smoking During Pregnancy and Lactation and Its Effects on Breast Milk Volume” (Amer. J. of Clinical Nutrition).
Here distance ~ stretch is a model formula. Other model formulae will appear in the course of this
chapter. Figure 18 shows the plot.
The output from the regression is an lm object, which we have called elastic.lm. Now examine a summary of the regression results. Notice that the output documents the model formula that was used:
> options(digits=4)
> summary(elastic.lm)
Call:
lm(formula = distance ~ stretch, data = elasticband)
Residuals:
1 2 3 4 5 6 7
2.107 -0.321 18.000 1.893 -27.786 13.321 -7.214
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -63.57 74.33 -0.86 0.431
stretch 4.55 1.54 2.95 0.032
Various functions are available for extracting information that you might want from the list. This is better than
manipulating the list directly. Examples are:
> coef(elastic.lm)
(Intercept) stretch
-63.571 4.554
> resid(elastic.lm)
1 2 3 4 5 6 7
2.1071 -0.3214 18.0000 1.8929 -27.7857 13.3214 -7.2143
The function most often used to inspect regression output is summary(). It extracts the information that users
are most likely to want. For example, in section 5.1, we had
summary(elastic.lm)
There is a plot method for lm objects that gives the diagnostic information shown in Figure 19.
By default the first, second and fourth plot use the row names to identify the three most extreme residuals. [If
explicit row names are not given for the data frame, then the row numbers are used.]
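For example, the following displays the four default diagnostic plots together on a single page:
par(mfrow=c(2,2))   # Arrange the four diagnostic plots in a 2 by 2 layout
plot(elastic.lm)
par(mfrow=c(1,1))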
The model matrix relates to the part of the model that appears to the right of the equals sign. The straight line
model is
y = a + b x + residual
which we write as
y = 1 × a + x × b + residual
The parameters that are to be estimated are a and b. Fitted values are given by multiplying each column of the
model matrix by its corresponding parameter, i.e. the first column by a and the second column by b, and adding.
Another name is predicted values. The aim is to reproduce, as closely as possible, the values in the y-column.
The residuals are the differences between the values in the y-column and the fitted values. Least squares
regression, which is the form of regression that we describe in this course, chooses a and b so that the sum of
squares of the residuals is as small as possible.
The function model.matrix() prints out the model matrix. Thus:
> model.matrix(distance ~ stretch, data=elasticband)
(Intercept) stretch
1 1 46
2 1 54
3 1 48
4 1 50
5 1 44
6 1 42
7 1 52
attr(,"assign")
[1] 0 1
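As a check, the fitted values can be reproduced by multiplying the model matrix by the vector of coefficients:
> X <- model.matrix(distance ~ stretch, data=elasticband)
> X %*% coef(elastic.lm)   # identical to fitted(elastic.lm)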
The following are the fitted values and residuals that we get with the estimates of a (= -63.6) and b (= 4.55)
that result from least squares regression:

Stretch (mm)   Fitted (ŷ = -63.6 + 4.55 × Stretch)   Observed (y)   Residual (y - ŷ)
46             -63.6 + 4.55 × 46 = 145.7             148            148 - 145.7 = 2.3
54             -63.6 + 4.55 × 54 = 182.1             182            182 - 182.1 = -0.1
48             -63.6 + 4.55 × 48 = 154.8             173            173 - 154.8 = 18.2
50             -63.6 + 4.55 × 50 = 163.9             166            166 - 163.9 = 2.1
44             -63.6 + 4.55 × 44 = 136.6             109            109 - 136.6 = -27.6
42             -63.6 + 4.55 × 42 = 127.5             141            141 - 127.5 = 13.5
52             -63.6 + 4.55 × 52 = 173.0             166            166 - 173.0 = -7.0

Note the use of the symbol ŷ (pronounced y-hat) for predicted values.
We might alternatively fit the simpler (no intercept) model. For this we have
y = x b + e
where e is a random variable with mean 0. The X matrix then consists of a single column, holding the values of x.
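A sketch of how the no-intercept model may be fitted; the -1 in the formula removes the intercept column:
elastic.lm0 <- lm(distance ~ -1 + stretch, data=elasticband)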
Call:
lm(formula = formds, data = elasticband)
Coefficients:
(Intercept) distance
26.3780 0.1395
Note that graphics formulae can be manipulated in exactly the same way as model formulae.
26 The original source is O.L. Davies (1947) Statistical Methods in Research and Production. Oliver and Boyd,
Table 6.1 p. 119.
Figure 20: Scatterplot matrix for the Rubber data frame from the
MASS package.
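A scatterplot matrix such as that in Figure 20 may be obtained thus (a minimal version; the MASS package
must first be attached):
library(MASS)
pairs(Rubber)   # columns are loss, hard and tens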
There is a negative correlation between loss and hardness. We proceed to regress loss on hard and tens.
Rubber.lm <- lm(loss~hard+tens, data=Rubber)
> options(digits=3)
> summary(Rubber.lm)
Call:
lm(formula = loss ~ hard + tens, data = Rubber)
Residuals:
Min 1Q Median 3Q Max
-79.38 -14.61 3.82 19.75 65.98
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 885.161 61.752 14.33 3.8e-14
hard -6.571 0.583 -11.27 1.0e-11
tens -1.374 0.194 -7.07 1.3e-07
In addition to the use of plot.lm(), note the use of termplot(). Figure 21 used the following code:
par(mfrow=c(1,2))
termplot(Rubber.lm, partial=TRUE, smooth=panel.smooth)
par(mfrow=c(1,1))
Figure 21: Plot, obtained with termplot(), showing the contribution of each of the two terms in
the model, at the mean of the contributions for the other term. A smooth curve has, in each
panel, been fitted through the partial residuals. There is a clear suggestion that, at the upper
end of the range, the response is not linear with tensile strength.
> logbooks.lm2<-lm(weight~thick+height,data=logbooks)
> summary(logbooks.lm2)$coef
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.263 3.552 -0.356 0.7303
thick 0.313 0.472 0.662 0.5243
height 2.114 0.678 3.117 0.0124
> logbooks.lm3<-lm(weight~thick+height+width,data=logbooks)
> summary(logbooks.lm3)$coef
27 Data are from McLeod, C. C. (1982) Effect of rates of seeding on barley grown for grain. New Zealand Journal of
Agriculture 10: 133-136. Summary details are in Maindonald, J. H. (1992).
We will need an X-matrix with a column of ones, a column of values of rate, and a column of values of
rate². For this, both rate and I(rate^2) must be included in the model formula.
> seedrates.lm2 <- lm(grain ~ rate+I(rate^2), data=seedrates)
> summary(seedrates.lm2)
Call:
lm(formula = grain ~ rate + I(rate^2), data = seedrates)
Residuals:
1 2 3 4 5
0.04571 -0.12286 0.09429 -0.00286 -0.01429
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 24.060000 0.455694 52.80 0.00036
rate -0.066686 0.009911 -6.73 0.02138
I(rate^2) 0.000171 0.000049 3.50 0.07294
This was a (small) extension of linear models, to handle a specific form of non-linear relationship. Any
transformation can be used to form columns of the model matrix. Thus, an x³ column might be added.
Once the model matrix has been formed, we are limited to taking linear combinations of columns.
The F-value is large, but on this evidence there are too few degrees of freedom to make a totally convincing
case for preferring a quadratic to a line. However the paper from which these data come gives an independent
estimate of the error mean square (0.17² ≈ 0.029, on 35 d.f.) based on 8 replicate results that were averaged to
give each value for number of grains per head. If we compare the change in the sum of squares (0.1607, on 1 df)
with a mean square of 0.17² (35 df), the F-value is now 5.4 on 1 and 35 degrees of freedom, and we have
p = 0.024. The increase in the number of degrees of freedom more than compensates for the reduction in the
F-statistic.
> # However we have an independent estimate of the error mean
> # square. The estimate is 0.17^2, on 35 df.
> 1-pf(0.16/0.17^2, 1, 35)
[1] 0.0244
Finally note that R² was 0.972 for the straight line model. This may seem good, but given the accuracy of these
data it was not good enough! The statistic is an inadequate guide to whether a model is adequate. Even for any
one context, R² will in general increase as the range of the values of the dependent variable increases. (R² is
larger when there is more variation to be explained.) A predictive model is adequate when the standard errors of
predicted values are acceptably small, not when R² achieves some magic threshold.
The extrapolation has deliberately been taken beyond the range of the data, in order to show how the confidence
bounds spread out. Confidence bounds for a fitted line spread out more slowly, but are even less believable!
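A sketch of how such bounds may be obtained, assuming the quadratic fit seedrates.lm2 from above (the
grid of rates is illustrative only, and deliberately extends beyond the data):
new.df <- data.frame(rate = seq(50, 250, by=25))
hat <- predict(seedrates.lm2, newdata=new.df, interval="confidence")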
To formulate this as a regression model, we take kWh as the dependent variable, and the factor insulation as the
explanatory variable.
> insulation <- factor(c(rep("without", 8), rep("with", 7)))
> # 8 without, then 7 with
> kWh <- c(10225, 10689, 14683, 6584, 8541, 12086, 12467,
+ 12669, 9708, 6700, 4307, 10315, 8017, 8162, 8022)
> insulation.lm <- lm(kWh ~ insulation)
> summary(insulation.lm, corr=F)
Call:
lm(formula = kWh ~ insulation)
Residuals:
Min 1Q Median 3Q Max
-4409 -979 132 1575 3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7890 874 9.03 5.8e-07
insulation 3103 1196 2.59 0.022
The p-value is 0.022, which may be taken to indicate (p < 0.05) that we can distinguish between the two types
of houses. But what does the “intercept” of 7890 mean, and what does the value for “insulation” of 3103 mean?
To interpret this, we need to know that the factor levels are, by default, taken in alphabetical order, and that the
initial level is taken as the baseline. So with comes before without, and with is the baseline. Hence:
Average for insulated houses = 7890.
To get the estimate for uninsulated houses, take 7890 + 3103 = 10993.
The standard error of the difference is 1196.
28 Data are from Hand, D. J., Daly, F., Lunn, A. D. and Ostrowski, E., eds. (1994). A Handbook of Small Data Sets.
Chapman and Hall.
Type
model.matrix(kWh~insulation)
Another possibility is to use what are called the “sum” contrasts. With the “sum” contrasts the baseline is the
mean over all factor levels. The effect for the first level is omitted; the user has to calculate it as minus the sum
of the remaining effects. Here is the output from use of the `sum’ contrasts29:
> options(contrasts = c("contr.sum", "contr.poly"), digits = 2)
# Try the `sum’ contrasts
> insulation <- factor(insulation, levels=c("without", "with"))
# Make `without' the baseline
> insulation.lm <- lm(kWh ~ insulation)
> summary(insulation.lm, corr=F)
Call:
lm(formula = kWh ~ insulation)
Residuals:
Min 1Q Median 3Q Max
-4409 -979 132 1575 3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 9442 598 15.78 7.4e-10
insulation 1551 598 2.59 0.022
29 The second string element, i.e. "contr.poly", is the default setting for factors with ordered levels. [Use the function
ordered() to create ordered factors.]
Also available are the helmert contrasts. These are not at all intuitive and rarely helpful, even though S-PLUS
uses them as the default. Novices should avoid them30.
Call:
lm(formula = logheart ~ logweight, data = dolphins)
Residuals:
Min 1Q Median 3Q Max
-0.15874 -0.08249 0.00274 0.04981 0.21858
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.325 0.522 2.54 0.024
logweight 1.133 0.133 8.52 6.5e-07
30 In general, use either the treatment contrasts or the sum contrasts. With the sum contrasts the baseline is the overall
mean.
Enter summary(cet.lm2) to get an output summary, and plot(cet.lm2) to plot diagnostic information for
this model.
For model C, the statement is:
> cet.lm3 <- lm(logheart ~ factor(species) + logweight +
factor(species):logweight, data=dolphins)
By default, R uses the treatment contrasts for factors, i.e. the first level is taken as the baseline or reference
level. A useful function is relevel(). The parameter ref can be used to set the level that you want as the
reference level.
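For example, a minimal sketch using the insulation factor from earlier in this chapter:
insulation <- relevel(insulation, ref="without")   # "without" becomes the reference level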
Call:
aov(formula = weight ~ group)
Residuals:
Min 1Q Median 3Q Max
-1.0710 -0.4180 -0.0060 0.2627 1.3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0320 0.1971 25.527 <2e-16
grouptrt1 -0.3710 0.2788 -1.331 0.1944
grouptrt2 0.4940 0.2788 1.772 0.0877
> help(cabbages)
> data(cabbages) # From the MASS package
> names(cabbages)
[1] "Cult" "Date" "HeadWt" "VitC"
> coplot(HeadWt~VitC|Cult+Date,data=cabbages)
Examination of the plot suggests that cultivars differ greatly in the variability in head weight. Variation in the
vitamin C levels seems relatively consistent between cultivars.
> VitC.aov<-aov(VitC~Cult+Date,data=cabbages)
> summary(VitC.aov)
Df Sum Sq Mean Sq F value Pr(>F)
Cult 1 2496.15 2496.15 53.0411 1.179e-09
Date 2 909.30 454.65 9.6609 0.0002486
Residuals 56 2635.40 47.06
Error: block:shade
Df Sum Sq Mean Sq F value Pr(>F)
block 2 172.35 86.17 4.1176 0.074879
shade 3 1394.51 464.84 22.2112 0.001194
Residuals 6 125.57 20.93
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 36 438.58 12.18
> coef(kiwishade.aov)
(Intercept) :
(Intercept)
96.5327
block:shade :
blocknorth blockwest shadeAug2Dec shadeDec2Feb shadeFeb2May
0.993125 -3.430000 3.030833 -10.281667 -7.428333
Within :
numeric(0)
5.9 Exercises
1. Here are two sets of data that were obtained with the same apparatus, including the same rubber band, as the data
frame elasticband. For the data set elastic1, the values are:
stretch (mm): 46, 54, 48, 50, 44, 42, 52
distance (cm): 183, 217, 189, 208, 178, 150, 249.
For the data set elastic2, the values are:
stretch (mm): 25, 45, 35, 40, 55, 50, 30, 50, 60
distance (cm): 71, 196, 127, 187, 249, 217, 114, 228, 291.
Using a different symbol and/or a different colour, plot the data from the two data frames elastic1 and
elastic2 on the same graph. Do the two sets of results appear consistent?
31 Data relate to the paper: Snelgar, W.P., Manson, P.J., Martin, P.J. 1992. Influence of time of shading on flowering and
yield of kiwifruit vines. Journal of Horticultural Science 67: 481-487.
Further details, including a diagram showing the layout of plots and vines and details of shelter, are in Maindonald (1992).
The two papers have different shorthands (e.g. Sept-Nov versus Aug-Dec) for describing the time periods for which the
shading was applied.
For each of the data sets elastic1 and elastic2, determine the regression of stretch on distance. In each
case determine (i) fitted values and standard errors of fitted values and (ii) the R2 statistic. Compare the two sets
of results. What is the key difference between the two sets of data?
Using the data frame beams (in the data sets accompanying these notes), carry out a regression of strength on
SpecificGravity and Moisture. Carefully examine the regression diagnostic plot, obtained by supplying
the name of the lm object as the first parameter to plot(). What does this indicate?
Using the data frame cars (in the datasets package), plot distance (i.e. stopping distance) versus speed. Fit
a line to this relationship, and plot the line. Then try fitting and plotting a quadratic curve. Does the quadratic
curve give a useful improvement to the fit? If you have studied the dynamics of particles, can you find a theory
that would tell you how stopping distance might change with speed?
5. Using the data frame hills (in package MASS), regress time on distance and climb. What can you learn
from the diagnostic plots that you get when you plot the lm object? Try also regressing log(time) on
log(distance) and log(climb). Which of these regression equations would you prefer?
Use the method of section 5.7 to determine, formally, whether one needs different regression lines for the two
data frames elastic1 and elastic2.
In section 5.7 check the form of the model matrix (i) for fitting two parallel lines and (ii) for fitting two arbitrary
lines when one uses the sum contrasts. Repeat the exercise for the helmert contrasts.
6. Type
hosp <- rep(c("RNC","Hunter","Mater"), 2)
hosp
fhosp <- factor(hosp)
levels(fhosp)
Now repeat the steps involved in forming the factor fhosp, this time keeping the factor levels in the order RNC,
Hunter, Mater.
Use contrasts(fhosp) to form and print out the matrix of contrasts. Do this using helmert contrasts,
treatment contrasts, and sum contrasts. Using an outcome variable
y <- c(2,5,8,10,3,9)
fit the model lm(y~fhosp), repeating the fit for each of the three different choices of contrasts. Comment on
what you get.
For which choice(s) of contrasts do the parameter estimates change when you re-order the factor levels?
In the data set cement (MASS package), examine the dependence of y (amount of heat produced) on x1, x2, x3
and x4 (which are proportions of four constituents). Begin by examining the scatterplot matrix. As the
explanatory variables are proportions, do they require transformation, perhaps by taking log(x/(100-x))? What
alternative strategies might one use to find an effective prediction equation?
In the data set pressure (datasets package), examine the dependence of pressure on temperature.
[Transformation of temperature makes sense only if one first converts to degrees Kelvin. Consider
transformation of pressure. A logarithmic transformation is too extreme; the direction of the curvature changes.
What family of transformations might one try?]
Modify the code in section 5.5.3 to fit: (a) a line, with accompanying 95% confidence bounds, and (b) a cubic
curve, with accompanying 95% pointwise confidence bounds. Which of the three possibilities (line, quadratic,
cubic) is most plausible? Can any of them be trusted?
*Repeat the analysis of the kiwishade data (section 5.8.2), but replacing Error(block:shade) with
block:shade. Comment on the output that you get from summary(). To what extent is it potentially
misleading? Also do the analysis where the block:shade term is omitted altogether. Comment on that
analysis.
5.10 References
Atkinson, A. C. 1986. Comment: Aspects of diagnostic regression analysis. Statistical Science 1, 397–402.
Atkinson, A. C. 1988. Transformations Unmasked. Technometrics 30: 311-318.
Cook, R. D. and Weisberg, S. 1999. Applied Regression including Computing and Graphics. Wiley.
Dehejia, R.H. and Wahba, S. 1999. Causal effects in non-experimental studies: re-evaluating the evaluation of training
programs. Journal of the American Statistical Association 94: 1053-1062.
Harrell, F. E., Lee, K. L., and Mark, D. B. 1996. Tutorial in Biostatistics. Multivariable Prognostic Models: Issues in
Developing Models, Evaluating Assumptions and Adequacy, and Measuring and Reducing Errors. Statistics in Medicine
15: 361-387.
Lalonde, R. 1986. Evaluating the economic evaluations of training programs. American Economic Review 76: 604-620.
Maindonald J H 1992. Statistical design, analysis and presentation issues. New Zealand Journal of Agricultural
Research 35: 121-141.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
Venables, W. N. and Ripley, B. D., 4th edn 2002. Modern Applied Statistics with S. Springer, New York.
We now look (Figure 23) at particular views of the data that we get from a principal components analysis:
possum.prc <- princomp(log(possum[here,6:14])) # Principal components
# Plot scores on second pc versus scores on first pc,
# by populations and sex, identified by site
xyplot(possum.prc$scores[,2] ~
possum.prc$scores[,1]|possum$Pop[here]+possum$sex[here], groups=possum$site,
auto.key=list(columns=3))
32 Data relate to the paper: Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995.
Morphological variation among populations of the mountain brushtail possum, Trichosurus caninus Ogilby
(Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-458.
33 References are at the end of the chapter.
The singular values are the ratio of between to within group sums of squares, for the canonical variates in turn.
Clearly canonical variates after the third will have little if any discriminatory power. One can use
predict.lda() to get (among other information) scores on the first few canonical variates.
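Here is a sketch of such a calculation. The lda() call is an assumption, since the fit that generated the output
discussed above falls on an omitted page; the measurement columns are those used in the principal components
analysis:
library(MASS)
possum.lda <- lda(site ~ hdlngth + skullw + totlngth + taill + footlgth +
                  earconch + eye + chest + belly, data=possum[here, ])
possum.scores <- predict(possum.lda)$x   # scores on the canonical variates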
Note that there may be interpretative advantages in taking logarithms of biological measurement data. The
standard against which patterns of measurement are commonly compared is that of allometric growth, which
implies a linear relationship between the logarithms of the measurements. Differences between different sites
are then indicative of different patterns of allometric growth. The reader may wish to repeat the above analysis,
but working with the logarithms of measurements.
Where there are two groups, logistic regression is often effective. A source of code for handling more general
supervised classification problems is Hastie and Tibshirani’s mda (mixture discriminant analysis) package.
There is a brief overview of this package in the Venables and Ripley `Complements’, referred to in section 13.2.
Tree-based models, also known as “Classification and Regression Trees” (CART), may be suitable for
regression and classification problems when there are extensive data. One advantage of such methods is that
they automatically handle non-linearity and interactions. Output includes a “decision tree” that is immediately
useful for prediction.
library(rpart)
library(MASS)   # the forensic glass data frame fgl is in the MASS package
data(fgl)       # Forensic glass fragment data
glass.tree <- rpart(type ~ RI+Na+Mg+Al+Si+K+Ca+Ba+Fe, data=fgl)
plot(glass.tree); text(glass.tree)
summary(glass.tree)
To use these models effectively, you also need to know about approaches to pruning trees, and about cross-
validation. Methods for reduction of tree complexity that are based on significance tests at each individual node
(i.e. branching point) typically choose trees that over-predict.
The Atkinson and Therneau rpart (recursive partitioning) package is closer to CART than is the S-PLUS tree
library. It integrates cross-validation with the algorithm for forming trees.
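As a sketch, the cross-validation results that rpart computes can be inspected, and used to choose a pruning
point (the cp value shown is illustrative only):
printcp(glass.tree)                        # cross-validated error for each complexity parameter (cp)
glass.tree2 <- prune(glass.tree, cp=0.02)  # prune back to the chosen complexity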
6.5 Exercises
1. Using the data set painters (MASS package), apply principal components analysis to the scores for
Composition, Drawing, Colour, and Expression. Examine the loadings on the first three principal
components. Plot a scatterplot matrix of the first three principal components, using different colours or symbols
to identify the different schools.
2. The data set Cars93 is in the MASS package. Using the columns of continuous or ordinal data, determine
scores on the first and second principal components. Investigate the comparison between (i) USA and non-USA
cars, and (ii) the six different types (Type) of car. Now create a new data set in which binary factors become
columns of 0/1 data, and include these in the principal components analysis.
3. Repeat the calculations of exercises 1 and 2, but this time using the function lda() from the MASS package
to derive canonical discriminant scores, as in section 6.3.
4. The MASS package has the Aids2 data set, containing de-identified data on the survival status of patients
diagnosed with AIDS before July 1 1991. Use tree-based classification (rpart()) to identify major influences
on survival.
5. Investigate discrimination between plagiotropic and orthotropic species in the data set leafshape34.
6.6 References
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books
and Software, Pacific Grove CA.
Friedman, J., Hastie, T. and Tibshirani, R. (1998). Additive logistic regression: A statistical view of boosting. Available
from the internet.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
Ripley, B. D. 1996. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge UK.
Therneau, T. M. and Atkinson, E. J. 1997. An Introduction to Recursive Partitioning Using the RPART Routines. This is
one of two documents included in: http://www.stats.ox.ac.uk/pub/SWin/rpartdoc.zip
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
34 Data relate to the paper: King, D.A. and Maindonald, J.H. 1999. Tree architecture in relation to leaf
dimensions and tree stature in temperate and tropical rain forests. Journal of Ecology 87: 1012-1024.
7.1 Vectors
Recall that vectors may have mode logical, numeric or character.
If instead one wants four 2s, then four 3s, then four 5s, enter rep(c(2,3,5), c(4,4,4)).
> rep(c(2,3,5),c(4,4,4)) # An alternative is rep(c(2,3,5), each=4)
[1] 2 2 2 2 3 3 3 3 5 5 5 5
Note further that, in place of c(4,4,4) we could write rep(4,3). So a further possibility is that in place of
rep(c(2,3,5), c(4,4,4)) we could enter rep(c(2,3,5), rep(4,3)).
In addition to the above, note that the function rep() has an argument length.out, meaning “keep on
repeating the sequence until the length is length.out.”
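For example:
> rep(c(2,3,5), length.out=8)   # stop after 8 elements
[1] 2 3 5 2 3 5 2 3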
WARNING: This is chiefly for those who may move between R and S-PLUS. In important respects, R’s
behaviour with missing values is more intuitive than that of S-PLUS. Thus in R
y[x>2] <- x[x>2]
gives the result that the naïve user might expect, i.e. replace elements of y with corresponding elements of x
wherever x>2. Wherever x>2 gives the result NA, no action is taken. In R, any NA in x>2 yields a value of NA
for y[x>2] on the left of the equation, and a value of NA for x[x>2] on the right of the equation.
In S-PLUS, the result on the right is the same, i.e. an NA. However, on the left, elements that have a subscript
NA drop out. The vector on the left to which values will be assigned has, as a result, fewer elements than the
vector on the right.
Thus the following has the effect in R that the naïve user might expect, but not in S-PLUS:
x <- c(1,6,2,NA,10)
y <- c(1,4,2,3,0)
y[x>2] <- x[x>2]
y
The safe way, in both S-PLUS and R, is to use !is.na(x) to limit the selection, on one or both sides as
necessary, to those elements of x that are not NAs. We will have more to say on missing values in the section on
data frames that now follows.
The first column holds the row labels, which in this case are the numbers of the rows that have been extracted.
In place of c("variety","yield") we could have written, more simply, c(2,4).
The default Windows distribution includes many commonly required packages. Other packages must be
explicitly installed. For remaining sections of these notes, the MASS package, which comes with the default
distribution, will be used from time to time.
The base package, and several other packages, are automatically attached at the beginning of the session. To
attach any other installed package, use the library() command.
Then
primates <- read.table("a:/primates.txt")
will create the data frame primates, from a file on the a: drive. The text strings in the first column will
become the first column in the data frame.
Suppose that primates is a data frame with three columns – species name, body weight, and brain weight. You
can give the columns names by typing in:
names(primates) <- c("Species","Bodywt","Brainwt")
Specify header=TRUE if there is an initial row of header information. If the number of headers is one less than
the number of columns of data, then the first column will be used, providing entries are unique, for row labels.
7.4.1 Idiosyncrasies
The function read.table() is straightforward for reading in rectangular arrays of data that are entirely
numeric. When, as in the above example, one of the columns contains text strings, the column is by default
stored as a factor with as many different levels as there are unique text strings35.
Problems may arise when small mistakes in the data cause R to interpret a column of supposedly numeric data
as character strings, which are automatically turned into factors. For example there may be an O (oh)
somewhere where there should be a 0 (zero), or an el (l) where there should be a one (1). If you use any missing
value symbols other than the default (NA), you need to make this explicit; see the discussion of na.strings
below. Otherwise any appearance of such symbols as *, period (.) and blank (in a case where the separator is
something other than a space) will cause the whole column to be treated as character data.
Users who find this default behaviour of read.table() confusing may wish to use the parameter setting
as.is = TRUE36. If the column is later required for use as a factor in a model or graphics formula, it may be
necessary to make it into a factor at that time. Some functions do this conversion automatically.
The function read.table() expects missing values to be coded as NA, unless you set na.strings to
recognise other characters as missing value indicators. If you have a text file that has been output from SAS,
you will probably want to set na.strings=c(".").
There may be multiple missing value indicators, e.g. na.strings=c("NA", ".", "*", ""). The "" will
ensure that empty cells are entered as NAs.
35 Storage of columns of character strings as factors is efficient when a small number of distinct strings that are of modest
length are each repeated a large number of times.
36 Specifying as.is = T prevents columns of (intended or unintended) character strings from being converted into
factors.
37 One way to get mixed text and numeric data across from Excel is to save the worksheet in a .csv text file with comma
as the separator. If for example the file name is myfile.csv and is on drive a:, use
read.table("a:/myfile.csv", sep=",") to read the data into R. This copes with any spaces which may appear
in text strings. [But watch that none of the cell entries include commas.]
38 Factors are vectors which have mode numeric and class “factor”. They have an attribute levels that holds the level
names.
Printing the contents of the column with the name country gives the names, not the integer values. As in most
operations with factors, R does the translation invisibly. There are though annoying exceptions that can make
the use of factors tricky. To be sure of getting the country names, specify
as.character(islandcities$country)
By default, R sorts the level names in alphabetical order. If we form a table that has the number of times that
each country appears, this is the order that is used:
> table(islandcities$country)
Australia Cuba Indonesia Japan Philippines Taiwan United Kingdom
3 1 4 6 2 1 2
This order of the level names is purely a convenience. We might prefer countries to appear in order of latitude,
from North to South. We can change the order of the level names to reflect this desired order:
> lev <- levels(islandcities$country)
> lev[c(7,4,6,2,5,3,1)]
[1] "United Kingdom" "Japan" "Taiwan" "Cuba"
[5] "Philippines" "Indonesia" "Australia"
> country <- factor(islandcities$country, levels=lev[c(7,4,6,2,5,3,1)])
> table(country)
United Kingdom Japan Taiwan Cuba Philippines Indonesia Australia
2 6 1 1 2 4 3
In ordered factors, i.e. factors with ordered levels, there are inequalities that relate factor levels.
Factors have the potential to cause a few surprises, so be careful! Here are two points to note:
When a vector of character strings becomes a column of a data frame, R by default turns it into a factor.
Enclose the vector of character strings in the wrapper function I() if it is to remain character.
There are some contexts in which factors become numeric vectors. To be sure of getting the vector of text
strings, specify e.g. as.character(country).
To extract the numeric levels 1, 2, 3, …, specify as.numeric(country).
Later we will meet the notion of inheritance. Ordered factors inherit the attributes of factors, and have a further
ordering attribute. When you ask for the class of an object, you get details both of the class of the object, and of
any classes from which it inherits. Thus:
> class(ordf.stress)
[1] "ordered" "factor"
7.7 Lists
Lists make it possible to collect an arbitrary set of R objects together under a single name. You might for
example collect together vectors of several different modes and lengths, scalars, matrices or more general
arrays, functions, etc. Lists can be, and often are, a rag-tag of different objects. We will use for illustration the
list object that R creates as output from an lm calculation.
For example, consider the linear model (lm) object elastic.lm (c. f. sections 1.1.4 and 2.1.4) created thus:
elastic.lm <- lm(distance~stretch, data=elasticband)
It is readily verified that elastic.lm consists of a variety of different kinds of objects, stored as a list. You
can get the names of these objects by typing in
> names(elastic.lm)
[1] "coefficients" "residuals" "effects" "rank"
[5] "fitted.values" "assign" "qr" "df.residual"
[9] "xlevels" "call" "terms" "model"
places columns of xx, in order, into the vector x. In the example above, we get back the elements 1, 2, . . . , 6.
Matrices have the attribute “dimension”. Thus
> dim(xx)
[1] 2 3
In fact a matrix is a vector (numeric or character) whose dimension attribute has length 2.
Now set
> x34 <- matrix(1:12,ncol=4)
> x34
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
The dimnames() function assigns and/or extracts matrix row and column names. The dimnames() function
gives a list, in which the first list element is the vector of row names, and the second list element is the vector of
column names. This generalises in the obvious way for use with arrays, which we now discuss.
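For example, with the matrix x34 from above:
> dimnames(x34) <- list(c("r1","r2","r3"), c("c1","c2","c3","c4"))
> x34["r2", "c3"]   # same element as x34[2, 3]
[1] 8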
7.8.1 Arrays
The generalisation from a matrix (2 dimensions) to allow > 2 dimensions gives an array. A matrix is a 2-
dimensional array.
Consider a numeric vector of length 24. So that we can easily keep track of the elements, we will make them 1,
2, .., 24. Thus
x <- 1:24
Then
dim(x) <- c(2,12)
Now try
> dim(x) <-c(3,4,2)
> x
, , 1
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
, , 2
[,1] [,2] [,3] [,4]
[1,] 13 16 19 22
[2,] 14 17 20 23
[3,] 15 18 21 24
7.9 Exercises
Generate the numbers 101, 102, …, 112, and store the result in the vector x.
Generate four repeats of the sequence of numbers (4, 6, 3).
Generate the sequence consisting of eight 4s, then seven 6s, and finally nine 3s. Store the numbers obtained, in
order, in the columns of a 6 by 4 matrix.
Create a vector consisting of one 1, then two 2’s, three 3’s, etc., and ending with nine 9’s.
For each of the following calculations, what would you expect? Check to see if you were right!
In the built-in data frame airquality (datasets package): (a) Determine, for each of the columns of the data
frame airquality (datasets package), the median, mean, upper and lower quartiles, and range; (b) Extract the
row or rows for which Ozone has its maximum value; (c) extract the vector of values of Wind for values of
Ozone that are above the upper quartile.
Refer to the Eurasian snow data that is given in Exercise 1.6. Find the mean of the snow cover (a) for the odd-
numbered years and (b) for the even-numbered years.
Determine which columns of the data frame Cars93 (MASS package) are factors. For each of these factor
columns, print out the levels vector. Which of these are ordered factors?
Use summary() to get information about data in the data frames attitude (datasets package) and cpus
(MASS package). Write brief notes, for each of these data sets, on what this reveals.
From the data frame mtcars (datasets package) extract a data frame mtcars6 that holds only the information
for cars with 6 cylinders.
From the data frame Cars93 (MASS package), extract a data frame which holds only information for small and
sporty cars.
8. Functions
Numeric vectors will be sorted in numerical order. Character vectors will be sorted in alphanumeric order.
The operator %in% can be highly useful in picking out subsets of data. For example:
> x <- rep(1:5,rep(3,5))
> x
[1] 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
> two4 <- x %in% c(2,4)
> two4
[1] FALSE FALSE FALSE TRUE TRUE TRUE FALSE FALSE FALSE TRUE
[11] TRUE TRUE FALSE FALSE FALSE
> # Now pick out the 2s and the 4s
> x[two4]
[1] 2 2 2 4 4 4
To find the position at which the first space appears, we might do the following:
nblank <- sapply(Cars93$Make, function(x){n <- nchar(x);
a <- substring(x, 1:n, 1:n); m <- match(" ", a,nomatch=1); m})
8.4.1 apply()
The function apply() can be used on data frames as well as matrices. Here is an example:
> apply(airquality,2,mean) # All elements must be numeric!
Ozone Solar.R Wind Temp Month Day
NA NA 9.96 77.88 6.99 15.80
> apply(airquality,2,mean,na.rm=T)
Ozone Solar.R Wind Temp Month Day
42.13 185.93 9.96 77.88 6.99 15.80
The use of apply(airquality,1,mean) will give means for each row. These are not, for these data, useful
information!
8.4.2 sapply()
The function sapply() can be useful for getting information about the columns of a data frame. Here we use it
to count the number of missing values in each column of the built-in data frame airquality.
> sapply(airquality, function(x)sum(is.na(x)))
Ozone Solar.R Wind Temp Month Day
37 7 0 0 0 0
Here are several further examples that use the data frame moths that accompanies these notes:
> sapply(moths,is.factor) # Determine which columns are factors
meters A P habitat
FALSE FALSE FALSE TRUE
> # How many levels does each factor have?
> sapply(moths, function(x)if(!is.factor(x))return(0) else length(levels(x)))
meters A P habitat
0 0 0 8
The syntax for tapply() is similar, except that the name of the second argument is INDEX rather than by.
The output is an array with as many dimensions as there are factors. Where there are no data values for a
particular combination of factor levels, NA is returned.
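For example, a minimal call that uses the built-in airquality data frame:
tapply(airquality$Ozone, INDEX=airquality$Month, FUN=mean, na.rm=TRUE)
   # mean ozone level for each separate month; na.rm=TRUE is passed on to mean()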
We now add a column that has the abbreviations to the data frame. Here our demands are simple, and we can
proceed thus:
new.Cars93 <- merge(x=Cars93,y=Cars93.summary[,4,drop=F],
by.x="Type",by.y="row.names")
This creates a data frame that has the abbreviations in the additional column with name “abbrev”.
If there had been rows with missing values of Type, these would have been omitted from the new data frame.
This can be avoided by making sure that Type has NA as one of its levels, in both data frames.
8.8 Dates
Since version 1.9.0, the date package has been superseded by functions for working with dates that are in R
base. See help(Dates), help(as.Date) and help(format.Date) for detailed information.
Use as.Date() to convert text strings into dates. The default is that the year comes first, then the month, and
then the day of the month, thus:
> # Electricity Billing Dates
> dd <- as.Date(c("2003/08/24","2003/11/23","2004/02/22","2004/05/23"))
> diff(dd)
Time differences of 91, 91, 91 days
Use format() to set or change the way that a date is formatted. The following are a selection of the symbols
used:
%d: day, as number
%a: abbreviated weekday name (%A: unabbreviated)
%m: month, as number (01-12)
%b: month abbreviated name ( %B: unabbreviated)
%y: final two digits of year ( %Y: all four digits)
The default format is "%Y-%m-%d".
The function as.Date() will take a vector of character strings that has an appropriate format, and convert it
into a dates object. By default, dates are stored using January 1 1970 as origin. This becomes apparent when
as.integer() is used to convert a date into an integer value. Here are examples:
> as.Date("1/1/1960", format="%d/%m/%Y")
[1] "1960-01-01"
> as.Date("1:12:1960",format="%d:%m:%Y")
[1] "1960-12-01"
> as.Date("1960-12-1")-as.Date("1960-1-1")
as.Date("1960-12-1")-as.Date("1960-1-1")
> as.Date("31/12/1960","%d/%m/%Y")
[1] "1960-12-31"
> as.integer(as.Date("1/1/1970","%d/%m/%Y")
[1] 0
> as.integer(as.Date("1/1/2000","%d/%m/%Y"))
[1] 10957
The function format() allows control of the formatting of dates. See help(format.Date).
> dec1 <- as.Date("2004-12-1")
> format(dec1, format="%b %d %Y")
[1] "Dec 01 2004"
> format(dec1, format="%a %b %d %Y")
[1] "Wed Dec 01 2004"
The function returns the value (fahrenheit-32)*5/9. More generally, a function returns the value of the
last statement of the function. Unless the result from the function is assigned to a name, the result is printed.
Here is a function that prints out the mean and standard deviation of a set of numbers:
> mean.and.sd <- function(x=1:10){
+ av <- mean(x)
+ sd <- sqrt(var(x))
+ c(mean=av, SD=sd)
+ }
>
> # Now invoke the function
> mean.and.sd()
mean SD
5.500000 3.027650
> mean.and.sd(hills$climb)
mean SD
1815.314 1619.151
Earlier, we encountered the function sapply() that can be used to repeat a calculation on all columns of a
data frame. [More generally, the first argument of sapply() may be a list.] To apply faclev() to all
columns of the data frame moths we can specify
> sapply(moths, faclev)
We can alternatively give the definition of faclev directly as the second argument of sapply, thus
> sapply(moths, function(x)if(!is.factor(x))return(0)
else length(levels(x)))
Finally, we may want to do similar calculations on a number of different data frames. So we create a function
check.df() that encapsulates the calculations. Here is the definition of check.df().
check.df <- function(df=moths)
sapply(df, function(x)if(!is.factor(x))return(0) else
length(levels(x)))
to get the names of objects that have been added since the start of the session.
Break functions up into a small number of sub-functions or “primitives”. Re-use existing functions wherever
possible. Write any new “primitives” so that they can be re-used. This helps ensure that functions contain well-
tested and well-understood components. Watch the r-help electronic mail list (section 13.3) for useful functions
for routine tasks.
Wherever possible, give parameters sensible defaults. Often a good strategy is to use as defaults parameters that
will serve for a demonstration run of the function.
NULL is a useful default where the parameter is mostly not required, but where, if it does appear, it may be
any one of several types of data structure. The test if(!is.null()) then determines whether one needs to
investigate that parameter further.
Structure computations so that it is easy to retrace them. For this reason substantial chunks of code should be
incorporated into functions sooner rather than later.
Structure code to avoid multiple entry of information.
8.9.7 Graphs
Use graphs freely to shed light both on computations and on data. One of R’s big pluses is its tight integration
of computation and graphics.
One can thus write an R function that simulates a student guessing at a True-False test consisting of some
arbitrary number of questions. We leave this as an exercise.
8.10 Exercises
1. Use the round function together with runif() to generate 100 random integers between 0 and 99. Now
look up the help for sample(), and use it for the same purpose.
2. Write a function that will take as its arguments a list of response variables, a list of factors, a data frame, and
a function such as mean or median. It will return a data frame in which each value for each combination of
factor levels is summarised in a single statistic, for example the mean or the median.
3. Determine the number of days, according to R, between the following dates:
January 1 in the year 1700, and January 1 in the year 1800
January 1 in the year 1998, and January 1 in the year 2000
4. The supplied data frame milk has columns four and one. Seventeen people rated the sweetness of each of
two samples of a milk product on a continuous scale from 1 to 7, one sample with four units of additive and the
other with one unit of additive. Here is a function that plots, for each patient, the four result against the one
result, but insisting on the same range for the x and y axes.
plot.one <- function(){
   xyrange <- range(milk)   # Range of all values in the data frame
   par(pin=c(6.75, 6.75))   # Set plotting area = 6.75 in. by 6.75 in.
   plot(one ~ four, data=milk, xlim=xyrange, ylim=xyrange, pch=16)
   abline(0,1)              # Line where four = one
}
Rewrite this function so that, given the name of a data frame and of any two of its columns, it will plot the
second named column against the first named column, showing also the line y=x.
5. Write a function that prints, with their row and column labels, only those elements of a correlation matrix for
which abs(correlation) >= 0.9.
6. Write your own wrapper function for one-way analysis of variance that provides a side by side boxplot of the
distribution of values by groups. If no response variable is specified, the function will generate random normal
data (no difference between groups) and provide the analysis of variance and boxplot information for that.
7. Write a function that computes a moving average of order 2 of the values in a given vector. Apply the
function to the data (in the data set huron that accompanies these notes) for the levels of Lake Huron. Repeat
for a moving average of order 3.
9. Find a way of computing the moving averages in exercise 7 that does not involve the use of a for loop.
10. Create a function to compute the average, variance and standard deviation of 1000 randomly generated
uniform random numbers, on [0,1]. (Compare your results with the theoretical results: the expected value of a
uniform random variable on [0,1] is 0.5, and the variance of such a random variable is 0.0833.)
11. Write a function that generates 100 independent observations on a uniformly distributed random variable on
the interval [3.7, 5.8]. Find the mean, variance and standard deviation of such a uniform random variable. Now
modify the function so that you can specify an arbitrary interval.
12. Look up the help for the sample() function. Use it to generate 50 random integers between 0 and 99,
sampled without replacement. (This means that we do not allow any number to be sampled a second time.)
Now, generate 50 random integers between 0 and 9, with replacement.
13. Write an R function that simulates a student guessing at a True-False test consisting of 40 questions. Find
the mean and variance of the student's answers. Compare with the theoretical values of .5 and .25.
14. Write an R function that simulates a student guessing at a multiple choice test consisting of 40 questions,
where there is chance of 1 in 5 of getting the right answer to each question. Find the mean and variance of the
student's answers. Compare with the theoretical values of .2 and .16.
15. Write an R function that simulates the number of working light bulbs out of 500, where each bulb has a
probability .99 of working. Using simulation, estimate the expected value and variance of the random variable
X, which is 1 if the light bulb works and 0 if the light bulb does not work. What are the theoretical values?
16. Write a function that does an arbitrary number n of repeated simulations of the number of accidents in a
year, plotting the result in a suitable way. Assume that the number of accidents in a year follows a Poisson
distribution. Run the function assuming an average rate of 2.8 accidents per year.
17. Write a function that simulates the repeated calculation of the coefficient of variation (= the ratio of the
standard deviation to the mean), for independent random samples from a normal distribution.
18. Write a function that, for any sample, calculates the median of the absolute values of the deviations from the
sample median.
*19. Generate random samples from normal, exponential, t (2 d.f.), and t (1 d.f.), thus:
a) xn <- rnorm(100)            b) xe <- rexp(100)
c) xt2 <- rt(100, df=2)        d) xt1 <- rt(100, df=1)
Apply the function from exercise 17 to each sample. Compare with the standard deviation in each case.
*20. The vector x consists of the frequencies
5, 3, 1, 4, 6
The first element is the number of occurrences of level 1, the second is the number of occurrences of level 2,
and so on. Write a function that takes any such vector x as its input, and outputs the vector of factor levels, here
1 1 1 1 1 2 2 2 3 . . .
[You’ll need the information that is provided by cumsum(x). Form a vector in which 1’s appear whenever the
factor level is incremented, and which is otherwise zero. . . .]
*21. Write a function that calculates the minimum of a quadratic, and the value of the function at the minimum.
*22. A “between times” correlation matrix has been calculated from data on heights of trees at times 1, 2, 3, 4,
. . . Write a function that calculates the average of the correlations for any given lag.
*23. Given data on trees at times 1, 2, 3, 4, . . ., write a function that calculates the matrix of “average” relative
growth rates over the several intervals.
[The relative growth rate may be defined as (1/w) dw/dt = d(log w)/dt. Hence it is reasonable to calculate the
average over the interval from t1 to t2 as (log w2 − log w1)/(t2 − t1).]
Use lm() to fit multiple regression models. The various other models we describe are, in essence,
generalizations of this model.
Additive Model
y = φ1(x1) + φ2(x2) + ... + φp(xp) + ε
Additive models are a generalization of lm models. In 1 dimension, y = φ1(x1) + ε.
Some of z1 = φ1(x1), z2 = φ2(x2), ..., zp = φp(xp) may be smoothing functions, while others may be the
usual linear model terms. The constant term gets absorbed into one or more of the φs.
Generalized Additive Model
y = g(φ1(x1) + φ2(x2) + ... + φp(xp)) + ε
39 There are various generalizations. Models which have this form may be nested within other models which have this
basic form. Thus there may be `predictions’ and `errors’ at different levels within the total model.
Some of z1 = φ1(x1), z2 = φ2(x2), ..., zp = φp(xp) may be smoothing functions, while others may be the
usual linear model terms. We can transform to get the model y = g(z1 + z2 + ... + zp) + ε.
Notice that even if p = 1, we may still want to retain both φ1(.) and g(.), i.e. y = g(φ1(x1)) + ε. The reason is
that g(.) is a specific function, such as the inverse of the logit function. The function g(.) does as much as it can
of the task of transformation, with φ1(.) doing anything more that seems necessary.
The fitting of spline (bs() or ns()) terms in a linear model or a generalized linear model can be a good
alternative to the use of a full generalized additive model.
The logit or log(odds) function turns expected proportions into values that may range from -∞ to +∞. It is not
satisfactory to use a linear model to predict proportions; the values from the linear model may well lie outside
the range from 0 to 1. It is however in order to use a linear model to predict logit(proportion). The logit
function is an example of a link function.
There are various other link functions that we can use with proportions. One of the commonest is the
complementary log-log function.
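As a sketch, both link functions are easily written as one-line R functions:
logit <- function(p) log(p/(1-p))        # log(odds)
cloglog <- function(p) log(-log(1-p))    # complementary log-log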
                 Alveolar Concentration
nomove      0.8     1     1.2     1.4     1.6     2.5
0             6     4       2       2       0       0
1             1     1       4       4       4       2
Total         7     5       6       6       4       2
_____________________________________________
Table 1: Patients moving (0) and not moving (1), for each of
six different alveolar concentrations.
We fit two models, the logit model and the complementary log-log model. We can fit the models either directly
to the 0/1 data, or to the proportions in Table 1. To understand the output, you need to know about “deviances”.
A deviance has a role very similar to a sum of squares in regression. Thus we have:
40 I am grateful to John Erickson (Anesthesia and Critical Care, University of Chicago) and to Alan Welsh (Centre for
Mathematics & its Applications, Australian National University) for allowing me use of these data.
If individuals respond independently, with the same probability, then we have Bernoulli trials. Justification for
assuming the same probability will arise from the way in which individuals are sampled. While individuals will
certainly be different in their response, the notion is that, each time a new individual is taken, they are drawn at
random from some larger population. Here is the R code:
> anaes.logit <- glm(nomove ~ conc, family = binomial(link = logit),
+ data = anesthetic)
Coefficients:
Value Std. Error t value
(Intercept) -6.47 2.42 -2.68
conc 5.57 2.04 2.72
Correlation of Coefficients:
(Intercept)
conc -0.981
With such a small sample size it is impossible to do much that is useful to check the adequacy of the model.
Try also plot(anaes.logit).
41 I am grateful to Dr Edward Linacre, Visiting Fellow, Geography Department, Australian National University, for
making these data available.
Use methods(summary) to get a list of the summary methods that are available. You may want to mix and
match, e.g. summary.lm() on an aov or glm object. The output may not be what you might expect. So be
careful!
9.9 Exercises
1. Fit a Poisson regression model to the data in the data frame moths that accompanies these notes. Allow
different intercepts for different habitats. Use log(meters) as a covariate.
9.10 References
Dobson, A. J. 1983. An Introduction to Statistical Modelling. Chapman and Hall, London.
Hastie, T. J. and Tibshirani, R. J. 1990. Generalized Additive Models. Chapman and Hall, London.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
McCullagh, P. and Nelder, J. A., 2nd edn., 1989. Generalized Linear Models. Chapman and Hall.
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
Random effects:
Formula: ~1 | block
(Intercept)
StdDev: 2.019373
This was a balanced design, which is why section 5.8.2 could use aov(). We can get an output summary that is
helpful for showing how the error mean squares match up with standard deviation information given above thus:
> intervals(kiwishade.lme)
Approximate 95% confidence intervals
Fixed effects:
lower est. upper
(Intercept) 96.62977 100.202500 103.775232
shadeAug2Dec -1.53909 3.030833 7.600757
shadeDec2Feb -14.85159 -10.281667 -5.711743
shadeFeb2May -11.99826 -7.428333 -2.858410
Random Effects:
Level: block
lower est. upper
sd((Intercept)) 0.5473014 2.019373 7.45086
Level: plot
lower est. upper
sd((Intercept)) 0.3702555 1.478639 5.905037
We are interested in the three sd estimates. By squaring the standard deviations and converting them to
variances we get the information in the following table:
Variance component                        Notes
block      2.019² = 4.076                 Three blocks
plot       1.479² = 2.186                 4 plots per block
residual   3.490² = 12.180                4 vines (subplots) per plot
(within group)
The above gives the information for an analysis of variance table. We have:
Variance component      Mean square for anova table                 d.f.
block       4.076       12.180 + 4 × 2.186 + 16 × 4.076 = 86.14     2  (= 3-1)
plot        2.186       12.180 + 4 × 2.186 = 20.92                  6  (= (3-1) × (4-1))
residual    12.180      12.18                                       36 (= 3 × 4 × (4-1))
(within gp)
Now see where these same pieces of information appeared in the analysis of variance table of section 5.8.2:
> kiwishade.aov<-aov(yield~block+shade+Error(block:shade),data=kiwishade)
> summary(kiwishade.aov)
Error: block:shade
Df Sum Sq Mean Sq F value Pr(>F)
block 2 172.35 86.17 4.1176 0.074879
shade 3 1394.51 464.84 22.2112 0.001194
Residuals 6 125.57 20.93
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 36 438.58 12.18
A reasonable guess is that first order interactions may be all we need, i.e.
it2.lme<-lme(log(it)~(tint+target+agegp+sex)^2,
random=~1|id, data=tinting,method="ML")
Finally, there is the very simple model, allowing only for main effects:
it1.lme<-lme(log(it)~(tint+target+agegp+sex),
random=~1|id, data=tinting,method="ML")
Note that we have fitted all these models by maximum likelihood. This is so that we can do the equivalent of an
analysis of variance comparison. Here is what we get:
> anova(itstar.lme,it2.lme,it1.lme)
Model df AIC BIC logLik Test L.Ratio p-value
itstar.lme 1 26 8.146187 91.45036 21.926906
it2.lme 2 17 -3.742883 50.72523 18.871441 1 vs 2 6.11093 0.7288
it1.lme 3 8 1.138171 26.77022 7.430915 2 vs 3 22.88105 0.0065
42 Data relate to the paper: Burns, N. R., Nettlebeck, T., White, M. and Willson, J. 1999. Effects of car window tinting on
visual performance: a comparison of elderly and young drivers. Ergonomics 42: 428-443.
The model that limits attention to first order interactions is adequate. We will need to examine the first order
interactions individually. For this we re-fit the model used for it2.lme, but now with method="REML".
it2.reml<-update(it2.lme,method="REML")
Random effects:
Formula: ~Run | Expt
Structure: General positive-definite
StdDev Corr
10.3 Exercises
1. Use the function acf() to plot the autocorrelation function of lake levels in successive years in the data set
huron. Do the plots both with type="correlation" and with type="partial".
10.4 References
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books
and Software, Pacific Grove CA.
Diggle, Liang & Zeger 1996. Analysis of Longitudinal Data. Clarendon Press, Oxford.
Everitt, B. S. and Dunn, G. 1992. Applied Multivariate Data Analysis. Arnold, London.
Hand, D. J. & Crowder, M. J. 1996. Practical longitudinal data analysis. Chapman and Hall, London.
Littell, R. C., Milliken, G. A., Stroup, W. W. and Wolfinger, R. D. (1996). SAS Systems for Mixed Models. SAS Institute
Inc., Cary, North Carolina.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
Pinheiro, J. C. and Bates, D. M. 2000. Mixed effects models in S and S-PLUS. Springer, New York.
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
11.1. Methods
R is an object-oriented language. Objects may have a “class”. For functions such as print(), summary(),
etc., the class of the object determines what action will be taken. Thus in response to print(x), R determines
the class attribute of x, if one exists. If for example the class attribute is “factor”, then the function which
finally handles the printing is print.factor(). The function print.default() is used to print objects that
have not been assigned a class.
More generally, the class attribute of an object may be a vector of strings. If there are “ancestor” classes –
parent, grandparent, . . ., these are specified in order in subsequent elements of the class vector. For example,
ordered factors have the class “ordered”, which inherits from the class “factor”. Thus:
> fac<-ordered(1:3)
> class(fac)
[1] "ordered" "factor"
Here fac has the class “ordered”, which inherits from the parent class “factor”.
The function print.ordered(), which is the function that is called when you invoke print() with an
ordered factor, could be rewritten to use the fact that “ordered” inherits from “factor”, thus:
> print.ordered
function (x, quote = FALSE)
{
if (length(x) <= 0)
cat("ordered(0)\n")
else NextMethod("print")
cat("Levels: ", paste(levels(x), collapse = " < "), "\n")
invisible(x)
}
The system version of print.ordered() does not use print.factor(). The function print.glm() does
not call print.lm(), even though glm objects inherit from lm objects. The mechanism is available for use if
required.
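A minimal sketch of the dispatch mechanism, using a made-up class (the class name myclass and the method
are hypothetical):
> x <- 1:5
> class(x) <- "myclass"
> print.myclass <- function(x, ...){
+   cat("myclass object with", length(unclass(x)), "elements\n")
+   invisible(x)
+ }
> x    # auto-printing dispatches to print.myclass()
myclass object with 5 elements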
> extract.arg(a=xy)
[1] "xy"
If the argument is a function, we may want to get at the arguments to the function. Here is how one can do it:
deparse.args <-
function (a)
{
s <- substitute (a)
if(mode(s) == "call"){
For example:
> deparse.args(list(x+y, foo(bar)))
[1] "The function is: list ()"
[[1]]
[1] "x + y"
[[2]]
[1] "foo(bar)"
stores this unevaluated expression in my.exp. The actual contents of my.exp are a little different from what is
printed out. R gives you as much information as it thinks helpful.
Note that expression(mean(x+y)) is different from expression("mean(x+y)"), as is obvious when the
expression is evaluated. A text string is a text string is a text string, unless one explicitly changes it into an
expression or part of an expression.
Let’s see how this works in practice
> x <- 101:110
> y <- 21:30
> my.exp <- expression(mean(x+y))
> my.txt <- expression("mean(x+y)")
> eval(my.exp)
[1] 131
> eval(my.txt)
[1] "mean(x+y)"
What if we already have "mean(x+y)" stored in a text string, and want to turn it into an expression? The
answer is to use the function parse(), but indicate that the parameter is text rather than a file name. Thus:
> parse(text="mean(x+y)")
expression(mean(x + y))
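Putting the two steps together, with x and y as above:
> eval(parse(text="mean(x+y)"))
[1] 131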
Here is a function that creates a new data frame from an arbitrary set of columns of an existing data frame.
Once in the function, we attach the data frame so that we can leave off the name of the data frame, and use only
the column names:
make.new.df <- function(old.df = austpop, colnames = c("NSW", "ACT"))
{
    attach(old.df)
    on.exit(detach(old.df))
    argtxt <- paste(colnames, collapse = ",")
    exprtxt <- paste("data.frame(", argtxt, ")", sep = "")
    expr <- parse(text = exprtxt)
    df <- eval(expr)
    names(df) <- colnames
    df
}
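For example (austpop, the data frame of Australian population figures used earlier in these notes, has columns
that include NSW and ACT):
> new.df <- make.new.df(old.df=austpop, colnames=c("NSW", "ACT"))
> names(new.df)
[1] "NSW" "ACT"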
The function do.call() makes it possible to supply the function name and the argument list in separate text
strings. When do.call() is used, it is only necessary to use parse() in generating the argument list.
For example:
make.new.df <-
function(old.df = austpop, colnames = c("NSW", "ACT"))
{
    attach(old.df)
    on.exit(detach(old.df))
    argtxt <- paste(colnames, collapse = ",")
    listexpr <- parse(text = paste("list(", argtxt, ")", sep = ""))
    df <- do.call("data.frame", eval(listexpr))
    names(df) <- colnames
    df
}
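The same idea can be tried directly at the command line; do.call() constructs and evaluates the call from the
function name and a list of arguments:
> do.call("data.frame", list(NSW=1:2, ACT=3:4))
  NSW ACT
1   1   3
2   2   4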
assign("x", list(...)[[1]])
assign(xname, x)
}
y <- eval(expr)
yexpr <- parse(text=left)[[1]]
xexpr <- parse(text=xname)[[1]]
plot(x, y, ylab = yexpr, xlab = xexpr, type="n")
lines(spline(x,y))
mainexpr <- parse(text=paste(left, "==", right))
title(main = mainexpr)
}
Try
plotcurve()
plotcurve("ang=asin(sqrt(p))", p=(1:49)/50)
12. Appendix 1
The following books and on-line documents may be useful references for work with R:
Burns, P. J. A Guide for the Unwilling S User. [Available from CRAN sites]
[The style is leisurely. However, it assumes some prior knowledge of computing language terms. It may suit
users with some initial knowledge of R.]
Chambers, J. M. 1998. Programming with Data. A Guide to the S Language. Springer-Verlag, New York.
[This is a book for specialists.]
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books and
Software, Pacific Grove CA.
[This is the basic reference on R and S-PLUS model formulae and models.]
Dalgaard, P. 2002. Introductory Statistics with R. Springer, New York.
[This is an R-based introductory text, with a biostatistical emphasis.]
Fox, J. 2002. An R and S-PLUS Companion to Applied Regression. Sage Books.
Maindonald J H and Braun W J 2003. Data Analysis and Graphics Using R – An Example-Based Approach.
Cambridge University Press.
[This is an intermediate level text.]
Krause, A. and Olsen, M. 1997. The Basics of S and S-PLUS. Springer, New York.
[This is an introductory book, at about the same level as Spector.]
Spector, P. 1994. An Introduction to S and S-PLUS. Duxbury Press.
[This is a readable and compact beginner’s guide to the S language.]
Venables, W.N., Smith, D.M. and the R Development Core Team. An Introduction to R. Notes on R: A
Programming Environment for Data Analysis and Graphics.
[A current version is available from CRAN sites. This is derived from an original set of notes written by Bill
Venables and Dave Smith for the S and S-PLUS environments.]
Venables, W. N. and Ripley, B. D., 4th edn 2002. Modern Applied Statistics with S. Springer, NY.
[This has become a text book for the use of S-PLUS and R for applied statistical analysis. It assumes a fair level of
statistical sophistication. Explanation is careful, but often terse. Together with the ‘Complements’ it gives brief
introductions to extensive packages of functions that have been written or adapted by Ripley, Venables, and a number of
other statisticians. Supplementary material (‘Complements’) is available from
http://www.stats.ox.ac.uk/pub/MASS4/.]
Venables, W. N. and Ripley, B. D. 2000. S Programming. Springer, New York.
[This is a terse and careful introduction to the dialects of the S language, including R.]
Answers to Selected Exercises

Section 1.6
1. plot(distance~stretch,data=elasticband)
2. (ii), (iii), (iv)
plot(snow.cover ~ year, data = snow)
hist(snow$snow.cover)
hist(log(snow$snow.cover))
Section 2.7
1. The value of answer is (a) 12, (b) 22, (c) 600.
2. prod(c(10,3:5))
3(i) bigsum <- 0; for (i in 1:100) {bigsum <- bigsum+i }; bigsum
3(ii) sum(1:100)
4(i) bigprod <- 1; for (i in 1:50) {bigprod <- bigprod*i }; bigprod
4(ii) prod(1:50)
5. radius <- 3:20; volume <- 4*pi*radius^3/3
sphere.data <- data.frame(radius=radius, volume=volume)
6. sapply(tinting, is.factor)
sapply(tinting[, 4:6], levels)
sapply(tinting[, 4:6], is.ordered)
Section 3.9
1. plot(Animals$body, Animals$brain, pch=1,
xlab="Body weight (kg)",ylab="Brain weight (g)")
2. plot(log(Animals$body), log(Animals$brain), pch=1,
        xlab="Body weight (kg)", ylab="Brain weight (g)", axes=FALSE)
   brainaxis <- 10^seq(-1,4)
   bodyaxis <- 10^seq(-2,4)
   axis(1, at=log(bodyaxis), labels=bodyaxis)
   axis(2, at=log(brainaxis), labels=brainaxis)
   box()
   identify(log(Animals$body), log(Animals$brain), labels=row.names(Animals))
3. par(mfrow = c(1,2)), etc.
Section 7.9
1. x <- seq(101,112) or x <- 101:112
2. rep(c(4,6,3),4)
3. c(rep(4,8),rep(6,7),rep(3,9)) or rep(c(4,6,3),c(8,7,9))
mat64 <- matrix(c(rep(4,8),rep(6,7),rep(3,9)), nrow=6, ncol=4)
4. rep(seq(1,9),seq(1,9)) or rep(1:9, 1:9)
6. (a) Use summary(airquality) to get this information.
(b) airquality[which.max(airquality$Ozone), ]   # Ozone has missing values; which.max() ignores them
(c) airquality$Wind[!is.na(airquality$Ozone) &
       airquality$Ozone > quantile(airquality$Ozone, .75, na.rm=TRUE)]
7. mean(snow$snow.cover[seq(2,10,2)])
mean(snow$snow.cover[seq(1,9,2)])
9. summary(attitude); summary(cpus)
Comment on ranges of values, whether distributions seem skewed, etc.
10. mtcars6<-mtcars[mtcars$cyl==6,]
11. Cars93[Cars93$Type=="Small" | Cars93$Type=="Sporty", ]