Using R For Data Analysis and Graphics: An Introduction
J H Maindonald
Statistical Consulting Unit of the Graduate School, Australian National University
© J. H. Maindonald 2001. A licence is granted for personal study and classroom use. Redistribution in any other form is prohibited.
Languages shape the way we think, and determine what we can think about (Benjamin Whorf).
25 June 2001
[Cover figure: panels of possum measurements (tail length, foot length, ear conch length) for female and male possums at Bulburin.]
Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological variation among populations of the mountain brushtail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-459.
possum n. 1 Any of many chiefly herbivorous, long-tailed, tree-dwelling, mainly Australian marsupials, some of which are gliding animals (e.g. brush-tailed possum, flying possum). 2 a mildly scornful term for a person. 3 an affectionate mode of address.
From the Australian Oxford Paperback Dictionary, 2nd ed, 1996.
Contents
Introduction
1. Starting Up
   1.1 Getting started under Windows
   1.2 Using the Console (or Command Line) Window
   1.3 A Short R Session
   1.4 Further Notational Details
   1.5 On-line Help
   1.6 Exercise
2. An Overview of R
   2.1 The Uses of R
   2.2 The Look and Feel of R
   2.3 R Objects
   *2.4 Looping
   2.5 R Functions
   2.6 Vectors
   2.7 Data Frames
   2.8 Common Useful Functions
   2.9 Making Tables
   2.10 The R Directory Structure
   2.11 More Detailed Information
   2.11 Exercises
3. Plotting
   3.1 plot() and allied functions
   3.2 Fine control - Parameter settings
   3.3 Adding points, lines and text
   3.4 Identification and Location on the Figure Region
   3.5 Plots that show the distribution of data values
   3.6 Other Useful Plotting Functions
   3.7 Plotting Mathematical Symbols
   3.8 Guidelines for Graphs
   3.9 Exercises
   3.10 References
4. Lattice graphics, and coplot()
   4.1 Examples that Present Panels of Scatterplots - Using xyplot()
   4.2 Using coplot()
   4.3 Exercises
5. Linear (Multiple Regression) Models and Analysis of Variance
   5.1 The Model Formula in Straight Line Regression
   5.2 Regression Objects
   5.3 Model Formulae, and the X Matrix
   5.4 Multiple Linear Regression Models
   5.5 Polynomial and Spline Regression
   5.6 Using Factors in R Models
   5.7 Multiple Lines - Different Regression Lines for Different Species
   5.8 aov models (Analysis of Variance)
   5.9 Exercises
   5.10 References
6. Multivariate and Tree-Based Methods
   6.1 Multivariate EDA, and Principal Components Analysis
   6.2 Cluster Analysis
   6.3 Discriminant Analysis
   6.4 Decision Tree models (Tree-based models)
   6.5 Exercises
   6.6 References
*7. R Data Structures
   7.1 Vectors
   7.2 Missing Values
   7.3 Data frames
   7.4 Data Entry
   7.5 Factors and Ordered Factors
   7.6 Ordered Factors
   7.7 Lists
   *7.8 Matrices and Arrays
   7.9 Different Types of Attachments
   7.10 Exercises
8. Useful Functions
   8.1 Confidence Intervals and Tests
   8.2 Matching and Ordering
   8.3 String Functions
   8.4 Application of a Function to the Columns of an Array or Data Frame
   *8.5 tapply()
   8.6 Splitting Vectors and Data Frames Down into Lists - split()
   *8.7 Merging Data Frames
   8.8 Dates
   8.9 Exercises
9. Writing Functions and other Code
   9.1 Syntax and Semantics
   9.2 Issues for the Writing and Use of Functions
   9.3 Functions as aids to Data Management
   9.4 A Simulation Example
   9.5 Exercises
*10. GLM, and General Non-linear Models
   10.1 A Taxonomy of Extensions to the Linear Model
   10.2 Logistic Regression
   10.3 glm models (Generalized Linear Regression Modelling)
   10.4 Models that Include Smooth Spline Terms
   10.5 Non-linear Models
   10.6 Model Summaries
   10.7 Further Elaborations
   10.8 Exercises
   10.9 References
*11. Multi-level Models, Time Series and Survival Analysis
   11.1 Multi-Level Models, Including Repeated Measures Models
   11.2 Time Series Models
   11.3 Survival Analysis
   11.4 Exercises
   11.5 References
*12. Advanced Programming Topics
   12.1 Methods
   12.2 Extracting Arguments to Functions
   12.3 Parsing and Evaluation of Expressions
   12.4 Plotting a mathematical expression
   12.4 Searching R functions for a specified token
13. R Resources
   13.1 R Packages for Windows
   13.2 Literature written by expert users
   13.3 The R-help electronic mail discussion list
   13.4 Competing Systems - XLISP-STAT
14. Appendix 1
   14.1 Data Sets Referred to in these Notes
   14.2 Answers to Selected Exercises
Introduction
R implements a dialect of the S language that was developed at AT&T Bell Laboratories by Rick Becker, John Chambers and Allan Wilks. Versions of R are available, at no cost, for 32-bit versions of Microsoft Windows, for Linux, for Unix, and for Macintosh (systems 8.6 or later). It is available through the Comprehensive R Archive Network (CRAN). Web addresses are given below.

The citation for John Chambers' 1998 Association for Computing Machinery Software award stated that S has "forever altered how people analyze, visualize and manipulate data". The R project enlarges on the ideas and insights that generated the S language.

Here are points relating to the use of R that potential users might consider:

1. R has extensive and powerful graphics abilities, which are tightly linked with its analytic abilities.

2. Although there is no official support for R, its informal support network, accessible from the r-help mailing list, can be highly effective.

3. Simple calculations and analyses can be handled straightforwardly, albeit (in the current version) using a command line interface. Chapters 1 and 2 are intended to give the flavour of what is possible without getting deeply into the R language. If simple methods prove inadequate, there can be recourse to the huge range of more advanced abilities that R offers. Adaptation of available abilities allows even greater flexibility.

4. The R community is widely drawn, from application area specialists as well as statistical specialists. It is a community that is sensitive to the potential for misuse of statistical techniques and suspicious of what might appear to be mindless use. Expect scepticism of the use of models that are not susceptible to some minimal form of data-based validation.

5. Because R is free, users have no right to expect attention, on the r-help list or elsewhere, to queries. Be grateful for whatever help is given.

There is no substitute for experience and expert knowledge, even when the statistical analysis task may seem straightforward. Neither R nor any other statistical system will give the statistical expertise that is needed to use sophisticated abilities, or to know when naive methods are not enough. Experience with the use of R is, however, more than with most systems, likely to be an educational experience.

While R is as reliable as any statistical software that is available, and exposed to higher standards of scrutiny than most other systems, there are traps that call for special care. Many of the model fitting routines in R are leading edge. There may be a limited tradition of experience of the limitations and potential pitfalls of some of the newer abilities. Whatever the statistical system, and especially when there is some element of complication, check each step with care.

Hurrah for the R development team!
Look for the nearest CRAN (Comprehensive R Archive Network) mirror site. Australian users may wish to go directly to the site:
http://mirror.aarnet.edu.au/pub/CRAN
The R Project
The initial version of R was developed by Ross Ihaka and Robert Gentleman, both from the University of Auckland. Development of R is now overseen by a 'core team' of about a dozen people, widely drawn from different institutions worldwide. The development model is similar to that of the increasingly popular Linux operating system. Like Linux, R is an open source system. Source-code is available for inspection, or for adaptation to other systems. In principle, if it is unclear what a routine does, one can check the source code. Exposing code to the critical scrutiny of highly expert users has proved an extremely effective way to identify bugs and other inadequacies, and to elicit ideas for enhancement. Reported bugs are commonly fixed in the next minor-minor release, which will usually appear within a matter of weeks. A point and click interface is at an early stage of development.

Users should be aware that R is developing rapidly. Substantial new features appear every few months. As of version 1.2, R has a dynamic memory model. Depending on available computer memory, the processing of a data set containing one hundred thousand observations and perhaps twenty variables may press the limits of what R can reasonably handle.

Novice users will notice small but occasionally important differences between the S dialect that R implements and the commercial S-PLUS implementation of S. Those who write their own substantial functions and (more importantly) libraries will find large differences. Libraries that have been written for R offer abilities that are broadly comparable with, or in some instances go beyond, those in S-PLUS libraries. These give access to up-to-date methodology from leading statistical researchers. R has strong graphics abilities. The recently released beta version of the lattice graphics library gives many of the abilities that are in the S-PLUS trellis library.

R is attractive as a language environment for the development of new scientific computational tools. Computer-intensive components can, if computational efficiency demands, be handled by a call to a function that is written in the C language.

The R-help mailing list is a useful source of advice and help. Be sure to check the available documentation before posting to this list. Archives are available that can be searched for questions that may have been previously answered. The final chapter gives useful web addresses.

_________________________________________________________________________

Jeff Wood (CMIS, CSIRO), Andreas Ruckstuhl (Technikum Winterthur Ingenieurschule, Switzerland) and John Braun (University of Western Ontario) gave me exemplary help in getting the earlier S-PLUS version of this document somewhere near shipshape form. John Braun gave valuable help with proofreading, and provided several of the data sets and a number of the exercises. I take full responsibility for the errors that remain. I am grateful, also, to the various scientists named in the notes who have allowed me to use their data.
1. Starting Up
R must be installed on your system! If it is not, follow the installation instructions appropriate to the operating system. Installation is now especially straightforward for Windows users. Copy down the latest SetupR.exe from the relevant base directory on the nearest CRAN site, click on its icon to start installation, and follow instructions. Libraries that do not come with the base distribution must be downloaded and installed separately.

It pays to have a separate workspace directory for each major project. For more details, see the README file that is included with the R distribution. Users of Microsoft Windows may wish to create a separate icon for each such workspace (see footnote 1). First create the directory that will be used for the new workspace. Then right click|copy to copy an existing R icon, right click|paste to place a copy on the desktop, right click|rename on the copy to rename it (see footnote 2), and then finally go to right click|properties to set the Start in directory to be the workspace directory that was set up earlier.
1.1 Getting started under Windows
Click on the R icon. Or, if there is more than one icon, choose the icon that corresponds to the project that is in hand. For this demonstration I will click on my r-notes icon. In interactive use under Microsoft Windows there are several ways to input commands to R. Figures 1 and 2 demonstrate two of the possibilities; either or both may be used at the user's discretion. For the moment, we will type commands into the command window, at the command line prompt. Fig. 1 shows the command window as it appears when R has just been started, for version 0.90.0. At the time of writing, the latest version is 1.3.0.
Footnotes:
1. This is a shortcut for "right click, then left click on the copy menu item".
2. Enter the name of your choice into the name field. For ease of remembering, choose a name that closely matches the name of the workspace directory.
The screen snapshot in Fig. 2 shows a display file window. This allows input to R of statements from a file that has been set up in advance. To get a display file window, go to the File menu. Then click on Display File. You will be asked for the name of a file whose contents are then displayed in the window. In Fig. 2 the file was rcommands.txt. Highlight the commands that are intended for input to R. Click on the 'Paste to console' icon, on the far left of the display file toolbar in Figs. 2 and 3, to send these commands to R.
Fig. 2: The focus is on an R display file window, with the console window in the background.
Fig. 3: The 'paste to console', 'print', and 'return focus to console' icons.
Under Unix, the standard form of input is the command line interface. Under both Microsoft Windows and Linux (or Unix), a further possibility is to run R from within the emacs editor (see footnote 3). This works much better under Linux/Unix than under Windows. Under Microsoft Windows, an attractive option is to use a utility that is designed for use with the shareware WinEdt editor (see footnote 4).

3. This requires both emacs and the emacs add-on called ESS. Both are free; look under Software|Other on the CRAN web page.
1.2 Using the Console (or Command Line) Window
Fig. 1 showed the console window when it was first opened. The command line prompt, i.e. the >, is an invitation to start typing in your commands. For example, type in 2+2 and press the Enter key. Here is what I get on my screen:
> 2+2
[1] 4
>
Here the result is 4. The [1] says, a little strangely, "first requested element will follow". Here, there is just one element. The > indicates that R is ready for another command.

The exit or quit command is

> q()

Alternatives are to click on the File menu and then on Exit, or to click on the × in the top right hand corner of the R window. There will be a message asking whether to save the workspace image. Clicking Yes (the safe option) will save all the objects that remain in the workspace: any that were there at the start of the session and any that have been added since.
1.3 A Short R Session
We will read into R a file that holds the population figures for Australian states and territories, and the total population, at various times since 1917. We will use information from this file to create a graph. Here is the information in the file:

Year  NSW  Vic.  Qld   SA   WA   Tas.  NT  ACT  Aust.
1917 1904  1409  683  440  306   193    5    3   4941
1927 2402  1727  873  565  392   211    4    8   6182
1937 2693  1853  993  589  457   233    6   11   6836
1947 2985  2055 1106  646  502   257   11   17   7579
1957 3625  2656 1413  873  688   326   21   38   9640
1967 4295  3274 1700 1110  879   375   62  103  11799
1977 5002  3837 2130 1286 1204   415  104  214  14192
1987 5617  4210 2675 1393 1496   449  158  265  16264
1997 6274  4605 3401 1480 1798   474  187  310  18532

The following reads in the data from the file austpop.txt on a disk in drive a:
> austpop <- read.table("a:/austpop.txt", header=T)
The <- is a left angle bracket (<) followed by a minus sign (-). It means "is assigned to". Use of header=T causes R to use the first line to get header information for the columns. If column headings are not included in the file, the argument can be omitted. Now type in austpop at the command line prompt, displaying the object on the screen:
> austpop
  Year  NSW Vic. Qld  SA  WA Tas. NT ACT Aust.
1 1917 1904 1409 683 440 306  193  5   3  4941
2 1927 2402 1727 873 565 392  211  4   8  6182
. . .
We will learn later that austpop is a special form of R object, known as a data frame. Data frames that consist entirely of numeric data have a structure that is similar to that of numeric matrices.
4. The R-WinEdt utility, which is free, is a plugin for WinEdt. For links to the relevant web pages, for WinEdt and R-WinEdt, look under Software|Other on the CRAN web page.
We will now do a plot of the ACT population between 1917 and 1997. We will first of all remind ourselves of the column names:
> names(austpop)
 [1] "Year"  "NSW"   "Vic."  "Qld"   "SA"    "WA"    "Tas."  "NT"
 [9] "ACT"   "Aust."
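The plotting command itself did not survive extraction; a minimal reconstruction, consistent with the description that follows (the exact call may have differed):

plot(ACT ~ Year, data=austpop, pch=16)   # assumed form; the formula interface is used later in these notes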
The option pch=16 sets the plotting character to solid black dots. Fig. 4 shows the graph:
Figure 4: ACT population, at various times between 1917 and 1997. This plot can be improved greatly. We can specify more informative axis labels, change size of the text and of the plotting symbol, and so on.
One can use data.frame() to input these (or other) data directly at the command line. We will give the data frame the name elasticband:

elasticband <- data.frame(stretch=c(46,54,48,50,44,42,52),
                          distance=c(148,182,173,166,109,141,166))
Alternatives to the default separator (white space) are sep="," and sep="\t"; this last choice makes tabs separators. Similarly, users have control over the choice of missing value character or characters, which by default is NA. If the missing value character is a period (.), specify na.strings=".".

R has several variants of read.table() that differ only in having different default parameter settings. Note in particular read.csv(), which has settings that are suitable for comma delimited (csv) files that have been generated from Excel spreadsheets.

If read.table() detects that lines in the input file have different numbers of fields, data input will fail, with an error message that draws attention to the discrepancy. It is then often useful to use the function count.fields() to report the number of fields that were identified on each separate line of the file.
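A brief hedged illustration of both points; the file name a:/austpop.csv is hypothetical:

austpop2 <- read.csv("a:/austpop.csv")       # comma-delimited file, e.g. exported from Excel

# If read.table() fails because lines have differing numbers of fields,
# count.fields() reports the number of fields found on each line:
count.fields("a:/austpop.csv", sep=",")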
1.4 Further Notational Details

As noted earlier, the command line prompt is

>

R commands (expressions) are typed in following this prompt (see footnote 5). There is also a continuation prompt, used when, following a carriage return, the command is still not complete. By default, the continuation prompt is

+
In these notes, we often continue commands over more than one line, but omit the + that will appear on the commands window if the command is typed in as we show it. For the names of R objects or commands, case is significant. Thus Austpop is different from austpop. For file names however, the Microsoft Windows conventions apply, and case does not distinguish file names. On Unix systems letters that have a different case are treated as different. Anything that follows a # on the command line is taken as comment and ignored by R. Note: Recall that, in order to quit from the R session we had to type q(). This is because q is a function. Typing q on its own, without the parentheses, displays the text of the function on the screen. Try it!
1.5 On-line Help
To get a help window (under R for Windows) with a list of help topics, type:
> help()
In R for Windows, an alternative is to click on the help menu item, and then use key words to do a search. To get help on a specific R function, e.g. plot(), type in
> help(plot)
The two search functions help.search() and apropos() can be a huge help in finding what one wants. Examples of their use are:
> help.search("matrix")
This lists all functions whose help pages have a title or alias in which the text string matrix appears.
> apropos("matrix")
This lists all function names that include the text matrix. Experimentation often helps clarify the precise action of an R function.
5. Multiple commands may appear on the one line, with the semicolon (;) as the separator.
1.6 Exercise
1. In the data frame elasticband from section 1.3.1, plot distance against stretch.

2. The following ten observations, taken during the years 1970-79, are on October snow cover for Eurasia. (Snow cover is in millions of square kilometers.)

year  snow.cover
1970    6.5
1971   12.0
1972   14.9
1973   10.0
1974   10.7
1975    7.9
1976   21.9
1977   12.5
1978   14.5
1979    9.2

i. Enter the data into R. [Section 1.3.1 showed one way to do this. To save keystrokes, enter the successive years as 1970:1979.]
ii. Plot snow.cover versus year.
iii. Use the hist() command to plot a histogram of the snow cover values.
iv. Repeat ii and iii after taking logarithms of snow cover.

3. Input the following data, on damage that had occurred in space shuttle launches prior to the disastrous launch of Jan 28 1986. These are the data, for 6 launches out of 24, that were included in the pre-launch charts that were used in deciding whether to proceed with the launch. (Data for the 23 launches where information is available are in the data set orings that accompanies these notes.)

Temperature (F)  Erosion incidents  Blowby incidents  Total incidents
53               3                  2                 5
57               1                  0                 1
63               1                  0                 1
70               1                  0                 1
70               1                  0                 1
75               0                  2                 1
Enter these data into a data frame, with (for example) column names temperature, erosion, blowby and total. (Refer back to Section 1.3.1). Plot total incidents against temperature.
We may for example require information on ranges of variables. Thus the range of distances (first column) is from 2 miles to 28 miles, while the range of times (third column) is from 15.95 (minutes) to 204.6 minutes. We will discuss graphical summaries in the next section.
[Figure: scatterplot matrix for the hills data, with panels for distance, climb and time.]
Suppose we wish to calculate logarithms, and then calculate correlations. We can do all this in one step, thus:
> cor(log(hills))
         distance climb  time
distance     1.00 0.700 0.890
climb        0.70 1.000 0.724
time         0.89 0.724 1.000
Unfortunately R was not clever enough to relabel distance as log(distance), climb as log(climb), and time as log(time). Notice that the correlations between time and distance, and between time and climb, have reduced. Why has this happened?
Straight Line Regression: Here is a straight line regression calculation. One specifies an lm (= linear model) expression, which R evaluates. The data are stored in the data frame elasticband that accompanies these notes. The variable names are the names of columns in that data frame. The command asks for the regression of distance travelled by the elastic band (distance) on the amount by which it is stretched (stretch).

> plot(distance ~ stretch, data=elasticband, pch=16)
> elastic.lm <- lm(distance ~ stretch, data=elasticband)
> lm(distance ~ stretch, data=elasticband)
Call:
lm(formula = distance ~ stretch, data = elasticband)

Coefficients:
(Intercept)      stretch
    -63.571        4.554
Try it!
We could also have used a loop. In general it is preferable to avoid loops whenever, as here, there is a good alternative. Loops may involve severe computational overheads.
Note however that R has no header files, most declarations are implicit, there are no pointers, and vectors of text strings can be defined and manipulated directly. The implementation of R relies heavily on list processing ideas from the LISP language. Lists are a key part of R syntax.
2.3 R Objects
All R entities, including functions and data structures, exist as objects. They can all be operated on as data. Type in ls() to see the names of all objects in your workspace. An alternative to ls() is objects(). In both cases there is provision to specify a particular pattern, e.g. starting with the letter "p" (see footnote 8). Typing the name of an object causes the printing of its contents. Try typing q, mean, etc.

Important: On quitting, R offers the option of saving the workspace image. This allows the retention, for use in the next session in the same workspace, of any objects that were created in the current session. Careful housekeeping may be needed to distinguish between objects that are to be kept and objects that will not be used again. Before typing q() to quit, use rm() to remove objects that are no longer required. Saving the workspace image will then save everything that remains. The workspace image will be automatically loaded upon starting another session in that directory.
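A minimal sketch of this housekeeping; the object removed here (austpop, created in chapter 1) is just an example:

ls()                 # list all objects in the workspace
ls(pattern="p")      # list only objects whose names contain a "p"
rm(austpop)          # remove an object that is no longer required
q()                  # quit, with the option to save the workspace image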
*2.4 Looping
In R there is often a better alternative to writing an explicit loop. Where possible, use one of the built-in functions to avoid explicit looping. A simple example of a for loop is
for (i in 1:10) print(i)
Here is another example of a for loop, to do in a complicated way what we did very simply in section 2.1.5:
> # Celsius to Fahrenheit
> for (celsius in 25:30)
+     print(c(celsius, 9/5*celsius + 32))
[1] 25 77
[1] 26.0 78.8
[1] 27.0 80.6
[1] 28.0 82.4
[1] 29.0 84.2
[1] 30 86
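The loop that the next paragraph steps through was lost in extraction; a minimal reconstruction, using the vector c(31,51,91) that the text names:

answer <- 0
for (j in c(31,51,91)){
  answer <- j + answer    # add each j in turn to the running total
}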
The calculation iteratively builds up the object answer, using the successive values of j listed in the vector c(31,51,91). Initially, j=31, and answer is assigned the value 31 + 0 = 31. Then j=51, and answer is assigned the value 51 + 31 = 82. Finally, j=91, and answer is assigned the value 91 + 82 = 173. Then the procedure ends, and the contents of answer can be examined by typing in answer and pressing the Enter key.
Footnotes:
8. Type in help(ls) and help(grep) to get details. The pattern matching conventions are those used for grep(), which is modelled on the Unix grep command.
9. Asterisks (*) identify sections that are more technical and might be omitted at a first reading.
10. Other looping constructs are:

repeat <expression>        ## break must appear somewhere inside the loop
while (x>0) <expression>

Here <expression> is an R statement, or a sequence of statements that are enclosed within braces.
Skilled R users have limited recourse to loops. There are often, as in the example above, better alternatives.
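For instance, the Celsius conversion above needs no loop at all; a minimal vectorized sketch:

celsius <- 25:30
fahrenheit <- 9/5*celsius + 32
cbind(celsius, fahrenheit)    # the whole conversion table in one step, no explicit loop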
2.5 R Functions
We give two simple examples of R functions.
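The definition of the first function was lost in extraction. A reconstruction consistent with the output shown below (the factor 8/5 converts miles to kilometres, approximately):

miles.to.km <- function(miles) miles*8/5    # 1 mile is close to 8/5 km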
The return value is the value of the final (and in this instance only) expression that appears in the function body (see footnote 11). Use the function thus:
> miles.to.km(175)    # Approximate distance to Sydney, in miles
[1] 280
The function will do the conversion for several distances all at once. To convert a vector of the three distances 100, 200 and 300 miles to distances in kilometers, specify:
> miles.to.km(c(100,200,300))
[1] 160 320 480
Here is a function that makes it possible to plot the figures for any pair of candidates.
plot.florida <- function(xvar="BUSH", yvar="BUCHANAN"){
  x <- florida[, xvar]
  y <- florida[, yvar]
  plot(x, y, xlab=xvar, ylab=yvar)
  mtext(side=3, line=1.75,
        "Votes in Florida, by county, in \nthe 2000 US Presidential election")
}
Note that the function body is enclosed in braces ({ }). As well as plot.florida(), this allows, e.g.
plot.florida(yvar="NADER")                 # yvar="NADER" over-rides the default
plot.florida(xvar="GORE", yvar="NADER")
Fig. 6 shows the graph produced by plot.florida(), i.e. parameter settings are left at their defaults.
11. Alternatively a return value may be given using an explicit return() statement. This is however an uncommon construction.
[Figure 6: votes for BUCHANAN (vertical axis, 0 to 2500) against votes for BUSH (horizontal axis, 0 to 250000), by county.]
Figure 6: Election night count of votes received, by county, in the US 2000 Presidential election.
2.6 Vectors
Examples of vectors are

c(2,3,5,2,7,1)
3:10                                         # The numbers 3, 4, ..., 10
c(T,F,F,F,T,T,F)
c("Canberra","Sydney","Newcastle","Darwin")

Vectors may have mode logical, numeric or character (see footnote 12). The first two vectors above are numeric, the third is logical (i.e. a vector with elements of mode logical), and the fourth is a string vector (i.e. a vector with elements of mode character). The missing value symbol, which is NA, can be included as an element of a vector.
12. Below, we will meet the notion of class, which is important for some of the more sophisticated language features of R. The logical, numeric and character vectors just given have class NULL, i.e. they have no class. There are special types of numeric vector which do have a class attribute. Factors (see section 2.6.3) are a most important example.
2. Specify a vector of logical values. The elements that are extracted are those for which the logical value is T. Thus suppose we want to extract values of x that are greater than 10.
> x > 10            # This generates a vector of logical (T or F) values
[1] F T F T T
> x[x > 10]
[1] 11 15 12
Arithmetic relations that may be used in the extraction of subsets of vectors are <, <=, >, >=, ==, and !=. The first four compare magnitudes, == tests for equality, and != tests for inequality.
2.6.4 Factors
A factor is a special type of vector, stored internally as a numeric vector with values 1, 2, 3, ..., k. The value k is the number of levels. An attributes table gives the level for each integer value. Factors provide a compact way to store character strings. They are crucial in the representation of categorical effects in model and graphics formulae. The class attribute of a factor has, not surprisingly, the value "factor". Consider a survey that has data on 691 females and 692 males. If the first 691 are females and the next 692 males, we can create a vector of strings that holds the values thus:
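The statement that creates the vector did not survive extraction; a reconstruction consistent with the explanation given below:

gender <- c(rep("female", 691), rep("male", 692))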
Note: a third, more subtle, method of extracting subsets is available when vectors have named elements. One can then use a vector of names to extract the elements, thus:

> c(Andreas=178, John=185, Jeff=183)[c("John","Jeff")]
John Jeff
 185  183
(The usage is that rep("female", 691) creates 691 copies of the character string "female", and similarly for the creation of 692 copies of "male".) We can change the vector to a factor, by entering:
gender <- factor(gender)
Internally the factor gender is stored as 691 1s, followed by 692 2s. It has stored with it a table that looks like this:

1   female
2   male
Once stored as a factor, the space required for storage is reduced. Whenever the context seems to demand a character string, the 1 is translated into "female" and the 2 into "male". The values "female" and "male" are the levels of the factor. By default, the levels are in alphanumeric order, so that "female" precedes "male". Hence:

> levels(gender)    # Assumes gender is a factor, created as above
[1] "female" "male"

The order of the levels in a factor determines the order in which the levels appear in graphs that use this information, and in tables. To cause "male" to come before "female", use
gender <- relevel(gender, ref="male")
An alternative is
gender <- factor(gender, levels=c("male", "female"))
This last syntax is available both when the factor is first created and, later, when one wishes to change the order of levels in an existing factor. Incorrect spelling of the level names will generate an error message. Try
gender <- factor(c(rep("female",691), rep("male",692)))
table(gender)
gender <- factor(gender, levels=c("male", "female"))
table(gender)
gender <- factor(gender, levels=c("Male", "female"))  # Erroneous - "male" rows now hold missing values
table(gender)
rm(gender)    # Remove gender
The data frame has row labels (access with row.names(Cars93.summary)) Compact, Large, . . . The column names (access with names(Cars93.summary)) are Min.passengers (i.e. the minimum number of passengers for cars in this category), Max.passengers, No.of.cars, and abbrev. The first three columns have mode numeric, and the fourth has mode character. Columns can be vectors of any mode. The column abbrev could equally well be stored as a factor.

Any of the following will pick out the fourth column of the data frame Cars93.summary, then storing it in the vector type (see footnote 15):

type <- Cars93.summary$abbrev
type <- Cars93.summary[,4]
type <- Cars93.summary[,"abbrev"]
type <- Cars93.summary[[4]]     # Take the object that is stored in the fourth list element (see footnote 16).
Type data() to get a list of built-in data sets in the libraries that have been loaded (see footnote 17).

Footnotes:
15. Also legal is Cars93.summary[2]. This gives a data frame with the single column Type.
16. In general forms of list, elements can be of arbitrary type. They may be any mixture of scalars, vectors, functions, etc.
17. The list includes all libraries that are in the current environment.
The functions mean(), median(), range(), and a number of other functions, take the argument na.rm=T; i.e. remove NAs, then proceed with the calculation. By default, sort() omits any NAs. The function order() places NAs last. Hence:
> x <- c(1, 20, 2, NA, 22)
> order(x)
[1] 1 3 2 5 4
> x[order(x)]
[1]  1  2 20 22 NA
> sort(x)
[1]  1  2 20 22
The functions mean and range, and several of the other functions noted above, have parameters na.rm. For example
> range(rainforest$branch, na.rm=T)    # Omit NAs, then determine the range
[1]   4 120
One can specify na.rm=T as a third argument to the function sapply. This argument is then automatically passed to the function that is specified in the second argument position. For example:
Source: Ash, J. and Southern, W. 1982: Forest biomass at Butlers Creek, Edith & Joy London Foundation, New South Wales, Unpublished manuscript. See also Ash, J. and Helman, C. 1990: Floristics and vegetation biomass of a forest catchment, Kioloa, south coastal N.S.W. Cunninghamia, 2(2): 167-182.
> sapply(rainforest[,-7], range, na.rm=T)
     dbh wood bark root rootsk branch
[1,]   4    3    8    2    0.3      4
[2,]  56 1530  105  135   24.0    120
Chapter 8 has further details on the use of sapply(). There is an example that shows how to use it to count the number of missing values in each column of data.
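The idea, in a minimal sketch (the Chapter 8 example may differ in detail):

# Count the missing values in each column of rainforest
sapply(rainforest, function(x) sum(is.na(x)))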
> table(Barley$Year,Barley$Site)
WARNING: NAs are by default ignored. The action needed to get NAs tabulated under a separate NA category depends, annoyingly, on whether or not the vector is a factor. If the vector is not a factor, specify exclude=NULL. If the vector is a factor then it is necessary to generate a new factor that includes NA as a level. Specify

x <- factor(x, exclude=NULL)
> x <- c(1, 5, NA, 8)
> x <- factor(x)
> x
[1] 1    5    NA   8
Levels: 1 5 8
> x <- factor(x, exclude=NULL)
> x
[1] 1    5    NA   8
Levels: 1 5 8 NA
Thus for Acacia mabellae there are 6 NAs for the variable branch (i.e. number of branches over 2cm in diameter), out of a total of 16 data values.
> search()
[1] ".GlobalEnv"   "Autoloads"    "package:base"

At this point, just after startup, the search list consists of the workspace (".GlobalEnv"), a slightly mysterious database with the name Autoloads, and the base package or library. Addition of further libraries (also called packages) extends this list. For example:

> library(ts)    # Time series library, included with the distribution
> search()
[1] ".GlobalEnv"   "package:ts"   "Autoloads"    "package:base"
2.11 Exercises
1. For each of the following code sequences, predict the result. Then do the computation:

a) answer <- 0
   for (j in 3:5){ answer <- j+answer }

b) answer <- 10
   for (j in 3:5){ answer <- j+answer }

c) answer <- 10
   for (j in 3:5){ answer <- j*answer }
2. Look up the help for the function prod(), and use prod() to do the calculation in 1(c) above. Alternatively, how would you expect prod() to work? Try it!

3. Add up all the numbers from 1 to 100 in two different ways: using for and using sum. Now apply the function to the sequence 1:100. What is its action?

4. Multiply all the numbers from 1 to 50 in two different ways: using for and using prod.

5. The volume of a sphere of radius r is given by 4πr³/3. For spheres having radii 3, 4, 5, ..., 20 find the corresponding volumes and print the results out in a table. Use the technique of section 2.1.5 to construct a data frame with columns radius and volume.

6. Use sapply() to apply the function is.factor to each column of the supplied data frame tinting. For each of the columns that are identified as factors, determine the levels. Which columns are ordered factors? [Use is.ordered().]
3. Plotting
The functions plot(), points(), lines(), text(), mtext(), axis(), identify() etc. form a suite that plots points, lines and text. To see some of the possibilities that R offers, enter
demo(graphics)
Comment on the appearance that these graphs present. Is it obvious that these points lie on a sine curve? How can one make it obvious? (Place the cursor over the lower border of the graph sheet, until it becomes a double-sided arrow. Drag the border in towards the top border, making the graph sheet short and wide.) Here are two further examples.
attach(elasticband)    # R now knows where to find distance & stretch
plot(distance ~ stretch)
plot(ACT ~ Year, data=austpop, type="l")
plot(ACT ~ Year, data=austpop, type="b")
The points() function adds points to a plot. The lines() function adds lines to a plot (see footnote 19). The text() function adds text at specified locations. The mtext() function places text in one of the margins. The axis() function gives fine control over axis ticks and labels. Here is a further possibility:
attach(austpop)
plot(spline(Year, ACT), type="l")    # Fit smooth curve through points
detach(austpop)                      # In S-PLUS, specify detach("austpop")
19. Actually these functions differ only in the default setting for the parameter type. The default setting for points() is type = "p", and for lines() is type = "l". Explicitly setting type = "p" causes either function to plot points, type = "l" gives lines.
The setting cex=1.25 increases the text and plot symbol size 25% above the default. The addition of mex=1.25 makes room in the margin to accommodate the increased text size. On the first use of par() to make changes to the current device, it is often useful to store existing settings, so that they can be restored later. For this, specify
oldpar <- par(cex=1.25, mex=1.25)
This stores the existing settings in oldpar, then changes parameters (here cex and mex) as requested. To restore the original parameter settings at some later time, enter par(oldpar). Here is an example:
attach(elasticband)
oldpar <- par(cex=1.5, mex=1.5)
plot(distance ~ stretch)
par(oldpar)            # Restores the earlier settings
detach(elasticband)
Type in help(par) to get details of all the parameter settings that are available with par().
Observe that the row names store labels for each row (see footnote 20).
> attach(primates)    # Needed if primates is not already attached.
> plot(Bodywt, Brainwt, xlim=c(5, 250))
> # Specify xlim so that there is room for the labels
> text(x=Bodywt, y=Brainwt, labels=row.names(primates), adj=0)    # adj=0 implies left adjusted text
> detach(primates)
Figure 7: Plot of the primate data, with labels on points
Fig. 7 would be adequate for identifying points, but is not a presentation quality graph. We now show how to improve it.
20. Row names can be created in several different ways. They can be assigned directly, e.g.

row.names(primates) <- c("Potar monkey","Gorilla","Human","Rhesus monkey","Chimp")

When using read.table() to input data, the parameter row.names is available to specify, by number or name, a column that holds the row names.
In Fig. 8 we use the xlab (x-axis) and ylab (y-axis) parameters to specify meaningful axis titles. We move the labelling to one side of the points by including appropriate horizontal and vertical offsets. We use chw <- par()$cxy[1] to get a 1-character space horizontal offset, and chh <- par()$cxy[2] to get a 1-character height vertical offset. I've used pch=16 to make the plot character a heavy black dot. This helps make the points stand out against the labelling.
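The code for Fig. 8 did not survive extraction. A sketch along the lines just described; the axis label text and the exact offsets are assumptions:

attach(primates)
plot(Bodywt, Brainwt, pch=16, xlab="Body weight (kg)",
     ylab="Brain weight (g)", xlim=c(5, 250))
chw <- par()$cxy[1]            # 1-character horizontal offset
chh <- par()$cxy[2]            # 1-character vertical offset
text(x=Bodywt+chw, y=Brainwt+chh/4,    # nudge labels right and slightly up (choice is illustrative)
     labels=row.names(primates), adj=0)
detach(primates)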
The following, added to the plot that results from the above three statements, demonstrates other choices of pch.
[Figure: a demonstration of the plot symbols pch = 0, 1, ..., 20.]
A variety of color palettes are available. Here is a function that displays some of the possibilities:
view.colours <- function(){
  plot(1, 1, xlim=c(0,14), ylim=c(0,3), type="n", axes=F, xlab="", ylab="")
  text(1:6, rep(2.5,6), paste(1:6), col=palette()[1:6], cex=2.5)
  text(10, 2.5, "Default palette", adj=0)
  rainchars <- c("R","O","Y","G","B","I","V")
  text(1:7, rep(1.5,7), rainchars, col=rainbow(7), cex=2.5)
  text(10, 1.5, "rainbow(7)", adj=0)
  cmtxt <- substring("cm.colors", 1:9, 1:9)    # Split cm.colors into its 9 characters
  text(1:9, rep(0.5,9), cmtxt, col=cm.colors(9), cex=3)
  text(10, 0.5, "cm.colors(9)", adj=0)
}
3.4 Identification and Location on the Figure Region

Two functions are useful here: identify() labels points, and locator() prints out the co-ordinates of points. One positions the cursor at the location for which coordinates are required, and clicks the left mouse button. A click with the right mouse button signifies that the identification or location task is complete, unless the setting of the parameter n is reached first. For identify() the default setting of n is the number of data points, while for locator() the default setting is n = 500.
3.4.1 identify()
This function requires specification of a vector x, a vector y, and a vector of text strings that are available for use as labels. The data set florida has the votes for the various Presidential candidates, county by county, in the state of Florida. We plot the vote for Buchanan against the vote for Bush, then invoke identify() so that we can label selected points on the plot.
attach(florida)
plot(BUSH, BUCHANAN, xlab="Bush", ylab="Buchanan")
identify(BUSH, BUCHANAN, County)
detach(florida)
Click to the left or right, and slightly above or below a point, depending on the preferred positioning of the label. When labelling is terminated (click with the right mouse button), the row numbers of the observations that have been labelled are printed on the screen, in order.
3.4.2 locator()
Left click at the locations whose coordinates are required
attach(florida)    # if not already attached
plot(BUSH, BUCHANAN, xlab="Bush", ylab="Buchanan")
locator()
detach(florida)
The function can be used to mark new points (specify type="p") or lines (specify type="l") or both points and lines (specify type="b").
3.5.1 Histograms
The shapes of histograms depend on the placement of the breaks, as Fig. 10 illustrates:
[Figure 10: two histograms of the same data. Panel A: breaks at 72.5, 77.5, ...; Panel B: breaks at 75, 80, .... In each panel the x-axis is Total length and the y-axis is Frequency.]
Figure 10: The two graphs show the same data, but with a different choice of breakpoints.
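A sketch of code that would give the two panels of Fig. 10; the panel titles are taken from the figure, and the side-by-side layout is an assumption:

attach(possum)
here <- sex == "f"                   # females only, as in the text
par(mfrow = c(1,2))                  # two panels side by side
hist(totlngth[here], breaks = 72.5 + (0:5)*5,
     xlab = "Total length", main = "A: Breaks at 72.5, 77.5, ...")
hist(totlngth[here], breaks = 75 + (0:5)*5,
     xlab = "Total length", main = "B: Breaks at 75, 80, ...")
par(mfrow = c(1,1))
detach(possum)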
[Figure 11: the same two histograms, rescaled to Relative Frequency, each with an overlaid density plot. The x-axis is Total length.]
Figure 11: On each of the histograms from Fig. 10 a density plot has been overlaid.
Density plots do not depend on a choice of breakpoints. The choice of width and type of window, controlling the nature and amount of smoothing, does affect the appearance of the plot. The main effect is to make it more or less smooth. The following will give a density plot:
attach(possum)
plot(density(totlngth[here]), type="l")
detach(possum)
Note that in Fig. 11 the y-axis for each histogram is labelled so that the area of a rectangle is the relative frequency for that rectangle. To get the plot on the left, specify:
attach(possum)
here <- sex == "f"
dens <- density(totlngth[here])
xlim <- range(dens$x)
ylim <- range(dens$y)
hist(totlngth[here], breaks = 72.5 + (0:5) * 5, probability = T,
     xlim = xlim, ylim = ylim, xlab="Total length", main="")
lines(dens)
detach(possum)
3.5.3 Boxplots
We now make a boxplot of the above data:
attach(possum)
boxplot(totlngth[here])
detach(possum)
[Figure 12: boxplot, annotated to show the upper quartile (90.5), the median, the lower quartile (85.25), the inter-quartile range (90.5 - 85.25 = 5.25; compare 0.75 x inter-quartile range = 3.9 with the standard deviation = 4.2), and an outlier.]
Figure 12: Boxplot of female possum lengths, with additional labelling information.
Fig. 13 shows the plots. There is one unusually small value. Otherwise the points for the female possum lengths are as close to a straight line as in many of the plots for random normal data.
[Figure 13 panels: "Possums" (top left; y-axis: Length, 75-95) and "Simulated lengths" panels for the random normal samples.]
Figure 13: Normal probability plots. If data are from a normal distribution then points should fall, approximately, along a line. The plot in the top left hand corner shows the 43 lengths of female possums. The other plots are for independent normal random samples of size 43.
The idea is an important one. In order to judge whether data are normally distributed, examine a number of randomly generated samples of the same size from a normal distribution. It is a way to train the eye. By default, rnorm() generates random samples from a distribution with mean 0 and standard deviation 1.
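A sketch of code that gives plots of this general kind (the 2 by 4 layout is our own choice; the data are the 43 female possum lengths used above):

attach(possum)
here <- sex == "f"
par(mfrow=c(2,4))
qqnorm(totlngth[here], ylab="Length", main="Possums")
for (i in 1:7)
  qqnorm(rnorm(43), ylab="Simulated lengths", main="Simulated")
par(mfrow=c(1,1))
detach(possum)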
Data relate to the paper: Telford, R.D. and Cunningham, R.B. 1991: Sex, sport and body-size dependency of hematology in highly trained athletes. Medicine and Science in Sports and Exercise 23: 788-794.
3.6.3 Rugplots
By default rug(x) adds, along the x-axis of the current plot, vertical bars showing the distribution of values of x. It can however be particularly useful for showing the actual values along the side of a boxplot. Fig. 14 shows a boxplot of the distribution of height of female athletes, with a rugplot added on the y-axis.
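Fig. 14 might be obtained with code along the following lines; the data frame name ais and the column names ht and sex are assumptions, not confirmed by these notes:

attach(ais)
boxplot(ht[sex == "f"], ylab="Height")   # boxplot of the female heights
rug(ht[sex == "f"], side=2)              # side=2 adds the bars along the y-axis
detach(ais)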
Figure 14: Distribution of heights of female athletes. The bars on the left plot show actual data values.
3.6.5 Dotplots
These can be a good alternative to barcharts. They have a much higher information to ink ratio! Try
data(islands)
dotplot(islands)
Unfortunately there are many names, and there is substantial overlap. The following is better, but shrinks the sizes of the points so that they almost disappear:
dotplot(islands, cex=0.2)
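A further refinement worth trying (a sketch; dotchart() is the base-graphics name for this style of plot in later versions of R) is to sort the values first, so that the labels fall into an order that is easy to scan:

dotchart(sort(islands), cex=0.5)   # sort() keeps the island names attached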
[Plot panel: the expression Area = (pi)r^2 displayed on a graph of area (0-8000) against radius (0-50).]
Notice that in expression(Area == pi*r^2) there is a double equals sign (==), although what appears on the plot is a single equals sign. The reason is that Area == pi*r^2 is a valid mathematical expression, while Area = pi*r^2 is not. See help(plotmath) for detailed information on the plotting of mathematical expressions. There is a further example in chapter 12. The plot above is the final plot from
demo(graphics)
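A minimal sketch of the same device, for a plot drawn from scratch:

r <- seq(0, 50, length=100)
plot(r, pi*r^2, type="l", xlab="Radius",
     ylab=expression(Area == pi*r^2))   # the axis label displays with a single = sign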
Use graphs from which information can be read directly and easily, in preference to those that rely on visual impression and perspective. Thus in scientific papers contour plots are much preferable to surface plots or two-dimensional bar graphs.
Draw graphs so that reduction and reproduction will not interfere with visual clarity.
Explain clearly how error bars should be interpreted: SE limits, 95% confidence interval, SD limits, or whatever. Explain what source of 'error(s)' is represented. It is pointless to present information on a source of error that is of little or no interest, for example analytical error when the relevant source of 'error' for comparison of treatments is between fruit.
Use colour or different plotting symbols to distinguish different groups. Take care to use colours that contrast.
The list of references at the end of this chapter has further comments on graphical and other presentation issues.
3.9 Exercises
1. Plot the graph of brain weight (brain) versus body weight (body) for the data set Animals from the MASS library. Label the axes appropriately. [To access this data frame, specify library(MASS); data(Animals)]
2. Repeat plot 1, but this time plotting log(brain weight) versus log(body weight). Use the row labels to label the points with the three largest body weight values. Label the axes in untransformed units.
3. Repeat plots 1 and 2, but this time place the plots side by side on the one page.
4. The data set huron that accompanies these notes has mean July average water surface elevations, in feet, IGLD (1955), for Harbor Beach, Michigan, on Lake Huron, Station 5014, for 1860-1986. (Alternatively you can work with the vector LakeHuron from the ts library, which has mean heights for 1875-1972 only.)
a) Plot mean.height against year.
b) Use the identify function to determine which years correspond to the lowest and highest mean levels. That is, type
identify(huron$year, huron$mean.height, labels=huron$year)
and use the left mouse button to click on the lowest point and highest point on the plot. To quit, press both mouse buttons simultaneously.
c) As in the case of many time series, the mean levels are correlated from year to year. To see how each year's mean level is related to the previous year's mean level, use
lag.plot(huron$mean.height)
This plots the mean level at year i against the mean level at year i-1.
5. Check the distributions of head lengths (hdlngth) in the possum data frame. Compare the following forms of display: a) a histogram (hist(possum$hdlngth)); b) a stem and leaf plot (stem(possum$hdlngth)); c) a normal probability plot (qqnorm(possum$hdlngth)); and d) a density plot (plot(density(possum$hdlngth))). What are the advantages and disadvantages of these different forms of display?
Source: Great Lakes Water Levels, 1860-1986. U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey.
Data relate to the paper: Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological variation among populations of the mountain brush tail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-458.
6. Try x <- rnorm(10). Print out the numbers that you get. Look up the help for rnorm. Now generate a sample of size 10 from a normal distribution with mean 170 and standard deviation 4.
7. Use mfrow() to set up the layout for a 3 by 4 array of plots. In the top 4 panels, show normal probability plots (section 3.4.2) for four separate 'random' samples of size 10, all from a normal distribution. In the middle 4 panels, display plots for samples of size 100. In the bottom 4 panels, display plots for samples of size 1000. Comment on how the appearance of the plots changes as the sample size changes.
8. The function runif() can be used to generate a sample from a uniform distribution, by default on the interval 0 to 1. Try x <- runif(10), and print out the numbers you get. Then repeat exercise 7 above, but taking samples from a uniform distribution rather than from a normal distribution. What shape do the points follow?
*9. If you find exercise 8 interesting, you might like to try it for some further distributions. For example x <- rchisq(10,1) will generate 10 random values from a chi-squared distribution with one degree of freedom. The statement x <- rt(10,1) will generate 10 random values from a t distribution with one degree of freedom. Make normal probability plots for samples of various sizes from each of these distributions.
10. For the first two columns of the data frame hills, examine the distribution using: (a) histograms; (b) density plots; (c) normal probability plots. Repeat (a), (b) and (c), now working with the logarithms of the data values.
3.10 References
Bell Labs' Trellis Page: http://cm.bell-labs.com/cm/ms/departments/sia/project/trellis/
Becker, R. A., Cleveland, W. S. and Shyu, M. The Visual Design and Control of Trellis Display. Journal of Computational and Graphical Statistics.
Cleveland, W. S. 1993. Visualizing Data. Hobart Press, Summit, New Jersey.
Cleveland, W. S. 1985. The Elements of Graphing Data. Wadsworth, Monterey, California.
Maindonald, J. H. 1992. Statistical design, analysis and presentation issues. New Zealand Journal of Agricultural Research 35: 121-141.
Tufte, E. R. 1983. The Visual Display of Quantitative Information. Graphics Press, Cheshire, Connecticut.
Tufte, E. R. 1990. Envisioning Information. Graphics Press, Cheshire, Connecticut.
Tufte, E. R. 1997. Visual Explanations. Graphics Press, Cheshire, Connecticut.
Wainer, H. 1997. Visual Revelations. Springer-Verlag, New York.
[Figure 16 panels: elderly f, elderly m, young f, young m; y-axis: csoa (20-120); x-axis: it (50-200); low and high contrast targets shown with different symbols.]
Figure 16: csoa versus it, for each combination of females/males and elderly/young. The two targets (low contrast, + = high contrast) are shown with different symbols.
In a simplified version of Fig. 16 above, we might plot csoa against it for each combination of sex and agegp. For this simplified version, it would be enough to type:
xyplot(csoa ~ it | sex * agegp, data=tinting)
Data relate to the paper: Burns, N. R., Nettlebeck, T., White, M. and Willson, J. 1999. Effects of car window tinting on visual performance: a comparison of elderly and young drivers. Ergonomics 42: 428-443.
Here is the statement used to get Fig. 16. The two different symbols distinguish between low contrast and high contrast targets.
xyplot(csoa~it|sex*agegp, data=tinting, panel=panel.superpose, groups=target)
If colour is available, different colours will be used for the different groups. A striking feature is that the very high values, for both csoa and it, occur only for elderly males. It is apparent that the long response times for some of the elderly males occur, as we might have expected, with the low contrast target. We now put smooth curves through the data, separately for the two target types:
xyplot(csoa~it|sex*agegp, data=tinting, panel=panel.superpose, groups=target, type="s")
The relationship between csoa and it seems much the same for both levels of contrast. Finally, we do a plot (Fig. 17) that uses different symbols (in black and white) for different levels of tinting. The longest times are for the high level of tinting.
xyplot(csoa~it|sex*agegp, data=tinting, panel=panel.superpose, groups=tint)
[Figure 17 panels: elderly f, elderly m, young f, young m; y-axis: csoa (20-120); x-axis: it (50-200); tinting levels shown with different symbols (+, >).]
Figure 17: csoa versus it, for each combination of females/males and elderly/young. The different levels of tinting (no, +=low, >=high) are shown with different symbols.
The second command uses different colours for males and females. The third command adds a smooth. The fourth command uses different symbols for males and females, and a smooth. Where conditioning is on a continuous variable, coplot() will break it down into ranges that, if default settings are used, overlap. The parameter number controls the number of ranges, and overlap controls the fraction of overlap. For example
coplot(time ~ distance | climb, data=hills, overlap=0.5, number=3)
By default overlap is 0.5, i.e. each successive pair of categories has around half its values in common. The panel function plots what appears in any panel. Users can supply their own panel function. For an example of such a function, examine panel.smooth().
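As a sketch of what a user-supplied panel function might look like (the least-squares line is our own embellishment, not taken from panel.smooth()):

library(MASS)
data(hills)
coplot(time ~ distance | climb, data=hills, number=3, overlap=0.5,
       panel=function(x, y, ...) {
         points(x, y, ...)
         abline(lm(y ~ x), lty=2)   # add a least-squares line within each panel
       })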
4.3 Exercises
1. The following data gives milk volume (g/day) for smoking and nonsmoking mothers:
Smoking mothers: 621, 793, 593, 545, 753, 655, 895, 767, 714, 598, 693
Nonsmoking mothers: 947, 945, 1086, 1202, 973, 981, 930, 745, 903, 899, 961
Present the data (i) in side by side boxplots; (ii) using a dotchart form of display.
2. Repeat the plot as in exercise 1, but this time including a scatterplot smooth on each panel.
3. For the possum data set, generate the following plots: a) histograms of hdlngth, using hist(); b) normal probability plots of hdlngth, using qqnorm(); c) density plots of hdlngth, using plot(density()). Investigate the effect of varying the density bandwidth (bw).
4. The following exercises relate to the data frame possum that accompanies these notes:
(a) Using the coplot function, explore the relation between hdlngth and totlngth, taking into account sex and Pop.
(b) Construct a contour plot of chest versus belly and totlngth.
(c) Construct box and whisker plots for hdlngth, using site as a factor.
(d) Construct normal probability plots for hdlngth, for each separate level of sex and Pop. Is there evidence that the distribution of hdlngth varies with the level of these other factors?
5. The data frame airquality that is in the base library has columns Ozone, Solar.R, Wind, Temp, Month and Day. Plot Ozone against Solar.R for each of three temperature ranges, and each of three wind ranges.
Data are from the paper "Smoking During Pregnancy and Lactation and Its Effects on Breast Milk Volume" (Amer. J. of Clinical Nutrition).
5. Linear (Multiple Regression) Models and Analysis of Variance 5.1 The Model Formula in Straight Line Regression
We begin with the straight line regression example that appeared earlier, in section 2.1.4. First we plot the data:
plot(distance ~ stretch, data=elasticband)
Here distance ~ stretch is a model formula. Other model formulae will appear in the course of this chapter. Fig. 18 shows the plot:
[Figure 18 panel: distance (120-180) versus stretch (42-54).]
Figure 18: Plot of distance versus stretch for the elastic band data, with fitted least squares line
The output from the regression is an lm object, which we have called elastic.lm:
elastic.lm <- lm(distance ~ stretch, data=elasticband)
Now examine a summary of the regression results. Notice that the output documents the model formula that was used:
> options(digits=4)
> summary(elastic.lm)

Call:
lm(formula = distance ~ stretch, data = elasticband)

Residuals:
      1       2       3       4       5       6       7
  2.107  -0.321  18.000   1.893 -27.786  13.321  -7.214

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -63.57      74.33   -0.86    0.431
stretch         4.55       1.54    2.95    0.032
Residual standard error: 16.3 on 5 degrees of freedom
Multiple R-Squared: 0.635, Adjusted R-squared: 0.562
F-statistic: 8.71 on 1 and 5 degrees of freedom, p-value: 0.0319
Various functions are available for extracting information that you might want from the list. This is better than manipulating the list directly. Examples are:
> coef(elastic.lm)
(Intercept)     stretch
    -63.571       4.554
> resid(elastic.lm)
      1       2       3       4       5       6       7
 2.1071 -0.3214 18.0000  1.8929 -27.7857 13.3214 -7.2143
The function most often used to inspect regression output is summary(). It extracts the information that users are most likely to want. For example, in section 5.1, we had summary(elastic.lm). There is a plot method for lm objects that gives the diagnostic information shown in Fig. 19.
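For example, to see all four diagnostic plots of Fig. 19 on the one page (the 2 by 2 layout is our assumption about how the figure was set up):

par(mfrow=c(2,2))   # 2 rows x 2 columns of plots
plot(elastic.lm)
par(mfrow=c(1,1))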
[Figure 19 panels: Residuals vs Fitted (residuals against fitted values), Normal Q-Q plot (standardized residuals against theoretical quantiles), Scale-Location plot (against fitted values), and Cook's distance (against observation number); the most extreme points are labelled.]
By default the first, second and fourth plots use the row names to identify the three most extreme residuals. [If explicit row names are not given for the data frame, then the row numbers are used.]
Essentially, the model matrix relates to the part of the model that appears to the right of the equals sign. The straight line model is
y = a + b x + residual
which we write as
y = 1 a + x b + residual
The parameters to be estimated are a and b. Fitted values are given by multiplying each column of the model matrix by its corresponding parameter (the first column by a, the second column by b) and adding; another name for them is predicted values. The aim is to reproduce, as closely as possible, the values in the y-column. The residuals are the differences between the values in the y-column and the fitted values. Least squares regression, which is the form of regression that we describe in this course, chooses a and b so that the sum of squares of the residuals is as small as possible. The function model.matrix() prints out the model matrix. Thus:
> model.matrix(distance ~ stretch, data=elasticband)
  (Intercept) stretch
1           1      46
2           1      54
3           1      48
4           1      50
5           1      44
6           1      42
7           1      52
attr(,"assign")
[1] 0 1
The following are the fitted values and residuals that we get with the estimates of a (= -63.6) and b (= 4.55) that result from least squares regression:

X                  y (fitted)                    y (observed)    y - fitted (residual)
1  Stretch (mm)    -63.6 + 4.55 x Stretch        Distance (mm)   Observed - Fitted
1      46          -63.6 + 4.55 x 46 = 145.7         148         148 - 145.7 =   2.3
1      54          -63.6 + 4.55 x 54 = 182.1         182         182 - 182.1 =  -0.1
1      48          -63.6 + 4.55 x 48 = 154.8         173         173 - 154.8 =  18.2
1      50          -63.6 + 4.55 x 50 = 163.9         166         166 - 163.9 =   2.1
1      44          -63.6 + 4.55 x 44 = 136.6         109         109 - 136.6 = -27.6
1      42          -63.6 + 4.55 x 42 = 127.5         141         141 - 127.5 =  13.5
1      52          -63.6 + 4.55 x 52 = 173.0         166         166 - 173.0 =  -7.0
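These calculations can be checked in R by direct matrix multiplication; a minimal sketch:

X <- model.matrix(distance ~ stretch, data=elasticband)
b <- coef(elastic.lm)
X %*% b                          # reproduces fitted(elastic.lm)
elasticband$distance - X %*% b   # reproduces resid(elastic.lm)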
We might alternatively fit the simpler (no intercept) model. For this we have
y = x b + e
where e is a random variable with mean 0. The X matrix then consists of a single column, holding the values of x.
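In the model formula, the intercept is removed by adding -1; a sketch (the object name elastic.lm0 is ours):

elastic.lm0 <- lm(distance ~ stretch - 1, data=elasticband)
summary(elastic.lm0)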
> formds <- formula(paste(nam[1], "~", nam[2]))
> lm(formds, data=elasticband)

Call:
lm(formula = formds, data = elasticband)

Coefficients:
(Intercept)     distance
    26.3780       0.1395
Note that graphics formulae can be manipulated in exactly the same way as model formulae.
[Figure 20 panels: scatterplot matrix of loss (50-300), hard (50-90) and tens (120-240).]
Figure 20: Scatterplot matrix for the Rubber data frame from the MASS library.
There is a negative correlation between loss and hardness. We proceed to regress loss on hard and tens.
The original source is O.L. Davies (1947) Statistical Methods in Research and Production. Oliver and Boyd, Table 6.1 p. 119.
> Rubber.lm <- lm(loss ~ hard + tens, data=Rubber)
> options(digits=3)
> summary(Rubber.lm)

Call:
lm(formula = loss ~ hard + tens, data = Rubber)

Residuals:
   Min     1Q Median     3Q    Max
-79.38 -14.61   3.82  19.75  65.98

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  885.161     61.752   14.33  3.8e-14
hard          -6.571      0.583  -11.27  1.0e-11
tens          -1.374      0.194   -7.07  1.3e-07

Residual standard error: 36.5 on 27 degrees of freedom
Multiple R-Squared: 0.84, Adjusted R-squared: 0.828
F-statistic: 71 on 2 and 27 degrees of freedom, p-value: 1.77e-011
> logbooks.lm2 <- lm(weight ~ thick + height, data=logbooks)
> summary(logbooks.lm2)$coef
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -1.263      3.552  -0.356   0.7303
thick          0.313      0.472   0.662   0.5243
height         2.114      0.678   3.117   0.0124
> logbooks.lm3 <- lm(weight ~ thick + height + width, data=logbooks)
> summary(logbooks.lm3)$coef
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   -0.719      3.216  -0.224    0.829
thick          0.465      0.434   1.070    0.316
height         0.154      1.273   0.121    0.907
width          1.877      1.070   1.755    0.117
So is weight proportional to thick * height * width? The correlations between thick, height and width are so strong that if one tries to use more than one of them as explanatory variables, the coefficients are ill-determined. They contain very similar information, as is evident from the scatterplot matrix. The regressions on height and width give plausible results, while the coefficient of the regression on thick is entirely an artefact of the way that the books were selected.
The design of the data collection really is important for the interpretation of coefficients from a regression equation. Even though regression equations from observational data may work quite well for predictive purposes, the individual coefficients may be misleading. This is more than an academic issue, as the analyses in Lalonde (1986) demonstrate. They had data from experimental treatment and control groups, and also from two comparable non-experimental controls. The regression estimate of the treatment effect, when comparison was with one of the non-experimental controls, was statistically significant but with the wrong sign! The regression should be fitted only to that part of the data where values of the covariates overlap substantially. Dehejia and Wahba demonstrate the use of scores (propensities) to identify subsets that are defensibly comparable. Propensity scores are then the only covariate in the equation that estimates the treatment effect.
Dehejia and Wahba (1999) revisit Lalonde's data, demonstrating the use of a methodology that was able to reproduce results similar to the experimental results.
Data are from McLeod, C. C. 1982. Effect of rates of seeding on barley grown for grain. New Zealand Journal of Agriculture 10: 133-136. Summary details are in Maindonald, J. H. (1992).
[Figure 21 panel: grains per head (18.0-21.0) versus seeding rate (60-140), with fitted quadratic curve.]
Figure 21: Number of grains per head versus seeding rate, for the barley seeding rate data, with fitted quadratic curve.
We will need an X-matrix with a column of ones, a column of values of rate, and a column of values of rate^2. For this, both rate and I(rate^2) must be included in the model formula.
> seedrates.lm2 <- lm(grain ~ rate + I(rate^2), data=seedrates)
> summary(seedrates.lm2)

Call:
lm(formula = grain ~ rate + I(rate^2), data = seedrates)

Residuals:
       1        2        3        4        5
 0.04571 -0.12286  0.09429 -0.00286 -0.01429

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 24.060000   0.455694   52.80  0.00036
rate        -0.066686   0.009911   -6.73  0.02138
I(rate^2)    0.000171   0.000049    3.50  0.07294

Residual standard error: 0.115 on 2 degrees of freedom
Multiple R-Squared: 0.996, Adjusted R-squared: 0.992
F-statistic: 256 on 2 and 2 degrees of freedom, p-value: 0.0039

> hat <- predict(seedrates.lm2)
> lines(spline(seedrates$rate, hat))
> # Placing the spline fit through the fitted points allows a smooth curve.
> # For this to work the values of seedrates$rate must be ordered.
> model.matrix(grain ~ rate + I(rate^2), data=seedrates)
  (Intercept) rate I(rate^2)
...
4           1  125     15625
5           1  150     22500
attr(,"assign")
[1] 0 1 2
This example demonstrates a way to extend linear models to handle specific types of non-linear relationships. We can use any transformation we wish to form columns of the model matrix. We could, if we wished, add an x^3 column. Once the model matrix has been formed, we are limited to taking linear combinations of columns.
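For example, a cubic term can be added in just the same way (a sketch; note that with only five points this leaves a single degree of freedom for the residual):

seedrates.lm3 <- lm(grain ~ rate + I(rate^2) + I(rate^3), data=seedrates)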
> anova(seedrates.lm2, seedrates.lm1)
Analysis of Variance Table

Model 1: grain ~ rate + I(rate^2)
Model 2: grain ~ rate
  Res.Df Res.Sum Sq Df    Sum Sq F value Pr(>F)
1      2   0.026286
2      3   0.187000 -1 -0.160714  12.228 0.0729

The F-value is large, but on this evidence there are too few degrees of freedom to make a totally convincing case for preferring a quadratic to a line. However the paper from which these data come gives an independent estimate of the error mean square (0.17^2 on 35 d.f.), based on 8 replicate results that were averaged to give each value for number of grains per head. If we compare the change in the sum of squares (0.1607, on 1 df) with a mean square of 0.17^2 = 0.0289 (35 df), the F-value is now 5.4 on 1 and 35 degrees of freedom, and we have p = 0.024. The increase in the number of degrees of freedom more than compensates for the reduction in the F-statistic.
> # However we have an independent estimate of the error mean
> # square. The estimate is 0.17^2, on 35 df.
> 1 - pf(0.16/0.17^2, 1, 35)
[1] 0.0244
Finally note that R2 was 0.972 for the straight line model. This may seem good, but given the accuracy of these data it was not good enough! The statistic is an inadequate guide to whether a model is adequate. Even within any one context, R2 will in general increase as the range of the values of the dependent variable increases. (R2 is larger when there is more variation to be explained.) A predictive model is adequate when the standard errors of predicted values are acceptably small, not when R2 achieves some magic threshold.
The extrapolation has deliberately been taken beyond the range of the data, in order to show how the confidence bounds spread out. Confidence bounds for a fitted line spread out more slowly, but are even less believable!
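Bounds of both kinds are available from predict.lm(); a sketch, using the quadratic model from above (the choice of new rate values is ours):

new.df <- data.frame(rate=seq(50, 200, by=25))    # deliberately extends beyond the data
predict(seedrates.lm2, newdata=new.df, interval="confidence")   # bounds for the fitted curve
predict(seedrates.lm2, newdata=new.df, interval="prediction")   # bounds for new observations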
To formulate this as a regression model, we take kWh as the dependent variable, and the factor insulation as the explanatory variable.
Data are from Hand, D. J.; Daly, F.; Lunn, A. D.; Ostrowski, E., eds. (1994). A Handbook of Small Data Sets. Chapman and Hall.
> insulation <- factor(c(rep("without", 8), rep("with", 7)))
> # 8 without, then 7 with
> kWh <- c(10225, 10689, 14683, 6584, 8541, 12086, 12467,
+          12669, 9708, 6700, 4307, 10315, 8017, 8162, 8022)
> insulation.lm <- lm(kWh ~ insulation)
> summary(insulation.lm, corr=F)

Call:
lm(formula = kWh ~ insulation)

Residuals:
   Min     1Q Median     3Q    Max
 -4409   -979    132   1575   3690
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)     7890        874    9.03  5.8e-07
insulation      3103       1196    2.59    0.022
Residual standard error: 2310 on 13 degrees of freedom
Multiple R-Squared: 0.341, Adjusted R-squared: 0.29
F-statistic: 6.73 on 1 and 13 degrees of freedom, p-value: 0.0223
The p-value is 0.022, which may be taken to indicate (p < 0.05) that we can distinguish between the two types of houses. But what does the intercept of 7890 mean, and what does the value for insulation of 3103 mean? To interpret this, we need to know that the factor levels are, by default, taken in alphabetical order, and that the initial level is taken as the baseline. So with comes before without, and with is the baseline. Hence: Average for Insulated Houses = 7890. To get the estimate for uninsulated houses take 7890 + 3103 = 10993. The standard error of the difference is 1196.
Type in
model.matrix(kWh ~ insulation)
Another possibility is to use what are called the sum contrasts. With the sum contrasts the baseline is the mean over all factor levels. The effect for the first level is omitted; the user has to calculate it as minus the sum of the remaining effects. Here is the output from use of the sum contrasts:
> options(contrasts = c("contr.sum", "contr.poly"), digits = 2)
> # Try the sum contrasts
> insulation <- factor(insulation, levels=c("without", "with"))
> # Make "without" the first level
> insulation.lm <- lm(kWh ~ insulation)
> summary(insulation.lm, corr=F)

Call:
lm(formula = kWh ~ insulation)

Residuals:
   Min     1Q Median     3Q    Max
 -4409   -979    132   1575   3690
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)     9442        598   15.78  7.4e-10
insulation      1551        598    2.59    0.022
Residual standard error: 2310 on 13 degrees of freedom
Multiple R-Squared: 0.341, Adjusted R-squared: 0.29
F-statistic: 6.73 on 1 and 13 degrees of freedom, p-value: 0.0223
Here is the interpretation: average of (mean for "without", mean for "with") = 9442. To get the estimate for uninsulated houses (the first level), take 9442 + 1551 = 10993. The effects sum to zero, so the effect for the second level ("with") is -1551. Thus to get the estimate for insulated houses (the second level), take 9442 - 1551 = 7891 (7890 but for rounding). The sum contrasts are sometimes called "analysis of variance" contrasts. You can set the choice of contrasts for each factor separately, with a statement such as:
insulation <- C(insulation, contr=treatment)
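To inspect the coding that a choice of contrasts implies, examine the contrast matrix directly; for example:

contrasts(insulation)   # the contrast matrix that will be used for this factor
options(contrasts=c("contr.treatment", "contr.poly"))   # restore the default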
Also available are the Helmert contrasts. These are not at all intuitive and rarely helpful, even though S-PLUS uses them as the default. Novices should avoid them.
The second string element, i.e. "contr.poly", is the default setting for factors with ordered levels. [One uses the function ordered() to create ordered factors.]
The interpretation of the Helmert contrasts is simple enough when there are just two levels. With more than two levels, the Helmert contrasts give parameter estimates which in general do not make a lot of sense, basically because the baseline keeps changing, to the average of all previous factor levels. You do better to use either the treatment contrasts or the sum contrasts. With the sum contrasts the baseline is the overall mean. S-PLUS makes Helmert contrasts the default, perhaps for reasons of computational efficiency. This was an unfortunate choice.
> model.matrix(cet.lm2)
   (Intercept) factor(species) logweight
1            1               0     3.951
2            1               0     3.989
. . . .
8            1               1     3.555
. . . .
16           1               1     3.738
attr(,"assign")
[1] 0 1 2
attr(,"contrasts")
[1] "contr.treatment"
Enter summary(cet.lm2) to get an output summary, and plot(cet.lm2) to plot diagnostic information for this model. For model C, the statement is:
> cet.lm3 <- lm(logheart ~ factor(species) + logweight +
+               factor(species):logweight, data=dolphins)
> summary.lm(PlantGrowth.aov)

Call:
aov(formula = weight ~ group)

Residuals:
    Min      1Q  Median      3Q     Max
-1.0710 -0.4180 -0.0060  0.2627  1.3690

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   5.0320     0.1971  25.527   <2e-16
grouptrt1    -0.3710     0.2788  -1.331   0.1944
grouptrt2     0.4940     0.2788   1.772   0.0877

Residual standard error: 0.6234 on 27 degrees of freedom
Multiple R-Squared: 0.2641, Adjusted R-squared: 0.2096
F-statistic: 4.846 on 2 and 27 degrees of freedom, p-value: 0.01591

> help(cabbages)
> data(cabbages)    # From the MASS library
> names(cabbages)
[1] "Cult"   "Date"   "HeadWt" "VitC"
> coplot(HeadWt ~ VitC | Cult + Date, data=cabbages)
Examination of the plot suggests that cultivars differ greatly in the variability in head weight. Variation in the vitamin C levels seems relatively consistent between cultivars.
> VitC.aov <- aov(VitC ~ Cult + Date, data=cabbages)
> summary(VitC.aov)
            Df  Sum Sq Mean Sq F value    Pr(>F)
Cult         1 2496.15 2496.15 53.0411 1.179e-09
Date         2  909.30  454.65  9.6609 0.0002486
Residuals   56 2635.40   47.06
Error: block:shade
          Df  Sum Sq Mean Sq
shade      3 1394.51  464.84

Error: Within
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals 36 438.58   12.18

> coef(kiwishade.aov)
(Intercept) :
(Intercept)
    96.5327

block:shade :
  blocknorth    blockwest shadeAug2Dec shadeDec2Feb shadeFeb2May
    0.993125    -3.430000     3.030833   -10.281667    -7.428333

Within :
numeric(0)
5.9 Exercises
1. Here are two sets of data that were obtained using the same apparatus, including the same rubber band, as the data frame elasticband. For the data set elastic1, the values are:
stretch (mm): 46, 54, 48, 50, 44, 42, 52
distance (cm): 183, 217, 189, 208, 178, 150, 249.
For the data set elastic2, the values are:
stretch (mm): 25, 45, 35, 40, 55, 50, 30, 50, 60
distance (cm): 71, 196, 127, 187, 249, 217, 114, 228, 291.
Data relate to the paper: Snelgar, W.P., Manson, P.J., Martin, P.J. 1992. Influence of time of shading on flowering and yield of kiwifruit vines. Journal of Horticultural Science 67: 481-487. Further details, including a diagram showing the layout of plots and vines and details of shelter, are in Maindonald (1992). The two papers have different shorthands (e.g. Sept-Nov versus Aug-Dec) for describing the time periods for which the shading was applied.
Using a different symbol and/or a different colour, plot the data from the two data frames elastic1 and elastic2 on the same graph. Do the two sets of results appear consistent?
2. For each of the data sets elastic1 and elastic2, determine the regression of stretch on distance. In each case determine (i) fitted values and standard errors of fitted values and (ii) the R2 statistic. Compare the two sets of results. What is the key difference between the two sets of data?
3. Use the method of section 5.7 to determine, formally, whether one needs different regression lines for the two data frames elastic1 and elastic2.
4. Using the data frame cars (in the base library), plot distance (i.e. stopping distance) versus speed. Fit a line to this relationship, and plot the line. Then try fitting and plotting a quadratic curve. Does the quadratic curve give a useful improvement to the fit? If you have studied the dynamics of particles, can you find a theory that would tell you how stopping distance might change with speed?
5. Using the data frame hills (in library MASS), regress time on distance and climb. What can you learn from the diagnostic plots that you get when you plot the lm object? Try also regressing log(time) on log(distance) and log(climb). Which of these regression equations would you prefer?
6. Using the data frame beams (in the data sets accompanying these notes), carry out a regression of strength on SpecificGravity and Moisture. Carefully examine the regression diagnostic plot, obtained by supplying the name of the lm object as the first parameter to plot(). What does this indicate?
7. Type
hosp <- rep(c("RNC", "Hunter", "Mater"), 2)
hosp
fhosp <- factor(hosp)
levels(fhosp)
Now repeat the steps involved in forming the factor fhosp, this time keeping the factor levels in the order RNC, Hunter, Mater. Use contrasts(fhosp) to form and print out the matrix of contrasts. Do this using helmert contrasts, treatment contrasts, and sum contrasts. Using an outcome variable
y <- c(2,5,8,10,3,9)
fit the model lm(y~fhosp), repeating the fit for each of the three different choices of contrasts. Comment on what you get. For which choice(s) of contrasts do the parameter estimates change when you re-order the factor levels?
8. In section 5.7 check the form of the model matrix (i) for fitting two parallel lines and (ii) for fitting two arbitrary lines, when one uses the sum contrasts. Repeat the exercise for the Helmert contrasts.
9. In the data set cement (MASS library), examine the dependence of y (amount of heat produced) on x1, x2, x3 and x4 (which are proportions of four constituents). Begin by examining the scatterplot matrix. As the explanatory variables are proportions, do they require transformation, perhaps by taking log(x/(100-x))? What alternative strategies might one use to find an effective prediction equation?
10. In the data set pressure (base library), examine the dependence of pressure on temperature. [Transformation of temperature makes sense only if one first converts to degrees Kelvin. Consider also transformation of pressure. A logarithmic transformation is too extreme; the direction of the curvature changes. What family of transformations might one try?]
11. Modify the code in section 5.5.3 to fit: (a) a line, with accompanying 95% confidence bounds, and (b) a cubic curve, with accompanying 95% pointwise confidence bounds. Which of the three possibilities (line, quadratic, cubic) is most plausible? Can any of them be trusted?
*12. Repeat the analysis of the kiwishade data (section 5.8.2), but replacing Error(block:shade) with block:shade. Comment on the output that you get from summary(). To what extent is it potentially misleading? Also do the analysis where the block:shade term is omitted altogether. Comment on that analysis.
5.10 References
Atkinson, A. C. 1986. Comment: Aspects of diagnostic regression analysis. Statistical Science 1: 397-402.
Atkinson, A. C. 1988. Transformations Unmasked. Technometrics 30: 311-318.
Cook, R. D. and Weisberg, S. 1999. Applied Regression including Computing and Graphics. Wiley.
Dehejia, R. H. and Wahba, S. 1999. Causal effects in non-experimental studies: re-evaluating the evaluation of training programs. Journal of the American Statistical Association 94: 1053-1062.
Harrell, F. E., Lee, K. L., and Mark, D. B. 1996. Tutorial in Biostatistics. Multivariable Prognostic Models: Issues in Developing Models, Evaluating Assumptions and Adequacy, and Measuring and Reducing Errors. Statistics in Medicine 15: 361-387.
Lalonde, R. 1986. Evaluating the economic evaluations of training programs. American Economic Review 76: 604-620.
Maindonald, J. H. 1992. Statistical design, analysis and presentation issues. New Zealand Journal of Agricultural Research 35: 121-141.
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
Weisberg, S., 2nd edn, 1985. Applied Linear Regression. Wiley.
Williams, G. P. 1983. Improper use of regression equations in the earth sciences. Geology 11: 195-197.
6. Multivariate and Tree-Based Methods 6.1 Multivariate EDA, and Principal Components Analysis
Principal components analysis is often a useful exploratory tool for multivariate data. The idea is to replace the initial set of variables by a small number of principal components that together may explain most of the variation in the data. The first principal component is the component (linear combination of the initial variables) that explains the greatest part of the variation. The second principal component is the component that, among linear combinations of the variables that are uncorrelated with the first principal component, explains the greatest part of the remaining variation, and so on.
The measure of variation used is the sum of the variances of the variables, perhaps after scaling the variables so that they each have variance one. An analysis that works with the unscaled variables, and hence with the variance-covariance matrix, gives a greater weight to variables that have a large variance. The common alternative, scaling variables so that they each have variance equal to one, is equivalent to working with the correlation matrix.
With biological measurement data, it is usually desirable to begin by taking logarithms. The standard deviations then measure the logarithm of relative change. Because all variables measure much the same quantity (i.e. relative variability), and because the standard deviations are typically fairly comparable, scaling to give equal variances is unnecessary.
The data set possum that accompanies these notes has nine morphometric measurements on each of 102 mountain brushtail possums, trapped at seven sites from southern Victoria to central Queensland. It is good practice to begin by examining relevant scatterplot matrices. This may draw attention to gross errors in the data. A plot in which the sites and/or the sexes are identified will draw attention to any very strong structure in the data. For example one site may be quite different from the others, for some or all of the variables. Taking logarithms of these data does not make much difference to the appearance that they present when plotted, because the ratio of largest to smallest value is relatively small, never more than 1.6, for all variables. Here are some of the scatterplot matrix possibilities:
pairs(possum[,6:14], col=palette()[as.integer(possum$sex)])
pairs(possum[,6:14], col=palette()[as.integer(possum$site)])
here <- !is.na(possum$footlgth)   # We need to exclude missing values
print(sum(!here))                 # Check how many values are missing
We now look at particular views of the data that we get from a principal components analysis:
library(mva)   # Load the multivariate analysis library
# Principal components
possum.prc <- princomp(log(possum[here, 6:14]))
# Second principal component versus first, by population and sex, identified by site
coplot(possum.prc$scores[,2] ~ possum.prc$scores[,1] | possum$Pop[here] + possum$sex[here],
       col=palette()[as.integer(possum$site[here])])
Fig. 22, which uses different plot symbols for different sites, used the code:
coplot(possum.prc$scores[,2] ~ possum.prc$scores[,1] | possum$Pop[here] + possum$sex[here],
       pch=as.integer(possum$site[here]))
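To see how much of the variation the successive components explain, the standard summary and plot methods can be applied to the princomp object (a quick check, not part of the original code):

summary(possum.prc)   # standard deviations, and proportions of variance explained
plot(possum.prc)      # scree plot of the component variances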
Data relate to the paper: Lindenmayer, D. B., Viggers, K. L., Cunningham, R. B., and Donnelly, C. F. 1995. Morphological variation among populations of the mountain brushtail possum, Trichosurus caninus Ogilby (Phalangeridae: Marsupialia). Australian Journal of Zoology 43: 449-458.
[Figure 22 panels, conditioned on possum$Pop[here] (Vic, other) and possum$sex[here]: possum.prc$scores[, 2] versus possum.prc$scores[, 1].]
Fig. 22: Second principal component versus first principal component, by population and by sex, for the possum data.
information for previous patients, which future patients will remain free of disease symptoms for twelve months or more. Here are calculations for the possum data frame, using the lda() function from the Venables & Ripley MASS library. Our interest is in whether it is possible, on the basis of morphometric measurements, to distinguish animals from different sites. A cruder distinction is between populations, i.e. sites in Victoria (an Australian state) as opposed to sites in other states (New South Wales or Queensland). Because the method pays little regard to the distribution of variable values, I have not thought it necessary to take logarithms. I discuss this further below.
> library(MASS)   # Only if not already attached
> here <- !is.na(possum$footlgth)
> possum.lda <- lda(site ~ hdlngth + skullw + totlngth + taillgth +
+                   footlgth + earconch + eye + chest + belly,
+                   data=possum, subset=here)
> options(digits=4)
> possum.lda$svd
[1] 15.7578  3.9372 ...
> plot(possum.lda, dimen=3)
> # Scatterplot matrix for scores on 1st 3 canonical variates, as in Fig. 23
[Figure 23 panels: scatterplot matrix of the first three canonical variates (axes LD1, LD2, LD3), with points identified by site number 1-7.]
The singular values are the ratio of between to within group sums of squares, for the canonical variates in turn. Clearly canonical variates after the third will have little if any discriminatory power. One can use predict.lda() to get (among other information) scores on the first few canonical variates. Note that there may be interpretative advantages in taking logarithms of biological measurement data. The standard against which patterns of measurement are commonly compared is that of allometric growth, which implies a linear relationship between the logarithms of the measurements. Differences between different sites
are then indicative of different patterns of allometric growth. The reader may wish to repeat the above analysis, but working with the logarithms of measurements. Where there are two groups, logistic regression is often effective. A source of code for handling more general supervised classification problems is Hastie and Tibshirani's mda (mixture discriminant analysis) library. There is a brief overview of this library in the Venables and Ripley 'Complements', referred to in section 13.2.
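One way to follow this up (a sketch; the object names are ours) is to use predict.lda() and cross-tabulate actual against predicted sites:

possum.pred <- predict(possum.lda)
table(possum$site[here], possum.pred$class)   # actual site versus predicted site
plot(possum.pred$x[,1], possum.pred$x[,2],
     pch=as.integer(possum$site[here]))       # scores on the first two canonical variates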
To use these models effectively, you also need to know about approaches to pruning trees, and about cross-validation. Methods for reduction of tree complexity that are based on significance tests at each individual node (i.e. branching point) typically choose trees that over-predict. The Atkinson and Therneau rpart (recursive partitioning) library is closer to CART than is the S-PLUS tree library. It integrates cross-validation with the algorithm for forming trees.
6.5 Exercises
1. Using the data set painters (MASS library), apply principal components analysis to the scores for Composition, Drawing, Colour, and Expression. Examine the loadings on the first three principal components. Plot a scatterplot matrix of the first three principal components, using different colours or symbols to identify the different schools.
2. The data set Cars93 is in the MASS library. Using the columns of continuous or ordinal data, determine scores on the first and second principal components. Investigate the comparison between (i) USA and non-USA cars, and (ii) the six different types (Type) of car. Now create a new data set in which binary factors become columns of 0/1 data, and include these in the principal components analysis.
3. Repeat the calculations of exercises 1 and 2, but this time using the function lda() from the MASS library to derive canonical discriminant scores, as in section 6.3.
4. The MASS library has the Aids2 data set, containing de-identified data on the survival status of patients diagnosed with AIDS before July 1 1991. Use tree-based classification (rpart()) to identify major influences on survival.
5. Investigate discrimination between plagiotropic and orthotropic species in the data set leafshape.
6.6 References
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books and Software, Pacific Grove CA.
Data relate to the paper: King. D.A. and Maindonald, J.H. 1999. Tree architecture in relation to leaf dimensions and tree stature in temperate and tropical rain forests. Journal of Ecology 87: 1012-1024.
Everitt, B. S. and Dunn, G. 1992. Applied Multivariate Data Analysis. Arnold, London.
Friedman, J., Hastie, T. and Tibshirani, R. 1998. Additive logistic regression: a statistical view of boosting. Available from the internet.
Ripley, B. D. 1996. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge UK.
Therneau, T. M. and Atkinson, E. J. 1997. An Introduction to Recursive Partitioning Using the RPART Routines. This is one of two documents included in: http://www.stats.ox.ac.uk/pub/SWin/rpartdoc.zip
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
The following demonstrates a third possibility, for vectors that have named elements:
> c(Andreas=178, John=185, Jeff=183)[c("John","Jeff")]
John Jeff
 185  183
If instead one wants four 2s, then four 3s, then four 5s, enter rep(c(2,3,5), c(4,4,4)).
> rep(c(2,3,5), c(4,4,4))   # An alternative is rep(c(2,3,5), each=4)
[1] 2 2 2 2 3 3 3 3 5 5 5 5
Note further that, in place of c(4,4,4) we could write rep(4,3). So a further possibility is that in place of rep(c(2,3,5), c(4,4,4)) we could enter rep(c(2,3,5), rep(4,3)). In addition to the above, note that the function rep() has an argument length.out, meaning keep on repeating the sequence until the length is length.out.
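For example:

> rep(c(2,3,5), length.out=8)
[1] 2 3 5 2 3 5 2 3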
Below, we will meet the notion of class, which is important for some of the more sophisticated language features of R. The logical, numeric and character vectors just given have class NULL, i.e. they have no class. There are special types of numeric vector which do have a class attribute. Factors are the most important example. Although often used as a compact way to store character strings, factors are, technically, numeric vectors. The class attribute of a factor has, not surprisingly, the value factor.
is.na(x)    # TRUE for when NA appears, and otherwise FALSE
x == NA     # All elements are set to NA
WARNING: This is chiefly for those who may move between R and S-PLUS. In important respects, R's behaviour with missing values is more intuitive than that of S-PLUS. Thus in R
y[x>2] <- x[x>2]
gives the result that the naïve user might expect, i.e. replace elements of y with corresponding elements of x wherever x>2. Wherever x>2 gives the result NA, no action is taken. In R, any NA in x>2 yields a value of NA for y[x>2] on the left of the assignment, and a value of NA for x[x>2] on the right. In S-PLUS, the result on the right is the same, i.e. an NA. However, on the left, elements that have a subscript NA drop out. The vector on the left, to which values will be assigned, then has fewer elements than the vector on the right. Thus the following has the effect in R that the naïve user might expect, but not in S-PLUS:
x <- c(1, 6, 2, NA, 10)
y <- c(1, 4, 2, 3, 0)
y[x > 2] <- x[x > 2]
y
The safe way, in both S-PLUS and R, is to use !is.na(x) to limit the selection, on one or both sides as necessary, to those elements of x that are not NAs. We will have more to say on missing values in the section on data frames that now follows.
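A minimal sketch of the safe idiom:

x <- c(1, 6, 2, NA, 10)
y <- c(1, 4, 2, 3, 0)
ok <- !is.na(x) & x > 2   # FALSE wherever x is NA
y[ok] <- x[ok]            # y is now c(1, 6, 2, 3, 10), in both R and S-PLUS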
Notice that the data frame has abbreviations for site names, while variety names are given in full. We will extract the data for 1932, at the D site.
> Duluth1932 <- Barley[Barley$Year == "1932" & Barley$Site == "D",
+                      c("Variety", "Yield")]
> Duluth1932
     Variety Yield
56 Manchuria  67.7
57  Svansota  66.7
58    Velvet  67.4
59     Trebi  91.8
60  Peatland  94.1
The first column holds the row labels, which in this case are the numbers of the rows that have been extracted. In place of c("Variety", "Yield") we could have written, more simply, c(2,4).
and similarly for any other library. In order to bring any of these data frames into the working directory, specifically request it. (Ensure though that the relevant library is attached.) Thus to bring in the data set airquality from the base library, type in
data(airquality)
The default Windows distribution includes the libraries BASE, EDA, STEPFUN (empirical distributions), and TS (time series). Other libraries must be explicitly installed. For remaining sections of these notes, it will be useful to have the MASS library installed. The current Windows version is bundled in the file VR61-6.zip, which you can download from the directory of contributed packages at any of the CRAN sites. The base library is automatically attached at the beginning of the session. To attach any other installed library, use the library() (or, equivalently package()) command.
Then a statement such as
primates <- read.table("a:/primates.txt")   # the file name here is for illustration only
will create the data frame primates, from a file on the a: drive. The text strings in the first column will become the first column in the data frame. Suppose that primates is a data frame with three columns: species name, body weight, and brain weight. You can give the columns names by typing in:
names(primates) <- c("Species", "Bodywt", "Brainwt")
Specify header=TRUE if there is an initial row of header information. If the number of column headers is one less than the number of columns of data, then the first column will be used for row labels, provided its entries are unique.
7.4.1 Idiosyncrasies
The function read.table() is straightforward for reading in rectangular arrays of data that are entirely numeric. When, as in the above example, one of the columns contains text strings, the column is by default stored as a factor with as many different levels as there are unique text strings. Problems may arise when small mistakes in the data cause R to interpret a column of supposedly numeric data as character strings, which are automatically turned into factors. For example there may be an O (oh) somewhere where there should be a 0 (zero), or an el (l) where there should be a one (1). If you use any missing value symbols other than the default (NA), you need to make this explicit; see section 7.4.2 below. Otherwise any appearance of such symbols as *, period (.) and blank (in a case where the separator is something other than a space) will cause the whole column to be treated as character data. Users who find this default behaviour of read.table() confusing may wish to use the parameter setting as.is = TRUE. If the column is later required for use as a factor in a model or graphics formula, it may be necessary to make it into a factor at that time. Some functions do this conversion automatically.
7.4.2 Missing values when using read.table()
The function read.table() expects missing values to be coded as NA, unless you set na.strings to recognise other characters as missing value indicators. If you have a text file that has been output from SAS, you will probably want to set na.strings=c("."). There may be multiple missing value indicators, e.g. na.strings=c("NA", ".", "*", ""). The "" will ensure that empty cells are entered as NAs.
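For example (the file name here is hypothetical):

sasdata <- read.table("a:/sasout.txt", header=TRUE, na.strings=c("."))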
Storage of columns of character strings as factors is efficient when a small number of distinct strings are each repeated a large number of times. Specifying as.is = T prevents columns of (intended or unintended) character strings from being converted into factors.
One way to get mixed text and numeric data across from Excel is to save the worksheet in a .csv text file, with comma as the separator. If for example the file name is myfile.csv and it is on drive a:, use
read.table("a:/myfile.csv", sep=",")
to read the data into R. This copes with any spaces that may appear in text strings. [But watch that none of the cell entries include commas.]
Printing the contents of the column with the name country gives the names, not the codes. As in most operations with factors, R does the translation invisibly. There are though annoying exceptions that can make the use of factors tricky. To be sure of getting the country names, specify
as.character(islandcities$country)
By default, R sorts the level names in alphabetical order. If we form a table that has the number of times that each country appears, this is the order that is used:
> table(islandcities$country)
     Australia           Cuba      Indonesia          Japan    Philippines
             3              1              4              6              2
        Taiwan United Kingdom
             1              2
This order of the level names is purely a convenience. We might prefer countries to appear in order of latitude, from North to South. We can change the order of the level names to reflect this desired order:
> lev <- levels(islandcities$country)
> lev[c(7,4,6,2,5,3,1)]
[1] "United Kingdom" "Japan"          "Taiwan"         "Cuba"
[5] "Philippines"    "Indonesia"      "Australia"
> country <- factor(islandcities$country, levels=lev[c(7,4,6,2,5,3,1)])
> table(country)
United Kingdom          Japan         Taiwan           Cuba    Philippines
             2              6              1              1              2
     Indonesia      Australia
             4              3
In ordered factors, i.e. factors with ordered levels, there are inequalities that relate factor levels. Factors have the potential to cause a few surprises, so be careful! Here are two points to note:
1. When a vector of character strings becomes a column of a data frame, R by default turns it into a factor. Enclose the vector of character strings in the wrapper function I() if it is to remain character.
2. There are some contexts in which factors become numeric vectors. To be sure of getting the vector of text strings, specify e.g. as.character(country). To extract the numeric levels 1, 2, 3, ..., specify as.numeric(country).
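A small example of the second point (the vector is our own):

> fac <- factor(c("b", "a", "b"))
> as.character(fac)
[1] "b" "a" "b"
> as.numeric(fac)   # the level codes: "a" is level 1, "b" is level 2
[1] 2 1 2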
read.table("a:/myfile.csv", sep=",") to read the data into R. This copes with any spaces which may appear in text strings. [But watch that none of the cell entries include commas.] Factors are vectors which have mode numeric and class factor. They have an attribute levels that holds the level names.
> ordf.stress<"medium" TRUE FALSE FALSE TRUE > ordf.stress>="medium" [1] FALSE TRUE FALSE
Later we will meet the notion of inheritance. Ordered factors inherit the attributes of factors, and have a further ordering attribute. When you ask for the class of an object, you get details both of the class of the object, and of any classes from which it inherits. Thus:
> class(ordf.stress) [1] "ordered" "factor"
7.7 Lists
Lists make it possible to collect an arbitrary set of R objects together under a single name. You might for example collect together vectors of several different modes and lengths, scalars, matrices or more general arrays, functions, etc. Lists can be, and often are, a rag-tag of different objects. We will use for illustration the list object that R creates as output from an lm calculation. For example, suppose that we create a linear model (lm) object elastic.lm (cf. sections 1.1.4 and 2.1.4) by specifying
elastic.lm <- lm(distance ~ stretch, data=elasticband)
It is readily verified that elastic.lm consists of a variety of different kinds of objects, stored as a list. You can get the names of these objects by typing in
> names(elastic.lm)
 [1] "coefficients"  "residuals"     "effects"       "rank"
 [5] "fitted.values" "assign"        "qr"            "df.residual"
 [9] "xlevels"       "call"          "terms"         "model"
We can alternatively ask for the sublist whose only element is the vector elastic.lm$coefficients. For this, specify elastic.lm["coefficients"] or elastic.lm[1]. There is a subtle difference in the result that is printed out. The information is preceded by $coefficients, meaning "list element with name coefficients".
> elastic.lm[1]
$coefficients
(Intercept)     stretch
 -63.571429    4.553571
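By contrast, double square brackets (or the $ operator) extract the element itself, rather than a one-element sublist:

elastic.lm[["coefficients"]]   # the coefficient vector, with no $coefficients header
elastic.lm$coefficients        # equivalent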
places columns of xx, in order, into the vector x. In the example above, we get back the elements 1, 2, . . . , 6. Names may be assigned to the rows and columns of a matrix. We give details below. Matrices have the attribute dimension. Thus
> dim(xx)
[1] 2 3
In fact a matrix is a vector (numeric or character) whose dimension attribute has length 2. Now set
> x34 <- matrix(1:12, ncol=4)
> x34
     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12
The following extracts the submatrix that omits row 2 and column 3:
x34[-2, -3]
The dimnames() function assigns and/or extracts matrix row and column names. The dimnames() function gives a list, in which the first list element is the vector of row names, and the second list element is the vector of column names. This generalises in the obvious way for use with arrays, which we now discuss.
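A minimal sketch (the names are our own choice):

dimnames(x34) <- list(paste("row", 1:3, sep=""), paste("col", 1:4, sep=""))
x34["row2", "col4"]   # elements can now be extracted by name; this returns 11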
7.8.1 Arrays
The generalisation from a matrix (2 dimensions) to allow more than 2 dimensions gives an array. A matrix is a 2-dimensional array. Consider a numeric vector of length 24. So that we can easily keep track of the elements, we will make them 1, 2, …, 24. Thus
x <- 1:24
Then
dim(x) <- c(4,6)
Now try
> dim(x) <- c(3,4,2)
> x
, , 1

     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

, , 2

     [,1] [,2] [,3] [,4]
[1,]   13   16   19   22
[2,]   14   17   20   23
[3,]   15   18   21   24
7.10 Exercises
1. Generate the numbers 101, 102, …, 112, and store the result in the vector x.
2. Generate four repeats of the sequence of numbers (4, 6, 3).
3. Generate the sequence consisting of eight 4s, then seven 6s, and finally nine 3s.
4. Create a vector consisting of one 1, then two 2s, three 3s, etc., and ending with nine 9s.
5. Determine, for each of the columns of the data frame airquality (base library), the median, mean, upper and lower quartiles, and range. [Specify data(airquality) to bring the data frame airquality into the working directory.]
6. For each of the following calculations, decide what you would expect, and then check to see if you were right!
a)
answer <- c(2, 7, 1, 5, 12, 3, 4)
for (j in 2:length(answer)){ answer[j] <- max(answer[j], answer[j-1]) }
b)
answer <- c(2, 7, 1, 5, 12, 3, 4)
for (j in 2:length(answer)){ answer[j] <- sum(answer[j], answer[j-1]) }
7. In the built-in data frame airquality (a) extract the row or rows for which Ozone has its maximum value; and (b) extract the vector of values of Wind for values of Ozone that are above the upper quartile.
8. Refer to the Eurasian snow data that is given in Exercise 1.6. Find the mean of the snow cover (a) for the odd-numbered years and (b) for the even-numbered years.
9. Determine which columns of the data frame Cars93 (MASS library) are factors. For each of these factor columns, print out the levels vector. Which of these are ordered factors?
10. Use summary() to get information about data in the data frames airquality, attitude (both in the base library), and cpus (MASS library). Write brief notes, for each of these data sets, on what you have been able to learn.
11. From the data frame mtcars (MASS library) extract a data frame mtcars6 that holds only the information for cars with 6 cylinders.
12. From the data frame Cars93 (MASS library) extract a data frame which holds only information for small and sporty cars.
13. Store the numbers obtained in exercise 2, in order, in the columns of a 3 x 4 matrix.
14. Store the numbers obtained in exercise 3, in order, in the columns of a 6 by 4 matrix. Extract the matrix consisting of rows 3 to 6 and columns 3 and 4, of this matrix.
Numeric vectors will be sorted in numerical order. Character vectors will be sorted in alphanumeric order. The function match() can be used in all sorts of clever ways to pick out subsets of data. For example:
> x <- rep(1:5, rep(3,5))
> x
 [1] 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
> two4 <- match(x, c(2,4), nomatch=0)
> two4
 [1] 0 0 0 1 1 1 0 0 0 2 2 2 0 0 0
> # We can use this to pick out the 2s and the 4s
> as.logical(two4)
 [1] FALSE FALSE FALSE  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE  TRUE  TRUE
[13] FALSE FALSE FALSE
> x[as.logical(two4)]
[1] 2 2 2 4 4 4
To find the position at which the first space appears, we might do the following:

nblank <- sapply(Cars93$Make, function(x){
              n <- nchar(x)
              a <- substring(x, 1:n, 1:n)
              m <- match(" ", a, nomatch=1)
              m})
8.4.1 apply()
The function apply() can be used on data frames as well as matrices. Here is an example:
> apply(airquality, 2, mean)      # All elements must be numeric!
  Ozone Solar.R    Wind    Temp   Month     Day
     NA      NA    9.96   77.88    6.99   15.80
The use of apply(airquality,1,mean) will give means for each row. These are not, for these data, useful information!
8.4.2 sapply()
The function sapply() can be useful for getting information about the columns of a data frame. Here we use it to count the number of missing values in each column of the built-in data frame airquality.
> sapply(airquality, function(x)sum(is.na(x)))
  Ozone Solar.R    Wind    Temp   Month     Day
     37       7       0       0       0       0
Here are several further examples that use the data frame moths that accompanies these notes:
> sapply(moths, is.factor)      # Determine which columns are factors
 meters       A       P habitat
  FALSE   FALSE   FALSE    TRUE
> # How many levels does each factor have?
> sapply(moths, function(x)if(!is.factor(x))return(0) else
+        length(levels(x)))
 meters       A       P habitat
      0       0       0       8
*8.5 tapply()
The arguments are a variable, a list of factors, and a function that operates on a vector to return a single value. For each combination of factor levels, the function is applied to corresponding values of the variable. The output is an array with as many dimensions as there are factors. Where there are no data values for a particular combination of factor levels, NA is returned. Often one wishes to get back, not an array, but a data frame with one row for each combination of factor levels. For example, we may have a data frame with two factors and a numeric variable, and want to create a new data
frame with all possible combinations of the factors, and the cell means as the response. Here is an example of how to do it. First, use tapply() to produce an array of cell means. The function dimnames(), applied to this array, returns a list whose first element holds the row names (i.e. the level names for the first factor), and whose second element holds the column names. [Further dimensions are possible.] We pass this list (row names, column names) to expand.grid(), which returns a data frame with all possible combinations of the factor levels. Finally, stretch the array of means out into a vector, and append this to the data frame. Here is an example using the data set cabbages from the MASS library.
> data(cabbages)
> names(cabbages)
[1] "Cult"   "Date"   "HeadWt" "VitC"
> sapply(cabbages, levels)
$Cult
[1] "c39" "c52"

$Date
[1] "d16" "d20" "d21"

$HeadWt
NULL

$VitC
NULL

> attach(cabbages)
> cabbages.tab <- tapply(HeadWt, list(Cult, Date), mean)
> cabbages.tab               # Two varieties by three planting dates
     d16  d20  d21
c39 3.18 2.80 2.74
c52 2.26 3.11 1.47
> cabbages.nam <- dimnames(cabbages.tab)
> cabbages.nam               # There are 2 dimensions, therefore 2 list elements
[[1]]
[1] "c39" "c52"

[[2]]
[1] "d16" "d20" "d21"

> cabbages.df <- expand.grid(Cult=factor(cabbages.nam[[1]]),
+                            Date=factor(cabbages.nam[[2]]))
> ## We now stretch the array of means out into a vector, and create
> ## a new column of cabbages.df, named Means, that holds the means.
> cabbages.df$Means <- as.vector(cabbages.tab)
> cabbages.df
  Cult Date Means
1  c39  d16  3.18
2  c52  d16  2.26
3  c39  d20  2.80
4  c52  d20  3.11
5  c39  d21  2.74
6  c52  d21  1.47
If there are no data for some combinations of factor levels, one might want to omit the corresponding rows.
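With cabbages.df as above (where in fact every combination has data), the omission might be coded as:

cabbages.df[!is.na(cabbages.df$Means), ]    # keep rows where a mean exists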
8.6 Splitting Vectors and Data Frames Down into Lists – split()
As an example,
split(cabbages$HeadWt, cabbages$Date)
returns a list with three elements, the first named d16 and containing the values of HeadWt for which Date has the level d16, and similarly for the remaining list elements, named d20 and d21. You need to use split() in this way in order to do side by side boxplots: the function boxplot() takes as its first argument a list in which the first list element is the vector of values for the first boxplot, the second list element is the vector of values for the second boxplot, and so on (see the sketch below). You can also use split() to split a data frame up into a list of data frames. For example
split(cabbages[,-1], cabbages$Date)    # split the remaining columns by levels of Date
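For the side by side boxplots mentioned above, a minimal sketch is:

boxplot(split(cabbages$HeadWt, cabbages$Date))    # one boxplot per planting date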
We now want to add a column of abbreviations to the data frame. Here our demands are simple, and we can proceed thus:
new.Cars93 <- merge(x=Cars93, y=Cars93.summary[, 4, drop=F],
                    by.x="Type", by.y="row.names")
This creates a data frame that has the abbreviations in the additional column with name abbrev. If there had been rows with missing values of Type, these would have been omitted from the new data frame. One can avoid this by making sure that Type has NA as one of its levels, in both data frames.
8.8 Dates
There are two libraries for working with dates: the date library and the chron library. We demonstrate the use of the date library. The function as.date() will convert a character string into a date object. By default, dates are stored using January 1 1960 as the origin. This is important when you use as.integer() to convert a date into an integer value.
> library(date)                  # the library must be installed
> as.date("1/1/60", order="dmy")
[1] 1Jan60
> as.date("1/12/60", "dmy")
[1] 1Dec60
> as.date("1/12/60", "dmy") - as.date("1/1/60", "dmy")
[1] 335
> as.date("31/12/60", "dmy")
[1] 31Dec60
> as.date("31/12/60","dmy")as.date("31/12/60","dmy")-as.date("1/1/60","dmy") [1] 365 > as.integer(as.date("1/1/60","dmy")) [1] 0 > as.integer(as.date("1/1/2000","dmy")) [1] 14610 > as.integer(as.date("29/2/2000","dmy")) [1] 14669 > as.integer(as.date("1/3/2000","dmy")) [1] 14670
A wide variety of different formats are possible. Among the legal formats are 8-31-2000 (or 31-8-2000 if you specify order="dmy"), 8/31/2000 (cf. 31/8/2000), and August 31 2000. Observe that one can subtract two dates and get the time between them in days. There are several functions (including date.ddmmmyy()) for printing out dates in various different formats.
8.9 Exercises
1. For the data frame Cars93, get the information provided by summary() for each level of Type. (Use split().)
2. Determine the number of cars, in the data frame Cars93, for each Origin and Type.
3. In the data frame claims: (a) determine the number of rows of information for each age category (age) and car type (type); (b) determine the total number of claims for each age category and car type; (c) determine, for each age category and car type, the number of rows for which data are missing; (d) determine, for each age category and car type, the total cost of claims.
4. Remove all the data frames and other objects that you have added to the working directory. [If you have a vector that holds the names of the objects that were in the directory when you started, the function additions() will give the names of objects that have been added.]
5. Determine the number of days, according to R, between the following dates:
   a) January 1 in the year 1700, and January 1 in the year 1800
   b) January 1 in the year 1998, and January 1 in the year 2000
The function returns the value (fahrenheit-32)*5/9. More generally, a function returns the value of the last statement of the function. Unless the result from the function is assigned to a name, the result is printed. Here is a function that prints out the mean and standard deviation of a set of numbers:
> mean.and.sd <- function(x=1:10){
+     av <- mean(x)
+     sd <- sqrt(var(x))
+     c(mean=av, SD=sd)
+ }
>
> # Now invoke the function
> mean.and.sd()
    mean       SD
5.500000 3.027650
> mean.and.sd(hills$climb)
    mean       SD
1815.314 1619.151
Earlier, we encountered the function sapply() that can be used to repeat a calculation on all columns of a data frame. [More generally, the first argument of sapply() may be a list.] To apply faclev() to all columns of the data frame moths we can specify
> sapply(moths, faclev)
We can alternatively give the definition of faclev directly as the second argument of sapply, thus
> sapply(moths, function(x)if(!is.factor(x))return(0) else length(levels(x)))
Finally, we may want to do similar calculations on a number of different data frames. So we create a function check.df() that encapsulates the calculations. Here is the definition of check.df().
check.df <- function(df=moths)
    sapply(df, function(x)if(!is.factor(x))return(0) else
           length(levels(x)))
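Usage is then, for example (the Cars93 call assumes that the MASS library has been loaded):

check.df()          # uses the default data frame, moths
check.df(Cars93)    # number of levels for each factor column of Cars93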
We can then use additions() to get the names of objects that have been added since the start of the session.
9.3.1 Graphs
Use graphs freely to shed light both on computations and on data. One of R's big pluses is its tight integration of computation and graphics.
The multiplication by 1 causes (guesses<.2), which is calculated as TRUE or FALSE, to be coerced to 1 (TRUE) or 0 (FALSE). The vector correct.answers thus contains the results of the student's guesses: a 1 is recorded each time the student correctly guesses the answer, and a 0 each time the student is wrong. One can thus write an R function that simulates a student guessing at a True-False test consisting of some arbitrary number of questions. We leave this as an exercise; a sketch of the key coercion step follows.
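As a hedged illustration of that coercion (the cutoff .2 follows the text above; the vector length is arbitrary):

guesses <- runif(10)                    # ten simulated guesses, uniform on [0,1]
correct.answers <- 1*(guesses < .2)     # TRUE/FALSE coerced to 1/0
correct.answers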
9.5 Exercises
1. Use the round function together with runif() to generate 100 random integers between 0 and 99. Now look up the help for sample(), and use it for the same purpose.
2. Write a function that will take as its arguments a list of response variables, a list of factors, a data frame, and a function such as mean or median. It will return a data frame in which each value for each combination of factor levels is summarised in a single statistic, for example the mean or the median.
3. The supplied data frame milk has columns four and one. Seventeen people rated the sweetness of each of two samples of a milk product on a continuous scale from 1 to 7, one sample with four units of additive and the other with one unit of additive. Here is a function that plots, for each participant, the four result against the one result, insisting on the same range for the x and y axes.
plot.one <- function()
{
    xyrange <- range(milk)      # calculates the range of all values in the data frame
    par(pin=c(6.75, 6.75))      # set plotting area = 6.75 in. by 6.75 in.
    plot(milk$four, milk$one, xlim=xyrange, ylim=xyrange, pch=16)
    abline(0,1)                 # line for which four = one
}
Rewrite this function so that, given the name of a data frame and of any two of its columns, it will plot the second named column against the first named column, showing also the line y=x.
4. Write a function that prints, with their row and column labels, only those elements of a correlation matrix for which abs(correlation) >= 0.9.
5. Write your own wrapper function for one-way analysis of variance that provides a side by side boxplot of the distribution of values by groups. If no response variable is specified, the function will generate random normal data (no difference between groups) and provide the analysis of variance and boxplot information for that.
6. Write a function that adds a text string containing documentation information as an attribute to a data frame.
7. Write a function that computes a moving average of order 2 of the values in a given vector. Apply the function to the data (in the data set huron that accompanies these notes) for the levels of Lake Huron. Repeat for a moving average of order 3.
8. Find a way of computing the moving averages in exercise 7 that does not involve the use of a for loop.
9. Create a function to compute the average, variance and standard deviation of 1000 randomly generated uniform random numbers on [0,1]. (Compare your results with the theoretical results: the expected value of a uniform random variable on [0,1] is 0.5, and the variance of such a random variable is 0.0833.)
10. Write a function that generates 100 independent observations on a uniformly distributed random variable on the interval [3.7, 5.8]. Find the mean, variance and standard deviation of such a uniform random variable. Now modify the function so that you can specify an arbitrary interval.
11. Look up the help for the sample() function. Use it to generate 50 random integers between 0 and 99, sampled without replacement. (This means that we do not allow any number to be sampled a second time.) Now, generate 50 random integers between 0 and 9, with replacement.
12. Write an R function that simulates a student guessing at a True-False test consisting of 40 questions. Find the mean and variance of the student's answers. Compare with the theoretical values of .5 and .25.
13. Write an R function that simulates a student guessing at a multiple choice test consisting of 40 questions, where there is a chance of 1 in 5 of getting the right answer to each question. Find the mean and variance of the student's answers. Compare with the theoretical values of .2 and .16.
14. Write an R function that simulates the number of working light bulbs out of 500, where each bulb has a probability .99 of working. Using simulation, estimate the expected value and variance of the random variable X, which is 1 if the light bulb works and 0 if the light bulb does not work. What are the theoretical values?
15. Write a function that does an arbitrary number n of repeated simulations of the number of accidents in a year, plotting the result in a suitable way. Assume that the number of accidents in a year follows a Poisson distribution. Run the function assuming an average rate of 2.8 accidents per year.
16. Write a function that simulates the repeated calculation of the coefficient of variation (= the ratio of the standard deviation to the mean), for independent random samples from a normal distribution.
17. Write a function that, for any sample, calculates the median of the absolute values of the deviations from the sample median.
*18. Generate random samples from normal, exponential, t (2 d.f.), and t (1 d.f.) distributions, thus:
   a) xn <- rnorm(100)
   b) xe <- rexp(100)
   c) xt2 <- rt(100, df=2)
   d) xt1 <- rt(100, df=1)
Apply the function from exercise 17 to each sample. Compare with the standard deviation in each case.
*19. The vector x consists of the frequencies 5, 3, 1, 4, 6. The first element is the number of occurrences of level 1, the second is the number of occurrences of level 2, and so on. Write a function that takes any such vector x as its input, and outputs the vector of factor levels, here 1 1 1 1 1 2 2 2 3 … [You'll need the information that is provided by cumsum(x). Form a vector in which 1s appear whenever the factor level is incremented, and which is otherwise zero. …]
*20. Write a function that calculates the minimum of a quadratic, and the value of the function at the minimum.
*21. A between-times correlation matrix has been calculated from data on heights of trees at times 1, 2, 3, 4, … Write a function that calculates the average of the correlations for any given lag.
*22. Given data on trees at times 1, 2, 3, 4, …, write a function that calculates the matrix of average relative growth rates over the several intervals. Apply your function to the data frame rats that accompanies these notes. [The relative growth rate may be defined as
$$\frac{1}{w}\frac{dw}{dt} = \frac{d\,\log w}{dt}.$$

Hence it is reasonable to calculate the average relative growth rate over the interval from $t_1$ to $t_2$ as $\dfrac{\log w_2 - \log w_1}{t_2 - t_1}$.]
$$\log\left(\frac{\pi}{1-\pi}\right) = a + b_1 x_1$$

Here $\pi$ is an expected proportion, and $\log\left(\dfrac{\pi}{1-\pi}\right) = \mathrm{logit}(\pi)$ is log(odds). The model may also be written

$$y = g(a + b_1 x_1) + \varepsilon = \frac{\exp(a + b_1 x_1)}{1 + \exp(a + b_1 x_1)} + \varepsilon$$

Here $g(\cdot)$ undoes the logit transformation. We can add more explanatory variables: $a + b_1 x_1 + \dots + b_p x_p$. Use glm() to fit generalized linear models.

Additive Model

$$y = \phi_1(x_1) + \phi_2(x_2) + \dots + \phi_p(x_p) + \varepsilon$$
41
This may be generalized in various ways. Models which have this form may be nested within other models which have this basic form. Thus there may be `predictions' and `errors' at different levels within the total model.
We can transform to get the model

$$y = z_1 + z_2 + \dots + z_p + \varepsilon$$

where $z_1 = \phi_1(x_1),\ z_2 = \phi_2(x_2),\ \dots,\ z_p = \phi_p(x_p)$. Some of the $z_j$ may be smoothing functions, while others may be the usual linear model terms. The constant term gets absorbed into one or more of the $\phi$s.

Generalized Additive Model

$$y = g(\phi_1(x_1) + \phi_2(x_2) + \dots + \phi_p(x_p)) + \varepsilon$$

Generalized Additive Models are a generalisation of Generalized Linear Models. For example, $g(\cdot)$ may be the function that undoes the logit transformation, as in a logistic regression model. Again with $z_1 = \phi_1(x_1),\ z_2 = \phi_2(x_2),\ \dots,\ z_p = \phi_p(x_p)$, some of which may be smoothing functions while others may be the usual linear model terms, we can transform to get the model

$$y = g(z_1 + z_2 + \dots + z_p) + \varepsilon$$

Notice that even if $p = 1$, we may still want to retain both $g(\cdot)$ and $\phi_1(\cdot)$, i.e.

$$y = g(\phi_1(x_1)) + \varepsilon$$
The reason is that $g(\cdot)$ is a specific function, such as the inverse of the logit function. The function $\phi_1(\cdot)$ does any further necessary smoothing, in case $g(\cdot)$ is not quite the right transformation. One wants $g(\cdot)$ to do as much as possible of the task of transformation, with $\phi_1(\cdot)$ giving the transformation any necessary additional flourishes. At the time of writing, R has no specific provision for generalized additive models. The fitting of spline (bs() or ns()) terms in a linear model or a generalized linear model will often do what is needed.

10.2 Logistic Regression

We will use a logistic regression model as a starting point for discussing Generalized Linear Models. With proportions that range from less than 0.1 to 0.99, it is not reasonable to expect that the expected proportion will be a linear function of x. Some such transformation (`link function') as the logit is required. A good way to think about logit models is that they work on a log(odds) scale. If p is a probability (e.g. that horse A will win the race), then the corresponding odds are p/(1-p), and

$$\log(\text{odds}) = \log\left(\frac{p}{1-p}\right)$$
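A quick check of the log(odds) scale, for a few illustrative probabilities:

p <- c(0.01, 0.1, 0.5, 0.9, 0.99)
log(p/(1-p))     # logit(p); symmetric about 0, and unbounded at both ends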
Figure 24: The logit or log(odds) transformation. Shown here is a plot of log(odds) versus proportion. Notice how the range is stretched out at both ends. The logit or log(odds) function turns expected proportions into values that may range from -∞ to +∞. It is not satisfactory to use a linear model to predict proportions: the values from the linear model may well lie outside the range from 0 to 1. It is however in order to use a linear model to predict logit(proportion). The logit function is an example of a link function. There are various other link functions that we can use with proportions; one of the commonest is the complementary log-log function.
I am grateful to John Erickson (Anesthesia and Critical Care, University of Chicago) and to Alan Welsh (Centre for Mathematics & its Applications, Australian National University) for allowing me use of these data.
Table 1: Patients moving (0) and not moving (1), for each of six different alveolar concentrations.
Figure 25: Plot, versus concentration, of proportion of patients not moving. The horizontal line is the estimate of the proportion of moves one would expect if the concentration had no effect.
We fit two models, the logit model and the complementary log-log model. We can fit the models either directly to the 0/1 data, or to the proportions in Table 1. To understand the output, you need to know about deviances. A deviance has a role very similar to a sum of squares in regression. Thus we have:
Regression                                  Logistic regression
degrees of freedom                          degrees of freedom
sum of squares                              deviance
mean sum of squares (divide by d.f.)        mean deviance (divide by d.f.)
We prefer models with a small               We prefer models with a small
mean residual sum of squares.               mean deviance.
If individuals respond independently, with the same probability, then we have Bernoulli trials. Justification for assuming the same probability will arise from the way in which individuals are sampled. While individuals will certainly be different in their response, the notion is that, each time a new individual is taken, they are drawn at random from some larger population. Here is the R code:
> anaes.logit <- glm(nomove ~ conc, family = binomial(link = logit),
+                    data = anesthetic)
> summary(anaes.logit)

Call: glm(formula = nomove ~ conc, family = binomial(link = logit),
    data = anesthetic)

Deviance Residuals:
   Min      1Q  Median     3Q   Max
 -1.77  -0.744  0.0341  0.687  2.07

Coefficients:
            Value Std. Error t value
(Intercept) -6.47       2.42   -2.68
conc         5.57       2.04    2.72
(Dispersion Parameter for Binomial family taken to be 1)

    Null Deviance: 41.5 on 29 degrees of freedom
Residual Deviance: 27.8 on 28 degrees of freedom

Number of Fisher Scoring Iterations: 5

Correlation of Coefficients:
     (Intercept)
conc -0.981
Figure 26: Plot, versus concentration, of log(odds) [= logit(proportion)] of patients not moving. The line is the estimate of the proportion of moves, based on the fitted logit model.
With such a small sample size it is impossible to do much that is useful to check the adequacy of the model. You can also try plot(anaes.logit) and plot.gam(anaes.logit).
I am grateful to Dr Edward Linacre, Visiting Fellow, Geography Department, Australian National University, for making these data available.
methods(summary)
to get a list of the summary methods that are available. You may want to mix and match, e.g. summary.lm() on an aov or glm object. The output may not be what you might expect. So be careful!
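For instance, one might compare the two summaries of one and the same fitted object. This sketch assumes the kiwishade data frame of section 5.8.2, with the model chosen purely for illustration:

shade.aov <- aov(yield ~ shade, data=kiwishade)   # no Error term, for simplicity
summary(shade.aov)       # the aov summary method
summary.lm(shade.aov)    # lm-style summary of the same fit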
10.8 Exercises
1. Fit a Poisson regression model to the data in the data frame moths that accompanies these notes. Allow different intercepts for different habitats. Use log(meters) as a covariate.
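One possible starting point for this exercise, assuming (as earlier in these notes) that moths has the count column A together with the columns habitat and meters, is:

moths.glm <- glm(A ~ habitat + log(meters), family=poisson, data=moths)
summary(moths.glm)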
10.9 References
Dobson, A. J. 1983. An Introduction to Statistical Modelling. Chapman and Hall, London.
Hastie, T. J. and Tibshirani, R. J. 1990. Generalized Additive Models. Chapman and Hall, London.
McCullagh, P. and Nelder, J. A., 2nd edn 1989. Generalized Linear Models. Chapman and Hall.
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
             (Intr) shdA2D shdD2F
shadeAug2Dec -0.53
shadeDec2Feb -0.53   0.50
shadeFeb2May -0.53   0.50   0.50

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
 -2.4153887  -0.5981415  -0.0689948   0.7804597   1.5890938

Number of Observations: 48
Number of Groups:
          block  plot %in% block
              3               12

> anova(kiwishade.lme)
            numDF denDF  F-value p-value
(Intercept)     1    36 5190.552  <.0001
shade           3     6   22.211  0.0012
This was a balanced design, which is why section 5.8.2 could use aov() for an analysis. We can get an output summary that is helpful for showing how the error mean squares match up with the standard deviation information given above, thus:
> intervals(kiwishade.lme)
Approximate 95% confidence intervals

 Fixed effects:
                  lower        est.       upper
(Intercept)    96.62977  100.202500  103.775232
shadeAug2Dec   -1.53909    3.030833    7.600757
shadeDec2Feb  -14.85159  -10.281667   -5.711743
shadeFeb2May  -11.99826   -7.428333   -2.858410

 Random Effects:
  Level: block
                    lower   est.  upper
sd((Intercept))         …  2.019      …
  Level: plot
                    lower      est.     upper
sd((Intercept)) 0.3702555  1.478639  5.905037

 Within-group standard error:
   lower     est.    upper
2.770678 3.490378 4.397024
We are interested in the three estimates. By squaring the standard deviations and converting them to variances we get the information in the following table:

Variance component          Estimate             Notes
block                       2.019^2 =  4.076     Three blocks
plot                        1.479^2 =  2.186     4 plots per block
residual (within group)     3.490^2 = 12.180     4 vines (subplots) per plot
The above allows us to put together the information for an analysis of variance table. We have:

Variance component        Estimate   Mean square for anova table             d.f.
block                      4.076     12.180 + 4 x 2.186 + 16 x 4.076 = 86.14   2   (3-1)
plot                       2.186     12.180 + 4 x 2.186 = 20.92                6   (3-1) x (4-1)
residual (within group)   12.180     12.18                                    36   12 x (4-1)
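The arithmetic is easily checked in R:

12.180 + 4*2.186                 # plot mean square: 20.92
12.180 + 4*2.186 + 16*4.076      # block mean square: 86.14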
Now see where these same pieces of information appeared in the analysis of variance table of section 5.8.2:
> kiwishade.aov <- aov(yield ~ block + shade + Error(block:shade),
+                      data=kiwishade)
> summary(kiwishade.aov)

Error: block:shade
          Df  Sum Sq Mean Sq F value   Pr(>F)
block      2  172.35   86.17  4.1176 0.074879
shade      3 1394.51  464.84 22.2112 0.001194
Residuals  6  125.57   20.93
A reasonable guess is that first order interactions may be all we need, i.e.
it2.lme <- lme(log(it) ~ (tint+target+agegp+sex)^2, random = ~1|id,
               data=tinting, method="ML")
Data relate to the paper: Burns, N. R., Nettlebeck, T., White, M. and Willson, J. 1999. Effects of car window tinting on visual performance: a comparison of elderly and young drivers. Ergonomics 42: 428-443.
Finally, there is the very simple model, allowing only for main effects:
it1.lme <- lme(log(it) ~ (tint+target+agegp+sex), random = ~1|id,
               data=tinting, method="ML")
Note that we have fitted all these models by maximum likelihood. This is so that we can do the equivalent of an analysis of variance comparison. Here is what we get:
> anova(itstar.lme, it2.lme, it1.lme)
           Model df       AIC      BIC    logLik   Test  L.Ratio p-value
itstar.lme     1 26  8.146187 91.45036 21.926906
it2.lme        2 17 -3.742883 50.72523 18.871441 1 vs 2  6.11093  0.7288
it1.lme        3  8  1.138171 26.77022  7.430915 2 vs 3 22.88105  0.0065
The model that limits attention to first order interactions is adequate. We will need to examine the first order interactions individually. For this we re-fit the model used for it2.lme, but now with method="REML".
it2.reml <- update(it2.lme, method="REML")
Among the coefficient estimates from the summary of it2.reml are:

                       Value Std.Error  DF
tint.L.targethicon  -0.09193    0.0461 145
tint.Q.targethicon  -0.00722    0.0482 145
tint.L.agegp        -0.13075    0.0492 145
tint.Q.agegp        -0.06972    0.0520 145
tint.L.sex           0.09794    0.0492 145
tint.Q.sex          -0.00542    0.0520 145
targethicon.agegp    0.13887    0.0584 145
targethicon.sex     -0.07785    0.0584 145
agegp.sex            0.33164    0.3261  22
> library(MASS)        # if needed
> data(michelson)      # if needed
> michelson$Run <- as.numeric(michelson$Run)    # Ensure Run is a variable
> mich.lme1 <- lme(fixed = Speed ~ Run, data = michelson,
+                  random = ~ Run | Expt,
+                  correlation = corAR1(form = ~ 1 | Expt),
+                  weights = varIdent(form = ~ 1 | Expt))
> summary(mich.lme1)
Linear mixed-effects model fit by REML
 Data: michelson
   AIC  BIC logLik
  1113 1142   -546
Random effects:
 Formula: ~Run | Expt
 Structure: General positive-definite
            StdDev Corr
(Intercept)  46.49 (Intr)
Run           3.62 -1
Residual    121.29
Correlation Structure: AR(1)
 Formula: ~1 | Expt
 Parameter estimate(s):
  Phi
0.527

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | Expt
 Parameter estimates:
    1     2     3     4     5
1.000 0.340 0.646 0.543 0.501

Fixed effects: Speed ~ Run
            Value Std.Error DF t-value p-value
(Intercept)   868     30.51 94   28.46  <.0001
Run            -2      2.42 94   -0.88   0.381
 Correlation:
    (Intr)
Run -0.934

Standardized Within-Group Residuals:
   Min     Q1    Med     Q3    Max
-2.912 -0.606  0.109  0.740  1.810
There are (at least) two types of method: time domain methods and frequency domain methods. In the time domain, models may be conventional short memory models, where the autocorrelation function decays quite rapidly to zero, or the relatively recently developed long memory time series models, where the autocorrelation function decays very slowly as observations move apart in time. A characteristic of long memory models is that there is variation at all temporal scales. Thus in a study of wind speeds it may be possible to characterise windy days, windy weeks, windy months, windy years, windy decades, and perhaps even windy centuries. R does not yet have functions for fitting the more recently developed long memory models.

The function stl() decomposes a time series into trend and seasonal components, etc. The functions ar() (for autoregressive models) and associated functions, and arima0() (autoregressive integrated moving average models), fit standard types of time domain short memory models. Note also the function gls() in the nlme library, which can fit relatively complex models that may have autoregressive, arima and various other types of dependence structure. The function spectrum() and related functions are designed for frequency domain or spectral analysis.
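As a minimal sketch of the short memory fitting functions (this uses the built-in Lake Huron series; in some versions of R the ts library must be loaded first):

library(ts)            # if needed
data(LakeHuron)
ar(LakeHuron)          # autoregressive model, with the order chosen by AIC
spectrum(LakeHuron)    # spectral (frequency domain) estimate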
11.4 Exercises
1. Use the function acf() to plot the autocorrelation function of lake levels in successive years in the data set huron. Do the plots both with type="correlation" and with type="partial".
11.5 References
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books and Software, Pacific Grove CA.
Diggle, Liang & Zeger 1996. Analysis of Longitudinal Data. Clarendon Press, Oxford.
Everitt, B. S. and Dunn, G. 1992. Applied Multivariate Data Analysis. Arnold, London.
Hand, D. J. & Crowder, M. J. 1996. Practical Longitudinal Data Analysis. Chapman and Hall, London.
Littell, R. C., Milliken, G. A., Stroup, W. W. and Wolfinger, R. D. 1996. SAS Systems for Mixed Models. SAS Institute Inc., Cary, North Carolina.
Pinheiro, J. C. and Bates, D. M. 2000. Mixed Effects Models in S and S-PLUS. Springer, New York.
Venables, W. N. and Ripley, B. D., 2nd edn 1997. Modern Applied Statistics with S-Plus. Springer, New York.
Here fac has the class ordered, which inherits from the parent class factor. The function print.ordered(), which is the function that is called when you invoke print() with an ordered factor, makes use of the fact that ordered inherits from factor.

> print.ordered
function (x, quote = FALSE)
{
    if (length(x) <= 0)
        cat("ordered(0)\n")
    else print(levels(x)[x], quote = quote)
    cat("Levels: ", paste(levels(x), collapse = " < "), "\n")
    invisible(x)
}

Note that it is a convenience for print.ordered() to call print.factor(). The function print.glm() does not call print.lm(), even though glm objects inherit from lm objects.
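The object fac is assumed from earlier text; a sketch that creates such an object and shows the dispatch at work is:

fac <- ordered(c("low","high","medium"), levels=c("low","medium","high"))
fac            # print() dispatches to print.ordered()
unclass(fac)   # the underlying integer codes, with the levels attribute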
If the argument is a function, we may want to get at the arguments to the function. Here is how one can do it
deparse.args <<function (a) { s <<- substitute (a)
99
if(mode(s) == "call"){ # the first element of a 'call' is the function called # so we don't deparse that, just the arguments. print(paste(The function is: lapply (s[(s[-1], function (x) paste (deparse(x), collapse = "\ "\n")) } else stop ("argument is not a function call") } , s[1],(), collapse=))
For example:
> deparse.args(list(x+y, foo(bar)))
[1] "The function is:  list ()"
[[1]]
[1] "x + y"

[[2]]
[1] "foo(bar)"
stores this unevaluated expression in my.exp. The actual contents of my.exp are a little different from what is printed out; R gives you as much information as it thinks helpful. Note that expression(mean(x+y)) is different from expression("mean(x+y)"), as is obvious when the expression is evaluated. A text string is a text string is a text string, unless one explicitly changes it into an expression or part of an expression. Let's see how this works in practice:
> x <- 101:110
> y <- 21:30
> my.exp <- expression(mean(x+y))
> my.txt <- expression("mean(x+y)")
> eval(my.exp)
[1] 131
> eval(my.txt)
[1] "mean(x+y)"
What if we already have mean(x+y) stored in a text string, and want to turn it into an expression? The answer is to use the function parse(), but indicate that the parameter is text rather than a file name. Thus
> parse(text="mean(x+y)") expression(mean(x + y))
Here is a function that creates a new data frame from an arbitrary set of columns of an existing data frame. Once inside the function, we attach the data frame, so that we can leave off the name of the data frame and use only the column names:
make.new.df <- function(old.df = austpop, colnames = c("NSW", "ACT"))
{
    attach(old.df)
    on.exit(detach(old.df))
    argtxt <- paste(colnames, collapse = ",")
    exprtxt <- paste("data.frame(", argtxt, ")", sep = "")
    expr <- parse(text = exprtxt)
    df <- eval(expr)
    names(df) <- colnames
    df
}
The function do.call() makes it possible to supply the function name and the argument list in separate text strings. When do.call() is used, it is only necessary to use parse() in generating the argument list. For example
make.new.df <<function(old.df = austpop, colnames = c("NSW", "ACT")) { attach(old.df) on.exit(detach(old.df)) argtxt <<- paste(colnames, collapse = ",") listexpr <<- parse(text=paste("list(", argtxt, ")", sep = "")) "")) df <<- do.call(data.frame, eval(listexpr)) names(df) <<- colnames df }
    xname <- all.vars(expr)
    if(length(xname) > 1)stop(paste("There are multiple variables, i.e.",
                               paste(xname, collapse=" & "),
                               "on the right of the equation"))
    if(length(list(...))==0) assign(xname, 1:10)
    else {
        nam <- names(list(...))
        if(nam!=xname)stop("Clash of variable names")
        assign("x", list(...)[[1]])
        assign(xname, x)
    }
    y <- eval(expr)
    yexpr <- parse(text=left)[[1]]
    xexpr <- parse(text=xname)[[1]]
    plot(x, y, ylab = yexpr, xlab = xexpr, type="n")
    lines(spline(x,y))
    mainexpr <- parse(text=paste(left, "==", right))
    title(main = mainexpr)
}
Try
plotcurve()
plotcurve("ang=asin(sqrt(p))", p=(1:49)/50)
mygrep("for")
Look in the directory contrib for libraries. New libraries are being added all the time. So it pays to check the CRAN site from time to time. Also, watch for announcements on the electronic mailing lists r-help and r-announce.
Chambers, J. M. 1998. Programming with Data. A Guide to the S Language. Springer-Verlag, New York.
This is a book for specialists. It describes a new version of the S language that is the basis for version 5 of S-PLUS.
Chambers, J. M. and Hastie, T. J. 1992. Statistical Models in S. Wadsworth and Brooks Cole Advanced Books and Software, Pacific Grove CA.
This is the basic reference on R and S-PLUS model formulae and models.
Everitt, B. S. 1994. A Handbook of Statistical Analyses using S-PLUS. Chapman and Hall, London.
The choice of analysis methods may seem idiosyncratic. It has little on the more recently developed methodology.
Harrell, F. 1997. An Introduction to S-PLUS and the Hmisc and Design Libraries.
The latest version of this manual is available from http://hesweb1.med.virginia.edu/biostat/s/index.html. Chapters 1-4 and 9-10 are a good introduction to S-PLUS, likely to be particularly helpful to anyone who comes to R or S-PLUS from SAS. The examples in this manual are largely medical.
Krause, A. and Olsen, M. 1997. The Basics of S and S-PLUS. Springer 1997.
This is an introductory book, at about the same level as Spector.
Venables, W.N., Smith, D.M. and the R Development Core Team. An Introduction to R. Notes on R: A Programming Environment for Data Analysis and Graphics.
[A current version is available from CRAN sites. This is derived from an original set of notes written by Bill Venables and Dave Smith for the S and S-PLUS environments.]
Venables, W. N. and Ripley, B. D., 3rd edn 1999. Modern Applied Statistics with S-PLUS. Springer, New York.
This has become a text book for the use of S-PLUS and R for applied statistical analysis. It assumes a fair level of statistical sophistication. Explanation is careful, but often terse. Together with the Complements it gives brief introductions to extensive libraries of functions that have been written or adapted by Ripley, Venables, and a number of other statisticians. Supplementary material (`Complements) is available from http://www.stats.ox.ac.uk/pub/MASS3/.
The supplementary material is extensive, and is continually being added to. The present version of the statistical `Complements' has extensive information on new libraries that have come from third party sources.
Venables, W.N. and Ripley, B.D. 2000. S Programming. Springer, New York.

This is a terse and careful introduction to the dialects of the S language, including R.

R Development Core Team 1999. An Introduction to R.
This document is available from the CRAN sites noted in section 13.1.
See also the code designed to accompany Cook and Weisberg's book Applied Regression Including Computing and Graphics (Wiley 1999), available from
http://www.stat.umn.edu/arc
Section 2.8
1. The value of answer is (a) 12, (b) 22, (c) 600.
2. prod(c(10,3:5))
3(i)  bigsum <- 0; for (i in 1:100) {bigsum <- bigsum+i}; bigsum
3(ii) sum(1:100)
4(i)  bigprod <- 1; for (i in 1:50) {bigprod <- bigprod*i}; bigprod
4(ii) prod(1:50)
5. radius <- 3:20; volume <- 4*pi*radius^3/3
   sphere.data <- data.frame(radius=radius, volume=volume)
6. sapply(tinting, is.factor)
Section 3.7
1. plot(Animals$body, Animals$brain, pch=1,
        xlab="Body weight (kg)", ylab="Brain weight (g)")
2. plot(log(Animals$body), log(Animals$brain), pch=1,
        xlab="Body weight (kg)", ylab="Brain weight (g)", axes=F)
   brainaxis <- 10^seq(-1,4)
   bodyaxis <- 10^seq(-2,4)
   axis(1, at=log(bodyaxis), lab=bodyaxis)
   axis(2, at=log(brainaxis), lab=brainaxis)
   box()
   identify(log(Animals$body), log(Animals$brain),
            labels=row.names(Animals))
Section 7.10
1. x <- seq(101,112) or x <- 101:112
2. rep(c(4,6,3), 4)
3. c(rep(4,8), rep(6,7), rep(3,9)) or rep(c(4,6,3), c(8,7,9))
4. rep(seq(1,9), seq(1,9)) or rep(1:9, 1:9)
5. Use summary(airquality) to get this information.
6(a) 2 7 7 7 12 12 12 (a running maximum: each comparison uses the already-updated answer[j-1])
6(b) 2 9 10 15 27 30 34 (the cumulative sums, for the same reason)
7. airquality[airquality$Ozone == max(airquality$Ozone), ]
   airquality$Wind[airquality$Ozone > quantile(airquality$Ozone, .75)]
8. mean(snow$snow.cover[seq(2,10,2)])
mean(snow$snow.cover[seq(1,9,2)])
9. sapply(Cars93, is.factor) to determine which columns are factors; then levels(Cars93$Manufacturer), etc. To check which are ordered factors, type in sapply(Cars93, is.ordered)
11. mtcars6 <- mtcars[mtcars$cyl==6, ]
12. Cars93[Cars93$Type=="Small" | Cars93$Type=="Sporty", ]
13. mat34 <- matrix(rep(c(4,6,3),4), nrow=3, ncol=4)
14. mat64 <- matrix(c(rep(4,8),rep(6,7),rep(3,9)), nrow=6, ncol=4)
    mat64[3:6, 3:4]