Analysis of Epidemiological Data Using R
Author: Virasakdi Chongsuvivatwong
[email protected]

Editor: Edward McNeil
[email protected]

Epidemiology Unit
Prince of Songkla University
THAILAND
Preface
Data analysis is very important in epidemiological research. The capacity of computing facilities has been steadily increasing, and state-of-the-art epidemiological studies have moved along with this advancement. Currently, there are many commercial statistical software packages widely used by epidemiologists around the world. For developed countries, the cost of software is not a major problem. For developing countries, however, the real cost is often too high, and several researchers eventually rely on pirated copies of the software. Freely available software packages are limited in number and readiness of use. EpiInfo, for example, is free and useful for data entry and simple data analysis. Advanced data analysts, however, find it too limited in many respects. For example, it is not suitable for data manipulation in longitudinal studies, its regression analysis facilities cannot cope with repeated measures and multi-level modelling, and its graphing facilities are limited. A relatively new and freely available software package called R is promising. Supported by leading statistical experts worldwide, it has almost everything that an epidemiological data analyst needs. However, it is difficult to learn and to use compared with similar statistical packages for epidemiological data analysis, such as Stata. The purpose of this book is therefore to bridge this gap by making R easy to learn for researchers from developing countries and also to promote its use.

My experience in epidemiological studies spans over twenty years, with a special fondness for teaching data analysis. Inspired by the spirit of the open-source software philosophy, I have spent a tremendous effort exploring the potential and use of R. For four years, I have been developing an add-on package for R that allows new researchers to use the software with enjoyment. More than twenty chapters of lecture notes and exercises have been prepared with datasets ready for self-study. Supported by WHO, TDR and the Thailand Research Fund, I have also run a number of workshops for this software in developing countries including Thailand, Myanmar, North Korea and Maldives, where R and Epicalc were very much welcomed. With this experience, I hereby propose that the use of this software should be encouraged among epidemiological researchers, especially those who cannot afford to buy expensive commercial software packages.

R is an environment that can handle several datasets simultaneously. Users get access to variables within each dataset either by copying it to the search path or by including the dataset name as a prefix. The power of R in this respect, however, is a drawback in data manipulation. When creating a variable or modifying an existing one without prefixing the dataset name, the new variable is isolated from its parental dataset. If prefixing is the choice, the original data is changed but not the copy in the search path. Careful users need to remove the copy in the search path and recopy the new dataset into it. The procedure in this respect is clumsy.
Not being tidy will eventually end up with too many copies in the search path, overloading the system or confusing the analyst as to where a variable is actually located. Epicalc presents a simple solution for the common type of work in which the data analyst works on one dataset at a time using only a few commands. With Epicalc the user can virtually eliminate the need to specify the dataset and can avoid overloading the search path very effectively and efficiently. In addition to making the tidying of memory easy to accomplish, Epicalc makes it easy to recognize variables by adopting variable labels or descriptions, whether prepared in other software such as SPSS or Stata or created locally by Epicalc itself.

R has very powerful graphing functions that the user has to spend time learning. Epicalc exploits this power by producing a nice plot of the distribution automatically whenever a single variable is summarised. A breakdown of the first variable by a second, categorical variable is also simple, and graphical results are automatically displayed. This automatic graphing strategy is also applied to one-way and two-way tabulation. Descriptions of the variables and the value or category labels are fully exploited in these descriptive graphs.

Additional epidemiological functions added by Epicalc include calculation of sample size, matched 1:n (n can vary) tabulation, kappa statistics, drawing of ROC curves from a table or from logistic regression results, population pyramid plots from age and sex, and follow-up plots. R has several advanced regression modelling functions, such as multinomial logistic regression, ordinal logistic regression, survival analysis and multi-level modelling. With Epicalc, neat tables of odds ratios and 95% CIs are produced, ready for simple transfer into a manuscript document with minimal further modification required. Although use of Epicalc implies a different way of working with R from conventional use, installation of Epicalc has no effect on any existing or new functions of R. Epicalc functions only increase the efficiency of data analysis and make R easier to use.

This book is essentially about learning R with an emphasis on Epicalc. Readers should have some background in basic computer usage. With R, Epicalc and the supplied datasets, the users should be able to go through each lesson learning the concepts of data management, related statistical theories and the practice of data analysis and powerful graphing.

The first four chapters introduce R concepts and simple handling of important basic elements such as scalars, vectors, matrices, arrays and data frames. Chapter 5 deals with simple data exploration. Date and time variables are defined and dealt with in Chapter 6 and fully exploited in a real dataset in Chapter 7. Descriptive statistics and one-way tabulations are automatically accompanied by corresponding graphs, making it rather unlikely that important information is overlooked. Finally, time plots of exposure and disease onsets are plotted with a series of demonstrating
commands. Chapter 8 continues to investigate the outbreak with two-way tabulation. Various kinds of risk assessment, such as the risk ratio and protective efficacy, are analysed with numeric and graphic results. Chapter 9 extends the analysis of the dataset to deal with levels of association or odds ratios. Stratified tabulation, the Mantel-Haenszel odds ratio, and the test of homogeneity of odds ratios are explained in detail. All results are complemented by simultaneous plots. With these graphs, the concept of confounding is made more understandable.

Before proceeding further, the reader gets a thorough exercise in data cleaning and standard data manipulation in Chapter 10. Simple looping commands are introduced to increase the efficiency of data management. Subsequently, and from time to time in the book, readers will learn how to develop these loops to create powerful graphs.

Scatter plots, simple linear regression and analysis of variance are presented in Chapter 11. Stratified scatter plots to enhance the concepts of confounding and interaction for continuous outcome variables are given in Chapter 12. Curvilinear models are discussed in Chapter 13. Linear modelling is extended to generalized linear modelling in Chapter 14.

For binary outcome variables, Chapter 15 introduces logistic regression with an additional comparison with the stratified cross-tabulation learned in Chapter 9. The concept of a matched case control study is discussed in Chapter 16 with matched tabulation for 1:1 and 1:n matching. Finally, conditional logistic regression is applied. Chapter 17 introduces polytomous logistic regression using a case-control study in which one type of case series is compared with two types of control groups. Ordinal logistic regression is applied to ordered outcomes in Chapter 18.

For a cohort study with grouped exposure datasets, Poisson regression is used in Chapter 19. Extra-Poisson regression for overdispersion is also discussed; this includes modelling the outcome using the negative binomial error distribution. Multi-level modelling and longitudinal data analysis are discussed in Chapter 20. For cohort studies with individual follow-up times, survival analysis is discussed in Chapter 21 and the Cox proportional hazards model is introduced in Chapter 22. Chapter 23 deals with day-to-day work in the calculation of sample sizes, and finally the technique of documentation, which all professional data analysts must master, is explained in Chapter 24. The book ends with a few suggested strategies for handling large datasets in Chapter 25.

At the end of each chapter some references are given for further reading. Most chapters also end with some exercises to practice on. Solutions to these are given at the end of the book.
Colour
It is assumed that the readers of this book will simultaneously practice the commands and see the results on the screen. The explanations in the text sometimes describe the colour of graphs that appear in black and white in this book (the reason for this is purely for reducing the printing costs). The electronic copy of the book, however, does include colour.
Table of Contents
Chapter 1: Starting to use R ________________________ 1
    Installation ________________________ 1
    Text Editors ________________________ 3
    Starting R Program ________________________ 3
    R libraries & packages ________________________ 4
    On-line help ________________________ 6
    Using R ________________________ 8
    Exercises ________________________ 13
Chapter 2: Vectors ________________________ 14
    Concatenation ________________________ 15
    Subsetting a vector with an index vector ________________________ 16
    Missing values ________________________ 21
    Exercises ________________________ 22
Chapter 3: Arrays, Matrices and Tables ________________________ 23
    Arrays ________________________ 23
    Matrices ________________________ 27
    Tables ________________________ 27
    Lists ________________________ 29
    Exercises ________________________ 31
Chapter 4: Data Frames ________________________ 32
    Data entry and analysis ________________________ 34
    Data frames included in Epicalc ________________________ 35
    Reading in data ________________________ 35
    The 'use' command in Epicalc ________________________ 42
    Exercises ________________________ 44
Chapter 5: Simple Data Exploration ________________________ 45
    Data exploration using Epicalc ________________________ 45
    Exercises ________________________ 57
Chapter 6: Date and Time ________________________ 58
    Computation functions related to date ________________________ 58
    Reading in a date variable ________________________ 60
    Dealing with time variables ________________________ 62
    Exercises ________________________ 69
Chapter 7: An Outbreak Investigation: Describing Time ________________________ 70
    Paired plot ________________________ 76
    Exercise ________________________ 79
Chapter 8: An Outbreak Investigation: Risk Assessment ________________________ 80
    Recoding missing values ________________________ 80
    Exploration of age and sex ________________________ 83
    Comparison of risk: Risk ratio and attributable risk ________________________ 85
    Dose-response relationship ________________________ 87
    Exercise ________________________ 88
Chapter 9: Odds Ratios, Confounding and Interaction ________________________ 89
    Odds and odds ratio ________________________ 89
    Confounding and its mechanism ________________________ 91
    Interaction and effect modification ________________________ 94
    Exercise ________________________ 96
Chapter 10: Basic Data Management ________________________ 97
    Data cleaning ________________________ 97
    Identifying duplication ID ________________________ 97
    Missing values ________________________ 99
    Recoding values using Epicalc ________________________ 102
    Labelling variables with 'label.var' ________________________ 104
    Adding a variable to a data frame ________________________ 107
    Collapsing categories ________________________ 109
Chapter 11: Scatter Plots & Linear Regression ________________________ 111
    Scatter plots ________________________ 112
    Components of a linear model ________________________ 114
    Regression line, fitted values and residuals ________________________ 117
    Checking normality of residuals ________________________ 118
    Exercise ________________________ 120
Chapter 12: Stratified linear regression ________________________ 121
    Exercise ________________________ 128
Chapter 13: Curvilinear Relationship ________________________ 129
    Stratified curvilinear model ________________________ 133
    Modelling with a categorical independent variable ________________________ 135
    Exercise ________________________ 136
Chapter 14: Generalized Linear Models ________________________ 137
    Model attributes ________________________ 138
    Attributes of model summary ________________________ 139
    Covariance matrix ________________________ 139
    References ________________________ 142
    Exercise ________________________ 142
Chapter 15: Logistic Regression ________________________ 143
    Distribution of binary outcome ________________________ 143
    Logistic regression with a binary independent variable ________________________ 147
    Interaction ________________________ 149
    Interpreting the odds ratio ________________________ 151
    Changing the referent level ________________________ 158
    References ________________________ 158
    Exercise ________________________ 159
Chapter 16: Matched Case Control Study ________________________ 160
    1:n matching ________________________ 162
    Logistic regression for 1:1 matching ________________________ 163
    Conditional logistic regression ________________________ 165
    References ________________________ 166
    Exercises ________________________ 166
Chapter 17: Polytomous Logistic Regression ________________________ 168
    Polytomous logistic regression using R ________________________ 170
    Exercises ________________________ 174
Chapter 18: Ordinal Logistic Regression ________________________ 176
    References ________________________ 179
    Exercise ________________________ 179
Chapter 19: Poisson and Negative Binomial Regression ________________________ 180
    Modelling with Poisson regression ________________________ 183
    Goodness of fit test ________________________ 184
    Incidence density (ID) ________________________ 186
    Negative binomial regression ________________________ 188
    References ________________________ 191
    Exercise ________________________ 191
Chapter 20: Introduction to Multi-level Modelling ________________________ 192
    Random intercepts model ________________________ 195
    Model with random slopes ________________________ 199
    Exercises ________________________ 204
Chapter 21: Survival Analysis ________________________ 205
    Survival object in R ________________________ 208
    Life table ________________________ 209
    Kaplan-Meier curve ________________________ 210
    Cumulative hazard rate ________________________ 211
    References ________________________ 214
    Exercises ________________________ 215
Chapter 22: Cox Regression ________________________ 216
    Testing the proportional hazard assumption ________________________ 217
    Stratified Cox regression ________________________ 220
    References ________________________ 222
    Exercises ________________________ 223
Chapter 23: Sample size calculation ________________________ 224
    Field survey ________________________ 224
    Comparison of two proportions ________________________ 227
    Comparison of two means ________________________ 231
    Lot quality assurance sampling ________________________ 232
    Power determination for comparison of two proportions ________________________ 234
    Power for comparison of two means ________________________ 235
    Exercises ________________________ 236
Chapter 24: Documentation ________________________ 237
    Crimson Editor ________________________ 238
    Tinn-R ________________________ 239
    Saving the output text ________________________ 242
    Saving a graph ________________________ 243
Chapter 25: Strategies of Handling Large Datasets ________________________ 245
    Simulating a large dataset ________________________ 245
Solutions to Exercises ________________________ 249
Index ________________________ 269
Epicalc Functions ________________________ 272
Epicalc Datasets ________________________ 273
Chapter 1: Starting to use R

This chapter concerns first use of R, covering installation, input of data, output of files, creation of command files and additional documentation. NOTE: This book is written for the Windows operating system.
Installation
R is distributed under the terms of the GNU General Public License. It is freely available for use and distribution under the terms of this license. The latest versions of R and Epicalc and their documentation can be downloaded from CRAN (the Comprehensive R Archive Network). The master web site is http://cran.r-project.org/ but there are mirrors all around the world, and users should download the software from the nearest site. The set-up file for R is around 28Mb. To run the installation simply double-click this file and follow the instructions.

After installation, a shortcut icon of R should appear on the desktop. Right-click this R icon to change its start-up properties. Replace the default 'Start in' folder with your own working folder. This is the folder where you want R to work. Otherwise, the input and output of files will be done in the program folder, which is not good practice. You can create multiple shortcut icons with different start-in folders for each project you are working on. Suppose the work related to this book will be stored in a folder called 'C:\RWorkplace'. The 'Properties' of the icon should then have the 'Start in:' text box filled with 'C:\RWorkplace' (do not type the single quote signs; they are used in this book to indicate objects or technical names).

R detects the main language of the operating system and tries to use menus and dialog boxes in that language. For example, if you are running R on Windows XP in the Chinese language, the menus and dialog boxes will appear in Chinese. Since this book is written in English, it is advised to set the language to English so that the responses on your computer will be the same as those in this book. In the 'Shortcut' tab of the R icon properties, add ' Language=en' at the end of the 'Target'. Include a space before the word 'Language'.
So, the 'Target' text box for the R-2.4.0 icon would be:

"C:\Program Files\R\R-2.4.0\bin\Rgui.exe" Language=en

To use this book efficiently, a specialised text editor such as Crimson Editor or Tinn-R must be installed on your computer. In addition, the Epicalc package needs to be installed and loaded.
Tinn-R
Tinn-R is probably the best text file editor to use in conjunction with the R program. It is specifically designed for working with R script files. In addition to syntax highlighting of R code, Tinn-R can interact with R using specific menus and tool bars. This means that sections of commands can be highlighted and sent to the R console (sourced) with a single button click. Tinn-R can be downloaded from the Internet at: www.sciviews.org/Tinn-R.
Starting R Program
After modifying the start-up properties of the R icon, double-click the R icon on the desktop. The program should then start and the following output is displayed on the R console.
R version 2.4.0 (2006-10-03)
Copyright (C) 2006 The R Foundation for Statistical Computing
ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

>
The output shown above was produced from R version 2.4.0, released on October
03, 2006. The second paragraph declares and briefly explains the warranty and license. The third paragraph gives information about contributors and how to cite R in publications. The fourth paragraph suggests a few commands for first-time users to try.

In this book, R commands begin with the ">" sign, similar to what is shown at the R console window. You should not type the ">"; just type the commands. Within this document, both the R commands and output lines are in Courier New font whereas the explanatory text is in Times New Roman. Epicalc commands are shown in italics, whereas standard R commands are shown in normal font style.

The first thing to practice is to quit the program. Click the cross sign at the far right upper corner of the program window or type the following at the R console:
> q()
A dialog box will appear asking "Save workspace image?" with three choices: "Yes", "No" and "Cancel". Choose "Cancel" to continue working with R. If you choose "Yes", two new files will be created in your working folder. Any previous commands that have been typed at the R console will be saved into a file called '.Rhistory' while the current workspace will be saved into a file called '.Rdata'. Notice that these two files have no prefix. In the next session of computing, when R is started in this folder, the image of the working environment of the last saved R session will be retrieved automatically, together with the command history. Continued use of R in this fashion (quitting and saving the unnamed workspace image) will result in these two files becoming larger and larger. Usually one would like to start R afresh every time so it is advised to always choose "No" when prompted to save the workspace. Alternatively you may type:
> q("no")
to quit without saving the workspace image and prevent the dialog box message appearing. Note that before quitting R you can save your workspace image by typing
> save.image("C:/RWorkplace/myFile.RData")
where 'myFile' is the name of your file. Then when you quit R you should answer "No" to the question.
There are about 25 packages supplied with R (called standard or recommended packages) and many more are available through the CRAN web site. Only 7 of these packages are loaded into memory when R is executed. To see which packages are currently loaded into memory you can type:
> search()
[1] ".GlobalEnv"        "package:methods"   "package:stats"
[4] "package:graphics"  "package:grDevices" "package:utils"
[7] "package:datasets"  "Autoloads"         "package:base"
The list shown above is in the search path of R. When R is told to do any work, it will look for a particular object for it to work with from the search path. First, it will look inside '.GlobalEnv', which is the global environment. This will always be the first search position. If R cannot find what it wants here, it then looks in the second search position, in this case "package:methods", and so forth. Any function that belongs to one of the loaded packages is always available during an R session.
Epicalc package
The Epicalc package can be downloaded from the web site http://cran.r-project.org. On the left pane of this web page, click 'Packages'. Move down the alphabetical list of packages to find 'epicalc'. The short and humble description is 'Epidemiological calculator'. Click 'epicalc' to hyperlink to the download page. On this page you can download the Epicalc source code (.tar.gz) and the binary versions for Macintosh (.tgz) and Windows (.zip), along with the documentation (.pdf).

The Epicalc package is updated from time to time. The version number is in the suffix. For example, Epicalc_2.4.0.9 is the binary file for use on the Windows operating system and the version of Epicalc is 2.4.0.9. A newer version is created to have the bugs (errors in the programme) fixed, to improve the features of existing functions (commands) and to include new functions. The file epicalc.version.zip ('version' increases with time) is a compressed file containing the fully compiled Epicalc package.

Installation of this package must be done within R itself. Usually only one installation session is needed, unless you want to overwrite the old package with a newer one of the same name. You will also need to reinstall this package if you install a new version of R. To install Epicalc, click 'Packages' on the menu bar at the top of the window. Choose 'Install packages from local zip files...'. When the navigating window appears, browse to find the file and open it. Successful installation will result in:
> utils:::menuInstallLocal()
package 'epicalc' successfully unpacked and MD5 sums checked
updating HTML package descriptions
Installation is now complete; however functions within Epicalc are still not available until the following command has been executed:
> library(epicalc)
Note the use of lowercase letters. If the console accepts the command quietly, we can be reasonably confident that the command has been executed successfully. Otherwise, errors or warnings will be reported.

A common warning is a report of a conflict. This warning is, most of the time, not very serious. It just means that an object (usually a function) with the same name already exists in the working environment. In this case, R will give priority to the object that was loaded more recently. The command library(epicalc) must be typed every time a new session of R is run.
Updating packages
Whenever a new version of a package is released it is advised to keep up to date by removing (unloading) the old one and loading the new one. To unload the Epicalc package, you may type the following at the R console:
> detach(package:epicalc)
After typing the above command, you may then install the new version of the package as mentioned in the previous section. If there are any problems, you may need to quit R and start afresh.
Rprofile.site
Whenever R is run it will execute the commands in the Rprofile.site file, which is located in the C:\Program Files\R\R-2.4.0\etc folder. By including the command library(epicalc) in the Rprofile.site file, every time R is run the Epicalc package will be automatically loaded and ready for use. You may edit this file and insert the command above. Your Rprofile.site file should then look something like this:
library(epicalc)

# Things you might want to change
# options(papersize="a4")
# options(editor="notepad")
# options(pager="internal")
On-line help
On-line help is very useful when using software, especially for first time users.
Self-studying is also possible from the on-line help of R, although with some difficulty. This is particularly true for non-native speakers of English, for whom the manuals can often be too technical or wordy. It is advised to combine the use of this book as a tutorial and the on-line help as a reference manual.

On-line help documentation comes in three different versions in R. The default version shows help information in a separate window within R. This format is written in a simple markup language that can be read by R and can be converted to LaTeX, which is used to produce the printed manuals. The other versions, which can be set in the Rprofile.site file mentioned previously, are HTML (htmlhelp=TRUE) and compiled HTML (chmhelp=TRUE). The latter version is Windows-specific and, if chosen, help documentation will appear in a Windows help viewer. Each help format has its own advantages and you are free to choose the format you want. For self study, type
> help.start()
The system will open your web browser from the main menu of R. 'An Introduction to R' is the section that all R users should try to read first. Another interesting section is 'Packages'. Click this to see what packages you have available. If the Epicalc package has been loaded properly, then this name should also appear in the list. Click 'Epicalc' to see the list of the functions available. Click each of the functions one by one and you will see the help for that individual function. This information can also be obtained by typing 'help(myFun)' at the R console, where 'myFun' is the name of the function. To get help on the 'help' function you can type
> help(help)
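For fuzzy searching of keywords in the help pages, the command that the next sentence refers to is presumably R's 'help.search' function, with the keyword left as dots for the reader to fill in:

> help.search("...")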
Replace the dots with the keyword you want to search for. This function also allows you to search on multiple keywords. You can use this to refine a query when you get too many responses.

Very often the user would want to know how to get other statistical analysis functions that are not available in a currently installed package. A good option is to use the 'search' feature located on the left side of the CRAN web page; Google will then do a search within CRAN. The results can be quite extensive and useful, and the user can then choose which website to visit for further learning. Now type
> search()
You should see "package:epicalc" in the list. If the Epicalc package has not been loaded, then the functions contained inside will not be available for use. Having the Epicalc package in the search path means we can use all commands or functions in that package. Other packages can be called when appropriate. For example, the package 'survival' is necessary for survival analysis. We will encounter this in the corresponding section. The order of the search path is sometimes important. For Epicalc users, it is recommended that any additional library should be called early in the session of R, i.e. before reading in and attaching to a data frame. This is to make sure that the active dataset will be in the second search position. More details on this will be discussed in Chapter 4.
Using R
A basic but useful purpose of R is to perform simple calculations.
> 1+1 [1] 2
When you type '1+1' and hit the <Enter> key, R will show the result of the calculation, which is equal to 2. For the square root of 25:
> sqrt(25) [1] 5
The wording in front of the left round bracket is called a 'function'. The entity inside the bracket is referred to as the function's 'argument'. Thus in the last example, 'sqrt()' is a function, and when imposed on 25, the result is 5. To find the value of e:
> exp(1) [1] 2.718282
Exponentiation of 1 results in the value of e, which is about 2.7. Similarly, the exponential value of -5 or e-5 would be
> exp(-5) [1] 0.006738
Syntax of R commands
R will compute only when the commands are syntactically correct. For example, if the number of closed brackets is fewer than the number of opened ones and the <Enter> key is pressed, the new line will start with a '+' sign, indicating that R is waiting for completion of the command. After the number of closed brackets equals the number of opened ones, computation is carried out and the result appears.
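As a minimal illustration (not part of the original text), pressing <Enter> before closing the bracket of a call to the natural log function produces the continuation prompt, and the result appears only after the bracket is closed:

> log(3.2
+ )
[1] 1.163151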
However, if the number of closed brackets exceeds the number of opened ones, the result is a syntax error, i.e. a 'computer grammatical' error.
> log(3.2))
Error: syntax error
R objects
In the above simple calculation, the results are immediately shown on the screen and are not stored. To perform a calculation and store the result in an object type:
> a = 3 + 5
We can check whether the assignment was successful by typing the name of the newly created object:
> a [1] 8
For ordinary users, there is no obvious difference between the use of = and <-. The difference applies at the R programming level and will not be discussed here. Although <- is slightly more awkward to type than =, the former technique is recommended to avoid any confusion with the comparison operator (==). Notice that there is no space between the two components of the assignment operator <-. Now create a second object 'b'.
> b <- sqrt(36)
We can also compute the value on the left and assign the result to a new object 'c' on the right.
> a + 3*b -> c
> c
[1] 26
R does not recognise '3b'. The * symbol is needed, which indicates multiplication.
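The failing command that the next sentence refers to is missing here; it presumably involved an object that had never been created, along the lines of the following sketch (the exact error wording may differ by R version):

> a + qwert
Error: object "qwert" not found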
What is typed in is syntactically correct. The problem is that 'qwert' has not been defined. A dot can also be used as a delimiter for an object name.
> baht.per.dollar <- 40
> baht.per.dollar
[1] 40
In conclusion, when one types anything at the R console, the program will try to show the value of that object. If the signs =, <- or -> are encountered, the value will be stored in the object on the left of '=' and '<-', or on the right-hand side of '->'.
But
> 3*2 == 3^2 [1] FALSE
Note that we need two equals signs to check equality but only one for assignment.
> 3*2 < 3^2 [1] TRUE
Note that
> (FALSE & TRUE) == (TRUE & FALSE) [1] TRUE
> TRUE == 1
[1] TRUE
> FALSE == 0
[1] TRUE
> (3*3 == 3^2) + (9 > 8)
[1] 2
Each of the values in the brackets is TRUE, which is equal to 1. The addition of two TRUE objects results in a value of 2. However,
> 3*3 == 3^2 + 9 > 8
Error: syntax error in "3*3 == 3^2 + 9 >"
This is because two comparison operators cannot be chained in a single expression; the order of evaluation must be made explicit. Therefore, it is always better to use brackets to specify the exact sequence of computation.

Let's leave R for the time being. Answer "Yes" to the question: "Save workspace image?". Please remember that answering "No" is the preferred response in this book, as we recommend typing
> q("no")
to end each R session. Responding "Yes" here is just an exercise in understanding the concept of workspace images, which follows in chapter 2.
References
An Introduction to R. ISBN 3-900051-12-7. R Language Definition. ISBN 3-900051-13-5. Both references above can be downloaded from the CRAN web site.
Exercises
Problem 1. The formula for the sample size of a descriptive survey is

    n = 1.96² π (1 − π) / δ²

where n is the sample size, π is the prevalence in the population (not to be confused with the constant pi), and δ is half the width of the 95% confidence interval (precision). Compute the required sample size if the prevalence is estimated to be 30% of the population and the 95% confidence interval is not farther from the estimated prevalence by more than 5%.
Problem 2. Change the above prevalence to 5% and suppose each side of the 95% confidence interval is not farther from the estimated prevalence by more than 2%.
Problem 3. The term 'logit' denotes 'loge{P/(1-P)}' where P is the risk or prevalence of a disease. Compute the logits from the following prevalences: 1%, 10%, 50%, 90% and 100%.
Chapter 2: Vectors
In the previous chapter, we introduced simple calculations and storage of the results of the calculations. In this chapter, we will learn slightly more complicated issues.
When R is started in a folder containing a previously saved workspace, a message such as "[Previously saved workspace restored]" appears in the console. This means that R has restored the commands from the previous R session (the history) and the objects stored from that session. Press the up arrow key and you will see the previous commands (both correct and incorrect ones). Press <Enter> following a command and the results will come up as if you had continued to work in the previous session.
> a
[1] 8
> A
[1] "Prince of Songkla University"
Both 'a' and 'A' are retained from the previous session.
Note: The image saved from the previous session contains only objects in the '.GlobalEnv', which is the first position in the search path. The whole search path is not saved. For example, any libraries manually loaded in the previous session need to be reloaded. However, the Epicalc library is automatically loaded every time we start R (from the setting of the Rprofile.site file that we modified in the previous chapter). Therefore, under this setting, regardless of whether the workspace image has been saved in the previous session or not, Epicalc will always be in the search path.
If you want to remove the objects in the environment and the history, quit R
without saving. Go to the 'start in' folder and delete the two files .Rhistory and .Rdata. Then restart R. There should be no message indicating restoration of previously saved workspace and no history of previous commands.
Concatenation
Objects of the same type, i.e. numeric with numeric, string with string, can be concatenated. In fact, a vector is an object containing concatenated, atomised (no more divisible) objects of the same type. To concatenate, the function 'c()' is used with at least one atomised object as its argument. Create a simple vector having the integers 1, 2 and 3 as its elements.
> c(1,2,3) [1] 1 2 3
This vector has three elements: 1, 2 and 3. Press the up arrow key to reshow this command and type a right arrow to assign the result to a new object called 'd'. Then have a look at this object.
> c(1,2,3) -> d > d
The function 'rep' is used to replicate the values of its argument. For regular sequences of numbers, the function 'seq' is used; a short example of each follows.
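The original examples at this point are not shown above; a minimal sketch of both functions (values chosen only for illustration) is:

> rep(9, times=5)
[1] 9 9 9 9 9
> seq(from=1, to=10, by=2)
[1] 1 3 5 7 9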
In this case 'seq' is a function with three arguments 'from', 'to' and 'by'. The function can be executed with at least two parameters, 'from' and 'to', since the 'by' parameter has a default value of 1 (or -1 if 'to' is less than 'from').
> seq(10, 23)
 [1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23
> seq(10, -3)
 [1] 10  9  8  7  6  5  4  3  2  1  0 -1 -2 -3
The order of the arguments 'from', 'to' and 'by' is assumed if the words are omitted. When explicitly given, the order can be changed.
> seq(by=-1, to=-3, from=10)
This rule of argument order and omission applies to all functions. For more details on 'seq' use the help feature.
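The vector 'x' used in the next few examples is not defined above; judging from the outputs that follow, it was presumably built with 'seq', stepping by 7 from 3 up to at most 100:

> x <- seq(from=3, to=100, by=7)
> x
 [1]  3 10 17 24 31 38 45 52 59 66 73 80 87 94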
In fact, the vector does not end with 100, but rather 94, since a further step would result in a number that exceeds 100.
> x[5] [1] 31
The number inside the square brackets '[]' is called a subscript. It denotes the position or selection of the main vector. In this case, the value in the 5th position of the vector 'x' is 31. If the 4th, 6th and 7th positions are required, then type:
> x[c(4,6,7)] [1] 24 38 45
Note that in this example, the object within the subscript can be a vector, thus the concatenate function 'c()' is needed, to comply with the R syntax. The following would not be acceptable:
> x[4,6,7] Error in x[4, 6, 7] : incorrect number of dimensions
A minus sign in front of the subscript vector denotes removal of the elements of 'x' that correspond to those positions specified by the subscript vector. Similarly, a string vector can be subscripted.
> B[2] [1] "Prince of Songkla University"
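The selection of the even-numbered elements of 'x' discussed in the next paragraph was presumably done with a logical subscript comparing each element with its truncated half:

> x[x/2 == trunc(x/2)]
[1] 10 24 38 52 66 80 94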
The function 'trunc()' means to truncate or remove the decimals. The condition that 'x' divided by 2 is equal to its truncated value is true iff (if and only if) 'x' is an even number. The same result can be obtained by using the 'subset' function.
> subset(x, x/2==trunc(x/2))
If only odd numbers are to be chosen, then the comparison operator can simply be changed to !=, which means 'not equal'.
> subset(x, x/2!=trunc(x/2)) [1] 3 17 31 45 59 73 87
The operator ! prefixing an equals sign means 'not equal', thus all the chosen numbers are odd. Similarly, to choose the elements of 'x' which are greater than 30 type:
> x[x>30] [1] 31 38 45 52 59 66 73 80 87 94
Non-numeric vectors
Let's create a string vector called 'person' containing 11 elements.
> person <- c("A","B","C","D","E","F","G","H","I","J","K")
Character types are used for storing names of individuals. To store the sex of the person, initially numeric codes are given: 1 for male, 2 for female, say.
> sex <- c(1,2,1,1,1,1,1,1,1,1,2)
> class(sex)
[1] "numeric"
> sex1 <- as.factor(sex)    # Creating sex1 from sex
The function 'as.factor' coerces the argument 'sex' to be a 'factor', which is a categorical data type in R.
> sex1 [1] 1 2 1 1 1 1 1 1 1 1 2 Levels: 1 2
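The vector of ages used below is defined on a page missing from this extract; from the outputs that follow (in particular the data frame shown later in this section), it was presumably:

> age <- c(10,23,48,56,15,25,40,21,60,59,80)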
To sort 'age':
> sort(age) [1] 10 15 21 23 25 40 48 56 59 60 80
The function 'sort' creates a vector with the elements in ascending order. However, the original vector is not changed.
> median(age) [1] 40
The median of the ages is 40. To get other quantiles, the function 'quantile' can be used.
> quantile(age)
   0%   25%   50%   75%  100% 
 10.0  22.0  40.0  57.5  80.0 
By default (if other arguments omitted), the 0th, 25th, 50th, 75th and 100th percentiles are displayed. To obtain the 30th percentile of age, type:
> quantile(age, prob = .3) 30% 23
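The command that groups the ages, referred to in the next paragraph, is likewise not shown; given the levels displayed below, it was presumably a call to 'cut' with breaks at 0, 15, 60 and 100:

> agegr <- cut(age, breaks=c(0,15,60,100))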
This creates 3 distinct groups, which we can call 'children', 'adults' and 'elderly'. Note that the minimum and maximum of the arguments in 'cut' are the outer most boundaries.
> is.factor(agegr)
[1] TRUE
> attributes(agegr)
$levels
[1] "(0,15]"   "(15,60]"  "(60,100]"

$class
[1] "factor"
The object 'agegr' is a factor, which stores the values as integers. We can check the correspondence of 'age' and 'agegr' using the 'data.frame' function, which combines (but not saves) the two variables in a data frame and displays the result.
> data.frame(age, agegr)
   age    agegr
1   10   (0,15]
2   23  (15,60]
3   48  (15,60]
4   56  (15,60]
5   15   (0,15]
6   25  (15,60]
7   40  (15,60]
8   21  (15,60]
9   60  (15,60]
10  59  (15,60]
11  80 (60,100]
Note that the 5th person, who is 15 years old, is classified into the first group and the 9th person, who is 60 years old, is in the second group. The label of each group ends with a square bracket, indicating that the last number is included in the group (inclusive cutting). A round bracket in front of the group is exclusive, i.e. not including that value.
> table(agegr)
agegr
  (0,15]  (15,60] (60,100] 
       2        8        1 
There are two children, eight adults and one elderly person.
> summary(agegr)    # same result as the preceding command
> class(agegr)
[1] "factor"
The age group vector is a factor or categorical vector. It can be transformed into a simple numeric vector using the 'unclass' function, which is explained in more detail in chapter 3.
> agegr1 <- unclass(agegr)
> summary(agegr1)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  1.000   2.000   2.000   1.909   2.000   3.000 
> class(agegr1)
[1] "integer"
Categorical variables, for example sex, race and religion, should always be factored. Age group in this example is a factor, although it has an ordered pattern. Declaring a vector as a factor is very important, particularly when performing regression analysis, which will be discussed in future chapters. The unclassed value of a factor is used when the numeric (or integer) values of the factor are required. For example, if we have a dataset containing a 'sex' variable classed as a factor, and we want to draw a scatter plot in which the colours of the dots are classified by the different levels of 'sex', the colour argument to the plot function would be 'col = unclass(sex)'. This will be demonstrated in future chapters.
Missing values
Missing values usually arise from data not being collected. For example, missing age may be due to a person not giving his or her age. In R, missing values are denoted by 'NA', abbreviated from 'Not Available'. Any calculation involving NA will result in NA.
> b <- NA
> b * 3
[1] NA
> c <- 3 + b
> c
[1] NA
As an example of a missing value of a person in a vector series, type the following commands:
> height <- c(100,150,NA,160)
> height
[1] 100 150  NA 160
> weight <- c(33, 45, 60, 55)
> weight
[1] 33 45 60 55
Among four subjects in this sample, all weights are available but one height is missing.
> mean(weight)
[1] 48.25
> mean(height)
[1] NA
We can get the mean weight but not the mean height, although the length of this vector is available.
> length(height) [1] 4
In order to get the mean of all available elements, the NA elements should be removed.
> mean(height, na.rm=TRUE) [1] 136.6667
The term 'na.rm' means 'not available (values) removed', and gives the same result as first omitting the missing values with the function 'na.omit()'.
> length(na.omit(height))
[1] 3
> mean(na.omit(height))
[1] 136.6667
Thus 'na.omit()' is an independent function omitting missing values in the argument object. 'na.rm = TRUE' is an internal argument of descriptive statistics for a vector.
Exercises
Problem 1. Compute the value of 1² + 2² + 3² + ... + 100².
Problem 2. Let 'y' be a series of integers running from 1 to 1,000. Compute the sum of the elements of 'y' which are multiples of 7.
Problem 3. The heights (in cm) and weights (in kg) of 11 family members are shown below:
            ht   wt
Niece      120   22
Son        172   52
GrandPa    163   71
Daughter   158   51
Yai        153   51
GrandMa    148   60
Aunty      160   50
Uncle      170   67
Mom        155   53
Dad        167   64
Create a vector 'ht' from the second column of the dataset, having the first column as its 'names' attribute. Compute the body mass index (BMI) of each person where BMI = weight / height². Identify the persons who have the lowest and highest BMI and calculate the standard deviation of the BMI.
Chapter 3: Arrays, Matrices and Tables

Real data for analysis rarely comes as a vector. In most cases, it comes as a dataset containing many rows or records and many columns or variables. In R, these datasets are called 'data frames'. Before going into data frames, let us go through something simpler such as arrays, matrices and tables. Gaining concepts and skills in handling these types of objects will empower the user to manipulate the data very effectively and efficiently in the future.
Arrays
An array may generally mean something finely arranged. In mathematics and computing, an array consists of values arranged in rows and columns. A dataset is basically an array. Most statistical packages can handle only one dataset or array at a time. R has a special ability to handle several arrays and datasets simultaneously. This is because R is an object-oriented program. Moreover, R interprets rows and columns in a very similar manner.
> a <- 1:10
> a
 [1]  1  2  3  4  5  6  7  8  9 10
Folding a vector to make an array is simple. Just declare or re-dimension the number of rows and columns as follows:
> dim(a) <- c(2,5)
> a
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    3    5    7    9
[2,]    2    4    6    8   10
The numbers in the square brackets are the row and column subscripts. The command 'dim(a) <- c(2,5)' folds the vector into an array consisting of 2 rows
and 5 columns.
The command a[,] and a[] both choose all rows and all columns of 'a' and thus are the same as 'a'. An array may also have 3 dimensions.
> b <- 1:24
> dim(b) <- c(3,4,2)    # or b <- array(1:24, c(3,4,2))
> b
, , 1

     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

, , 2

     [,1] [,2] [,3] [,4]
[1,]   13   16   19   22
[2,]   14   17   20   23
[3,]   15   18   21   24
The first value of the dimension refers to the number of rows, followed by number of columns and finally the number of strata. Elements of this three-dimensional array can be extracted in a similar way.
> b[1:3,1:2,2]
     [,1] [,2]
[1,]   13   16
[2,]   14   17
[3,]   15   18
Vector binding
Apart from folding a vector, an array can be created from vector binding, either by column (using the function 'cbind') or by row (using the function 'rbind'). Let's use the 'fruit' vector as an example.
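The vector 'fruit' itself is created on a page missing from this extract; judging from the row names shown later in the chapter, it was presumably a named numeric vector along these lines:

> fruit <- c(5, 10, 1, 20)
> names(fruit) <- c("orange", "banana", "durian", "mangosteen")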
Suppose a second person also buys fruit but in different amounts to the first person.
> fruit2 <- c(1, 5, 3, 4)
To bind 'fruit' with 'fruit2', which are vectors of the same length, type:
> Col.fruit <- cbind(fruit, fruit2)
Transposition of an array
Array transposition means exchanging the rows and columns of the array. In the above example, 'Row.fruit' is a transposition of 'Col.fruit' and vice versa.
> t(Col.fruit) > t(Row.fruit)
Suppose 'fruit3' is created but with one more kind of fruit added:
> fruit3 <- c(20, 15, 3, 5, 8)
> cbind(Col.fruit, fruit3)
           fruit fruit2 fruit3
orange         5      1     20
banana        10      5     15
durian         1      3      3
mangosteen    20      4      5
Warning message:
number of rows of result is not a multiple of vector length (arg 2) in: cbind(Col.fruit, fruit3)
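In the output above, the fifth element of 'fruit3' is simply dropped, with a warning. The complementary case, binding a vector with fewer elements than the array has rows, was illustrated in the original with a vector 'fruit4'; that command is not shown here, but a minimal sketch (hypothetical values) is:

> fruit4 <- c(1, 2, 3)    # one element fewer than the number of rows
> cbind(Col.fruit, fruit4)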
Note that 'fruit4' is shorter than the length of the first vector argument. In this situation R will automatically recycle the elements of the shorter vector, inserting the first element of 'fruit4' into the fourth row, with a warning.
String arrays
Similar to a vector, an array can consist of character string objects.
> Thais <- c("Somsri", "Daeng", "Somchai", "Veena")
> dim(Thais) <- c(2,2); Thais
     [,1]     [,2]     
[1,] "Somsri" "Somchai"
[2,] "Daeng"  "Veena"  
Note that the elements are folded in column-wise, not row-wise, sequence.
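The character vector of cities and the numeric vector of postcodes used below are defined on a page missing from this extract; from the outputs that follow, they were presumably:

> cities <- c("Bangkok", "Hat Yai", "Chiangmai")
> postcode <- c(10000, 90110, 50000)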
For a single vector, there are many ways to identify the position of a specific element. For example, to find the position of "Hat Yai" in the 'cities' vector, the following four commands all give the same result.
> (1:length(cities))[cities=="Hat Yai"]
> (1:3)[cities=="Hat Yai"]
> subset(1:3, cities=="Hat Yai")
> which(cities=="Hat Yai")
Note that when a character vector is bound with a numeric vector, the numeric vector is coerced into a character vector, since all elements of an array must be of the same type.
> cbind(cities, postcode)
     cities      postcode
[1,] "Bangkok"   "10000" 
[2,] "Hat Yai"   "90110" 
[3,] "Chiangmai" "50000" 
Matrices
A matrix is a two-dimensional array. It has several mathematical properties and operations that are used behind statistical computations such as factor analysis, generalized linear modelling and so on. Users of statistical packages do not need to deal with matrices directly but some of the results of the analyses are in matrix form, both displayed on the screen that can readily be seen and hidden as a 'returned object' that can be used later. For exercise purposes, we will examine the covariance matrix, which is an object returned from a regression analysis in a future chapter.
Tables
A table is an array emphasizing the relationship between values among cells. Usually, a table is the result of an analysis, e.g. a cross-tabulation between two categorical variables (using the function 'table') or a table of statistics of a subgroup (using the function 'tapply'). Suppose six patients, who are male, female, female, male, female and female, attend a clinic. If the code is 1 for male and 2 for female, then to create this in R type:
> sex <- c(1,2,2,1,2,2)
Similarly, if we characterize the ages of the patients as being either young or old, where the first three patients are young, the next two are old and the last one is young, and the codes for this age classification are 1 for young and 2 for old, then we can create this in R by typing:
> age <- c(1,1,1,2,2,1)
Suppose also that these patients had one to six visits to the clinic, respectively.
> visits <- c(1,2,3,4,5,6)
> table1 <- table(sex, age); table1
   age
sex 1 2
  1 1 1
  2 3 1
Note that 'table1' gives counts of each combination of the vectors 'sex' and 'age' while 'table2' (below) gives the sum of the number of 'visits' based on the four different combinations of 'sex' and 'age'.
> table2 <- tapply(visits, list(Sex=sex, Age=age), FUN=sum)
> table2
   Age
Sex  1 2
  1  1 4
  2 11 5
Although 'table1' has class 'table', the class of 'table2' is still a 'matrix'. One can convert it simply using the function 'as.table'.
> table2 <- as.table(table2)
In contrast, applying 'summary' to a non-table array produces descriptive statistics of each column.
> is.table(Col.fruit)
[1] FALSE
> summary(Col.fruit)
     fruit          fruit2     
 Min.   : 1.0   Min.   :1.00   
 1st Qu.: 4.0   1st Qu.:2.50   
> fruits.table <- as.table(Col.fruit)
> summary(fruits.table)
Number of cases in table: 49 
Number of factors: 2 
Test for independence of all factors:
        Chisq = 6.675, df = 3, p-value = 0.08302
        Chi-squared approximation may be incorrect
> fisher.test(fruits.table)

        Fisher's Exact Test for Count Data

data:  fruits.table 
p-value = 0.07728
alternative hypothesis: two.sided
Lists
An array forces all cells from different columns and rows to be the same type. If any cell is a character then all cells will be coerced into a character. A list is different. It can be a mixture of different types of objects compounded into one entity. It can be a mixture of vectors, arrays, tables or any object type.
> list1 <- list(a=1, b=fruit, c=cities)
> list1
$a
[1] 1

$b
[1]  5 10  1 20

$c
[1] "Bangkok"   "Hat Yai"   "Chiangmai"
Note that the arguments of the function 'list()' consist of a series of new objects being assigned a value from existing objects or values. When properly displayed, each new name is prefixed with $. The creation of a list is not a common task in ordinary data analysis. However, a list is sometimes required in the arguments to some functions. Removing objects from the computer memory also requires a list as the argument to the function 'rm'.
> rm(list=c("list1", "fruit"))
This is equivalent to
> rm(list1); rm(fruit)
A list may also be returned from the results of an analysis, but appears under a
special 'class'.
> sample1 <- rnorm(10)
The 'qqnorm' function plots the sample quantiles, or the observed sorted values, against the theoretical quantiles, or the corresponding expected values if the data were perfectly normally distributed. It is used here for the sake of demonstration of the 'list' function only.
> list2 <- qqnorm(sample1)
(Output not reproduced: 'list2' is a list with two components, '$x', the theoretical quantiles, and '$y', the corresponding observed values.)
The command 'qqnorm(sample1)' is used as a graphical method for checking normality. While it produces a graph on the screen, it also gives a list of the x and y coordinates, which can be saved and used for further calculation. Similarly, 'boxplot(sample1)' returns another list of objects to facilitate plotting of a boxplot.
Exercises Problem 1.
Demonstrate a few simple ways to create the array below
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]    1    2    3    4    5    6    7    8    9    10
[2,]   11   12   13   14   15   16   17   18   19    20
Problem 2.
Extract from the above array the odd numbered columns.
Problem 3.
Cross-tabulation between the status of a disease and a putative exposure has the following results:
Non-diseased 20 22
Create the table in R and perform chi-squared and Fisher's exact tests.
Chapter 4: Data Frames

In the preceding chapter, examples were given on arrays and lists. In this chapter, data frames will be the main focus. For most researchers, these are sometimes called 'datasets'. However, a dataset can contain more than one data frame. It is the real data that most researchers have to work with.
In Excel, a very commonly used spreadsheet program, the data can be saved in '.csv' (comma separated values) format. This is actually the best method to interface between Excel spreadsheet data files and R. Simply open the Excel file and 'save as' the csv format. As an example, the file 'csv1.xls' is originally an Excel spreadsheet. After 'save as' into csv format, the output file is called 'csv1.csv', the contents of which are:
"name","sex","age" "A","F",20 "B","M",30 "C","F",40
Note that the characters are enclosed in quotes and the delimiters (variable separators) are commas. Sometimes the file may not contain quotes, as in the file 'csv2.csv'.
name,sex,age
A,F,20
B,M,30
C,F,40
For both files, the R command to read in the dataset is the same.
> a <- read.csv("csv1.csv", as.is=TRUE)
> a
  name sex age
1    A   F  20
2    B   M  30
3    C   F  40
The argument 'as.is=TRUE' keeps all characters as they are. Had this not been specified, the characters would have been coerced into factors. The variable 'name' should not be factored but 'sex' should. The following command should therefore be typed:
> a$sex <- factor(a$sex)
Note firstly that the object 'a' has class data frame and secondly that the names of the variables within the data frame 'a' must be referenced using the dollar sign notation. If not, R will inform you that the object 'sex' cannot be found. For files with white space (spaces and tabs) as the separator, such as in the file 'data1.txt', the command to use is 'read.table'.
> a <- read.table("data1.txt", header=TRUE, as.is=TRUE)
Some files have a fixed width for each variable and no delimiter between values, such as the file 'data2.txt' used below. To read in such a file, the function 'read.fwf' is preferred. The first line, which is the header, must be skipped. The width of each variable and the column names must be specified by the user.
> a <- read.fwf("data2.txt", skip=1, width=c(1,1,2), col.names = c("name", "sex", "age"), as.is=TRUE)
The function 'rm' stands for 'remove'; for example, 'rm(list=ls())' removes all ordinary objects from the workspace. To see what objects are currently in the workspace type:
> ls()
character(0)
The command 'ls()' shows a list of objects in the current workspace. The name(s) of objects have class character. The result 'character(0)' means that there are no ordinary objects in the environment. If you do not see 'character(0)' in the output but something else, it means those objects were left over from the previous R session. This will happen if you agreed to save the workspace image before quitting R. To avoid this, quit R and delete the file '.Rdata', which is located in your working folder, or rename it if you would like to keep the workspace from the previous R session. Alternatively, to remove all objects in the current workspace without quitting R, type
> zap()
This command will delete all ordinary objects from R's memory. Ordinary objects include data frames, vectors, arrays, etc. Function objects are spared deletion.
Typing 'data()' will show the names and descriptions of several data frames in various packages, such as 'datasets' and 'epicalc'. In this book, most of the examples use data frames from the Epicalc package.
Reading in data
Let's try to load an Epicalc dataset.
> data(Familydata)
The command 'data' loads a dataset into the R workspace. If there is no error you can recheck the objects available in memory.
> ls() [1] "Familydata"
To get the names of the variables (in column order) of the data frame, you can type:
> names(Familydata) [1] "code" "age" "ht" "wt" "money" "sex"
> summary(Familydata)

The function 'summary' is an R base function. It gives summary statistics of each variable. For continuous variables such as 'age', 'wt', 'ht' and 'money', nonparametric descriptive statistics (minimum, first quartile, median, third quartile and maximum) as well as the mean (parametric) are shown. There is no information on the standard deviation or the number of observations. For categorical variables, such as 'sex', a frequency tabulation is displayed. The first variable, 'code', is a character variable, so there is no summary for this variable. Compare this result with the version of summary statistics using the function 'summ' from the Epicalc package.
> summ(Familydata)
Anthropometric and financial data of a hypothetical family
No. of observations = 11

  Var. name Obs. mean    median s.d.    min. max.
1 code
2 age       11   45.73   47     24.11   6    80
3 ht        11   157.18  160    14.3    120  172
4 wt        11   54.18   53     12.87   22   71
5 money     11   1023.18 500    1499.55 5    5000
6 sex       11   1.364   1      0.505   1    2
The function 'summ' gives a more concise output, showing one variable per line. The number of observations and standard deviations are included in the report replacing the first and third quartile values in the original 'summary' function from the R base library. Descriptive statistics for factor variables use their unclassed values. The values 'F' and 'M' for the variable 'sex' have been replaced by the codes
1 and 2, respectively. This is because R interprets factor variables in terms of levels, where each level is stored as an integer starting from 1 for the first level of the factor. Unclassing a factor variable converts the categories or levels into integers. More discussion about factors will appear later. In the output above, the same statistic from different variables is lined up in the same column. Information on each variable is complete, with no missing values, since the number of observations is 11 for every variable. The minimum and maximum are shown close to each other, enabling the range of each variable to be easily determined. In addition, summary statistics for individual variables can be obtained with either function. The results are similar to the summary statistics of the whole dataset. Try the following commands:
> summary(Familydata$age)
> summ(Familydata$age)
> summary(Familydata$sex)
> summ(Familydata$sex)
Note that 'summ', when applied to a variable, automatically gives a graphical output. This will be examined in more detail in subsequent chapters.
Codebook
The function 'summ' gives summary statistics of each variable, line by line. This is very useful for numeric variables but less so for factors, especially those with more than two levels. Epicalc has another function that gives summary statistics for a numeric variable and a frequency table with level labels and codes for factors.
> codebook(Familydata)
Unlike the results from the 'summ' function, 'codebook' deals with each variable in the data frame in more detail. If a variable label exists, it is given in the output. For factors, the name of the label table for the levels is shown and the codes for the levels are displayed in a column, followed by the frequency and percentage distribution. The function is therefore very useful. The output can be used to write a table of baseline data for a manuscript directly from the data frame. Note that the label table for the codes of a factor can easily be set up when preparing data entry in Epidata via the '.chk' file. If the data is exported in Stata format, the label table of each variable will be exported along with the dataset. The label tables are stored as attributes of the corresponding data frame. The Epicalc 'codebook' command fully utilizes this attribute, allowing users to see and document the coding scheme for future reference.
Note that subscripting the data frame 'Familydata' with '$' and the variable name will extract only that variable. This is because a data frame is also a kind of list (see the previous chapter).
> typeof(Familydata)
[1] "list"
To extract more than one variable, we can use either the index number of the variable or the name. For example, if we want to display only the first 3 records of ht, wt and sex, then we can type:
> Familydata[1:3, c(3,4,6)]
   ht wt sex
1 120 22   F
2 172 52   M
3 163 71   M

We could also type:

> Familydata[1:3, c("ht","wt","sex")]
   ht wt sex
1 120 22   F
2 172 52   M
3 163 71   M
The condition in the subscript can also be a selection criterion, such as selecting the females.
> Familydata[Familydata$sex=="F",] code age ht wt money sex 1 K 6 120 22 5 F 4 I 18 158 51 200 F 5 C 69 153 51 300 F 6 B 72 148 60 500 F 7 G 46 160 50 500 F 8 H 42 163 55 600 F 10 F 47 155 53 2000 F
Note that the conditional expression must be followed by a comma to indicate selection of all columns. In addition, two equals signs are needed in the conditional expression. Recall that one equals sign represents assignment. Another method of selection is to use the 'subset()' function.
> subset(Familydata, sex=="F")
Note that the commands to select a subset do not have any permanent effect on the data frame. The user must save this into a new object if further use is needed.
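For example, the selected rows could be kept in a new data frame (the name 'Familydata.female' is arbitrary):

> Familydata.female <- subset(Familydata, sex=="F")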
A new variable can also be added to the data frame, for example the base-10 logarithm of the pocket money:

> Familydata$log10money <- log10(Familydata$money)

The data frame is now changed, with a new variable 'log10money' added. This can be checked by the following commands.
> names(Familydata)
> summ(Familydata)
Anthropometric and financial data of a hypothetical family
No. of observations = 11

  Var. name  Obs. mean    median s.d.    min. max.
1 code
2 age        11   45.73   47     24.11   6    80
3 ht         11   157.18  160    14.3    120  172
4 wt         11   54.18   53     12.87   22   71
5 money      11   1023.18 500    1499.55 5    5000
6 sex        11   1.364   1      0.505   1    2
7 log10money 11   2.51    2.7    0.84    0.7  3.7
Note again that this only displays the desired subset and has no permanent effect on
the data frame. The following command permanently removes the variable and returns the data frame back to its original state.
> Familydata$log10money <- NULL
Assigning a NULL value to a variable in the data frame is equivalent to removing that variable. At this stage, it is possible that you may have made some typing mistakes. Some of them may be serious enough to make the data frame 'Familydata' distorted or even not available from the environment. You can always refresh the R environment by removing all objects and then read in the dataset afresh.
> zap() > data(Familydata)
The general explanation of 'search()' is given in Chapter 1. Our data frame is not in the search path. If we try to use a variable in a data frame that is not in the search path, an error will occur.
> summary(age)
Error in summary(age) : Object "age" not found
To make the variables directly accessible, the data frame can be attached to the search path:

> attach(Familydata)

The search path now contains the data frame in the second position.
> search () [1] ".GlobalEnv" [4] "package:datasets" [7] "package:splines" [10] "package:utils" [13] "Autoloads" "Familydata" "package:epicalc" "package:graphics" "package:foreign" "package:base" "package:methods" "package:survival" "package:grDevices" "package:stats"
Since 'age' is in 'Familydata', which is now in the search path, computation of statistics on 'age' is now possible.
> summary(age)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   6.00   30.00   47.00   45.73   63.50   80.00
Attaching a data frame to the search path is similar to loading a package using the 'library' function. The attached data frame, like a loaded package, is actually read into R's memory and is resident there until it is detached. This is true even after the original data frame has been removed from memory.
> rm(Familydata) > search ()
The data frame 'Familydata' is still in the search path allowing any variable within the data frame to be used.
> age [1] 6 16 80 18 69 72 46 42 58 47 49
Loading the same library over and over again has no effect on the search path but attaching the same data frame is possible and may eventually overload the system resources.
> data(Familydata)
> attach(Familydata)
The following object(s) are masked from Familydata (position 3):

    age code ht money sex wt
These variables are already in the second position of the search path. Attaching again creates conflicts in variable names.
> search () [1] ".GlobalEnv" [4] "package:methods" [7] "package:survival" [10] "package:grDevices" [13] "package:stats" "Familydata" "package:datasets" "package:splines" "package:utils" "Autoloads" "Familydata" "package:epicalc" "package:graphics" "package:foreign" "package:base"
The search path now contains two objects named 'Familydata' in positions 2 and 3. Both have more or less the same set of variables with the same names. Every time a command is typed in and the <Enter> key is pressed, the system will first check whether it is an object in the global environment. If not, R checks whether it is a component of the remaining search path, that is, a variable in an attached data frame or a function in any of the loaded packages. Repeatedly loading the same library does not add to the search path because R knows that the contents in the library do not change during the same session. However, a data frame can change at any time during a single session, as seen in the previous section where the variable 'log10money' was added and later removed. The data frame attached at position 2 may well be different to the object of the same name in another search position. Confusion arises if an independent object (e.g. vector) is created outside the data frame (in the global environment) with the same
name as the data frame, or if two different data frames in the search path each contain a variable with the same name. The consequences can be disastrous. In addition, all elements in the search path occupy system memory. The data frame 'Familydata' in the search path occupies the same amount of memory as the one in the current workspace. Doubling of memory is not a serious problem if the data frame is small, but repeatedly attaching a large data frame may cause R to fail due to insufficient memory. For these reasons it is good practice, firstly, to remove a data frame from the search path once it is no longer needed. Secondly, remove any unwanted objects from the environment using 'rm(list=ls())'. Thirdly, do not define a new object (say a vector or matrix) with the same name as a data frame in the search path. For example, we should not create a new vector called 'Familydata' as we already have the data frame 'Familydata' in the search path. Detach both versions of 'Familydata' from the search path.
> detach(Familydata) > detach(Familydata)
Note that the command 'detachAllData()' in Epicalc removes all attachments. The command 'zap()' does the same, but in addition removes all non-function objects simultaneously. In other words, the command 'zap()' is equivalent to 'rm(list=lsNoFunction())' followed by 'detachAllData()'.
The command 'use()' reads in a data file from Dbase (.dbf), Stata (.dta), SPSS (.sav), EpiInfo (.rec) and comma separated value (.csv) formats, as well as those that come pre-supplied with R packages. The 'Familydata' data frame comes with Epicalc. If you want to read a dataset from a Stata file format, such as "family.dta", simply type use("family.dta") without typing the 'data' command above. The dataset is copied into memory in a default data frame called '.data'. If '.data' already exists, it will be overwritten by the new data frame. The original 'Familydata' object, however, remains.
In fact, all the datasets in Epicalc were originally in one of the file formats .dta, .rec, .csv or .txt. These datasets in their original format can be downloaded from http://medipe.psu.ac.th/~edward. If you download the files and set the working directory for R to the default folder "C:\RWorkplace", you do not need to type 'data(Familydata)' and 'use(Familydata)', but instead simply type:
use("family.dta")
The original Stata file will be read into R and saved as '.data'. If successful, it will make no difference whether you type 'data(Familydata)' followed by 'use(Familydata)' or simply 'use("family.dta")'. In most parts of the book we tell you to type 'data(Familydata)' and 'use(Familydata)' because the dataset is already included in the Epicalc package, which is readily available by the time you are practising at this point. However, putting "filename.extension" as the argument, such as 'use("family.dta")' in this chapter or 'use("timing.dta")' in the next chapter, and so forth, may give you a real sense of reading actual files instead of the approach used in this book. The command 'use()' also automatically puts the data frame, '.data', into the search path. Type:
> search ()
You will see that '.data' is in the second position of the search path. Type:
> ls()
You will see only the 'Familydata' object, and not '.data', because the name of this object starts with a dot and it is therefore classified as a hidden object. In order to show that '.data' is really in the memory, type:
> ls(all=TRUE)
If you now type 'zap()' and then 'ls(all=TRUE)' again, the object 'Familydata' is gone but '.data' is still there. However, the attachment to the search path is now lost:
> search ()
The advantage of 'use()' is not only that it saves time by making 'attach' and 'detach' unnecessary, but also that '.data' is placed in the search path as well as being made the default data frame. Thus 'des()' is the same as 'des(.data)', 'summ()' is equivalent to 'summ(.data)' and 'codebook()' is equivalent to 'codebook(.data)'.
The sequence of commands 'zap()', 'data(datafile)', 'use(datafile)', 'codebook()', 'des()' and 'summ()' is recommended for starting the analysis of almost all datasets in this book. A number of other commands from the Epicalc package work based on this strategy of making '.data' the default data frame and the only data frame attached to the search path (all other data frames will be detached, unless the argument 'clear=FALSE' is specified in the 'use' function). For straightforward data analysis, the command 'use()' is sufficient to create this setting. In many cases where the data that is read in needs to be modified, it is advisable to rename or copy the final data frame to '.data', then detach from the old '.data' and re-attach to the most updated one in the search path. This strategy does not have any effect on the standard functions of R. Users of Epicalc can still use the other commands of R while still enjoying the benefits of Epicalc.
Exercises
With several datasets provided with the book, use the last six commands (zap, data, use, codebook, des, summ) to have a quick look at them.
> des()
Anthropometric and financial data of a hypothetical family
No. of observations = 11

  Variable Class     Description
1 code     character
2 age      integer   Age(yr)
3 ht       integer   Ht(cm.)
4 wt       integer   Wt(kg.)
5 money    integer   Pocket money(B.)
6 sex      factor
The first line after the 'des()' command shows the data 'label', which is the descriptive text for the data frame. This is usually created by the software that was used to enter the data, such as Epidata or Stata. Subsequent lines show variable names and individual variable descriptions. The variable 'code' is a character string while 'sex' is a factor. The other variables have class integer. A character variable is not used for statistical calculations but simply for labelling purposes or for record identification. Recall that a factor is what R calls a categorical or group variable. The remaining integer variables ('age', 'ht', 'wt' and 'money') are intuitively continuous variables. The variables 'code' and 'sex' have no variable descriptions due to omission during the preparation of the data prior to data entry.
> summ()
Anthropometric and financial data of a hypothetical family
No. of observations = 11

  Var. name Obs. mean    median s.d.    min. max.
1 code
2 age       11   45.73   47     24.11   6    80
3 ht        11   157.18  160    14.3    120  172
4 wt        11   54.18   53     12.87   22   71
5 money     11   1023.18 500    1499.55 5    5000
6 sex       11   1.364   1      0.505   1    2
As mentioned in the previous chapter, the command 'summ' gives summary statistics of all variables in the default data frame, in this case '.data'. Each of the six variables has 11 observations, which means that there are no missing values in the dataset. Since the variable 'code' is class 'character' (as shown from the 'des()' command above), information about this variable is not shown. The ages of the subjects in this dataset range from 6 to 80 (years). Their heights range from 120 to 172 (cm), and their weights range from 22 to 71 (kg). The variable 'money' ranges from 5 to 5,000 (baht). The mean and median age, height and weight are quite close together indicating relatively non-skewed distributions. The variable 'money' has a mean much larger than the median signifying that the distribution is right skewed. The last variable, 'sex', is a factor. However, the statistics are based on the unclassed values of this variable. We can see that there are two levels, since the minimum is 1 and the maximum is 2. For factors, all values are stored internally as integers, i.e. only 1 or 2 in this case. The mean of 'sex' is 1.364 indicating that 36.4 percent of the subjects have the second level of the factor (in this case it is male). If a factor has more than two levels, the mean will have no useful interpretation.
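As a quick check (using base R rather than Epicalc output, and assuming '.data' is attached via 'use()' as above), the proportion in the second level can be recovered from this mean, since the unclassed codes are 1 and 2:

> mean(unclass(sex)) - 1   # proportion of males
[1] 0.3636364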
> codebook()
Anthropometric and financial data of a hypothetical family

code  : A character vector
==================
age   : Age(yr)
        obs. mean    median s.d.  min. max.
        11   45.727  47     24.11 6    80
==================
ht    : Ht(cm.)
        obs. mean    median s.d.  min. max.
        11   157.182 160    14.3  120  172
==================
wt    : Wt(kg.)
        obs. mean    median s.d.  min. max.
        11   54.182  53     12.87 22   71
==================
money : Pocket money(B.)
        obs. mean     median s.d.    min. max.
        11   1023.182 500    1499.55 5    5000
==================
sex :
Label table: sex1
   code Frequency Percent
F     1         7    63.6
M     2         4    36.4
==================
The output combines variable description with summary statistics for all numeric variables. For 'sex', which is a factor, the original label table is named 'sex1' where 1 = F and 2 = M. There are 7 females and 4 males in this family. We can also have a look at individual variables in more detail with the same commands 'des()' and 'summ()' by placing the variable name inside the brackets.
> des(code)
'code' is a variable found in the following source(s):

  Var. source  Var. order  Class      # records  Description
  .data                 1  character         11
The output tells us that 'code' is in '.data'. Suppose we create an object, also called 'code', but positioned freely outside the hidden data frame.
> code <- 1
> des(code)
'code' is a variable found in the following source(s):

  Var. source  Var. order  Class      # records  Description
  .GlobalEnv               numeric            1
  .data                 1  character         11
The output tells us that there are two 'codes'. The first is the recently created object in the global environment. The second is the variable inside the data frame, '.data'. To avoid confusion, we will delete the recently created object 'code'.
> rm(code)
After removal of 'code' from the global environment, the latest 'des()' command will describe the old 'code' variable, which is part of '.data' and remains usable. Using 'des()' with other variables shows similar results. Now try the following command:
> summ(code)
The results are similar to what we saw from 'summ' earlier. However, since the argument to the 'summ' command is a single variable, a graph is also produced showing the distribution of age. The main title of the graph contains a description of the variable after the words "Distribution of". If the variable has no description, the variable name will be presented instead. Now try the following commands:
> abc <- 1:20
> summ(abc)
Obs. mean  median s.d.  min. max.
20   10.5  10.5   5.916 1    20
The object 'abc' has a perfectly uniform distribution since the dots form a straight line. The graph produced by the command 'summ' is called a sorted dot chart. A dot chart has one axis (in this case the X-axis) representing the range of the variable. The other axis, the Y-axis, labelled 'Subject sorted by X-axis values', represents each subject or observation sorted by the values of the variable. For the object 'abc', the smallest number is 1, which is plotted at the bottom left, then 2, 3, 4 etc. The final observation is 20, which is plotted at the top right. The values increase from one observation to the next higher value. Since this increase is steady, the line is perfectly straight. To look at a graph of age again type:
> summ(age)
> axis(side=2, 1:length(age))
The 'axis' command adds tick marks and value labels on the specified axis (in this case, 'side=2' denotes the Y-axis). The ticks are placed at values of 1, 2, 3, up to 11 (which is the length of the vector age). The ticks are omitted by default since if the vector is too long, the ticks would be too congested. In this session, the ticks will facilitate discussion.
[Sorted dot chart: Distribution of Age(yr); the Y-axis, 'Subject sorted by X-axis values', is labelled 1 to 11 and the X-axis runs from 20 to 80]
To facilitate further detailed consideration, the sorted age vector is shown with the graph.
> sort(age) [1] 6 16 18 42 46 47 49 58 69 72 80
The relative increment on the X-axis from the first observation (6 years) to the second one (16 years) is larger than from the second to the third (18 years). Thus we observe a steep increase in the Y-axis for the second pair. From the 3rd observation to the 4th (42 years), the increment is even larger than the 1st one; the slope is relatively flat. In other words, there is no dot between 20 and 40 years. The 4th, 5th, 6th and 7th values are relatively close together, thus these give a relatively steep increment on the Y-axis.
> summ(ht)
Obs. mean    median s.d.   min. max.
11   157.182 160    14.303 120  172
> axis(side=2, 1:length(ht))
> sort(ht)
 [1] 120 148 153 155 158 160 163 163 167 170 172
The distribution of height as displayed in the graph is interesting. The shortest subject (120cm) is much shorter than the remaining subjects. In fact, she is a child whereas all the others are adults. There are two persons (7th and 8th records) with the same height (163cm). The increment on the Y-axis is hence vertical.
[Sorted dot chart: Distribution of Ht(cm.); Y-axis 'Subject sorted by X-axis values' labelled 1 to 11, X-axis from 120 to 170]
[Sorted dot chart: Distribution of Wt(kg.); Y-axis 'Subject sorted by X-axis values' labelled 1 to 11, X-axis from 30 to 70]
There is a higher level of clustering of weight than height from the 2nd to 7th observations; these six persons have very similar weights. From the 8th to 11th observations, the distribution is quite uniform. For the distribution of the money variable, type:
> summ(money)
Money has the most skewed distribution. The first seven persons carry less than 1,000 baht. The next two persons carry around 2,000 baht whereas the last carries 5,000 baht, far away (in the X-axis) from the others. This is somewhat consistent with a theoretical exponential distribution.
The graph shows that four out of eleven (36.4%, as shown in the textual statistics) are male. When the variable is factor and has been labelled, the values will show the name of the group.
[Sorted dot chart: Distribution of sex, with the labelled values F and M on the X-axis]
[Frequency chart: Distribution of sex, F = 7 and M = 4]
Since there are two sexes, we may simply compare the distributions of height by sex.
> summ(ht, by=sex)
For sex = F
Obs. mean median s.d.  min. max.
7    151  155    14.51 120  163

For sex = M
Obs. mean median s.d.  min. max.
4    168  168.5  3.916 163  172
[Sorted dot chart: Distribution of Ht(cm.) by sex, X-axis from 120 to 170]
Dotplot
In addition to 'summ()' and 'tab1()', Epicalc has another exploration tool called 'dotplot()'.
> dotplot(money)
While the graph created from the 'summ' command plots individual values against their rank, 'dotplot' divides the scale into several small, equally sized bins (default = 40) and stacks each record into its corresponding bin. In the figure above, there are three observations in the leftmost bin and one in the rightmost bin. The plot is very similar to a histogram except that the original values appear on the X-axis. Most people are more acquainted with this kind of dot plot than with the sorted dot chart produced by 'summ'. However, the latter plot gives more detailed information with better accuracy. When the sample size is small, plots by 'summ' are more informative. When the sample size is large (say above 200), 'dotplot' is more understandable to most people.
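As a rough comparison (a sketch on simulated data, not from the original text), the two displays can be placed side by side for a larger sample:

> x <- rnorm(300, mean=50, sd=10)   # a hypothetical larger sample
> summ(x)      # sorted dot chart: one point per observation
> dotplot(x)   # frequency dot plot: observations stacked into bins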
The command 'summ' easily produces a powerful graph. One may want to show even more information. R can serve most purposes, but the user must spend some time learning it. Let's draw a sorted dot chart for the heights. The commands below should be followed step by step to see the change in the graphics window resulting from typing in each line. If you make a serious mistake, simply start again from the first line. Using the up arrow key, the previous commands can be edited before being executed again.
> zap()
> data(Familydata)
> use(Familydata)
> sortBy(ht)
> .data
The command 'sortBy', unlike its R equivalent 'sort', has a permanent effect on '.data'. The whole data frame has been sorted in ascending order by the value of height.
> dotchart(ht)
Had the data not been sorted, the incremental pattern would not be seen.
> dotchart(ht, col=unclass(sex), pch=18)
Showing separate colours for each sex is done using the 'unclass' function. Since 'sex' is a factor, 'unclass'ing it gives a numeric vector with 1 for the first level (female) and 2 for the second level (male). Colours can be specified in several different ways in R. One simple way is to utilise a small table of colours known as the 'palette', where the number 1 represents black and the number 2 represents the colour red. Thus the black dots represent females and the red dots represent males. More details of the existing 'palette' can be found in the help pages. To add the y-axis, type the following command:
> axis(side=2,at=1:length(ht), labels=code, las=1)
The argument 'las' is a graphical parameter, which specifies the orientation of tick labelling on the axes. When 'las=1', all the labels of the ticks will be horizontal to the axis. A legend is added using the 'legend' command:
> legend(x=130, y=10, legend=c("female","male"), pch=18, col=1:2, text.col=1:2)
The argument 'pch' stands for point or plotting character. Code 18 means the symbol is a solid diamond shape which is more prominent than pch=1 (a hollow round dot). Note that 'col' is for dot colours and 'text.col' is for text in the legend. To add the titles type:
> title(main="Distribution of height") > title(xlab="cms")
[Sorted dot chart: Distribution of height, in cms; subject codes (J, D, E, H, A, G, I, F, C, B, K) label the Y-axis, with a legend distinguishing female and male points, X-axis from 120 to 170]
To summarise: after 'use(datafile)', 'des' and 'summ', individual variables can be explored simply with 'summ(var.name)' and 'summ(var.name, by=group.var)'. In addition to summary statistics, the sorted dot chart can be very informative. The 'dotplot' command trades the accuracy of the individual values for a frequency dot plot, which is similar to a histogram. Further use of this command will be demonstrated when the number of observations is larger.
Exercises
Try the following simulations for varying sample sizes and number of groups. Compare the graph of different types from using three commands, 'summ', 'dotplot' and 'boxplot'. For each condition, which type of graph is the best? ## Small sample size, two groups.
> grouping1 <- rep(1:2, times=5)
> random1 <- rnorm(10, mean=grouping1, sd=1)
> summ(random1, by=grouping1)
> dotplot(random1, by=grouping1)
> boxplot(random1 ~ grouping1)
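One possible variation with a larger sample and three groups might look like this (a sketch; the object names are arbitrary):

## Large sample size, three groups.
> grouping2 <- rep(1:3, times=100)
> random2 <- rnorm(300, mean=grouping2, sd=1)
> summ(random2, by=grouping2)
> dotplot(random2, by=grouping2)
> boxplot(random2 ~ grouping2)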
One of the purposes of an epidemiological study is to describe the distribution of a population's health status in terms of time, place and person. Most data analyses, however, deal more with the person than with time and place. In this chapter, the emphasis will be on time. Time units include the century, year, month, day, hour, minute and second. The most common unit directly involved in epidemiological research is the day, and the chronological location of a day is its date, which is a serial function of year, month and day. There are several common examples of the use of dates in epidemiological studies. Birth date is necessary for the computation of accurate age. In an outbreak investigation, description of the dates of exposure and onset is crucial for computation of the incubation period. In follow-up studies, the follow-up time is usually marked by the date of visit. In survival analysis, the dates of starting treatment and of assessing the outcome are the elements needed to compute survival time.
> a <- as.Date("1970-01-01")
> as.numeric(a)
[1] 0

The first command above creates an object 'a' as a 'Date' object. When converted to numeric, the value is 0. Day 100 would be
> a + 100 [1] "1970-04-11"
The default display format in R for a 'Date' object is ISO format. The American format of 'month day, year' can be achieved by
> format(a, "%b %d, %Y") [1] "Jan 01, 1970"
The function 'format' displays the object 'a' in a fashion chosen by the user. '%b' denotes the month in the three-character abbreviated form. '%d' denotes the day value and '%Y' denotes the value of the year, including the century. Under some operating system conditions, such as the Thai Windows operating system, '%b' and '%a' may not work or may present some problems with fonts. Try the following command:
> Sys.setlocale("LC_ALL", "C")
Now try the above format command again. This time it should work. R has a 'locale', or working location, set by the operating system, which varies from country to country. The "C" locale is R's native locale and its language is American English. '%A' and '%a' are formats for full and abbreviated weekday names while '%B' and '%b' are for month names. These are language and operating system dependent. Try these:
> b <- a + (0:3) > b
Then change the language and see the effect on the R console and graphics device.
> setTitle("German"); summ(b) > setTitle("French"); summ(b) > setTitle("Italian"); summ(b)
The command 'setTitle' changes the locale as well as the fixed wording of the locale to match it. To see what languages are currently available in Epicalc try:
> titleString() > titleString(return.look.up.table=TRUE)
Note that these languages all use standard ASCII text characters. The displayed results from these commands will depend on operating system. Thai and Chinese versions of Windows may give different results. You may try 'setTitle' with different locales. To reset the system to your original default values, type
> setTitle("")
For languages with non-standard ASCII characters, the three phrases often used in
Epicalc ("Distribution of", "by", and "Frequency") can be changed to your own language. For more details see the help for the 'titleString' function. Manipulation of title strings, variable labels and levels of factors using your own language means you can have the automatic graphs tailored to your own needs. This is however a bit too complicated to demonstrate in this book. Interested readers can contact the author for more information. Epicalc displays the results of the 'summ' function in ISO format to avoid country biases. The graphic results in only a few range of days, like the vector 'b', has the Xaxis tick mark labels in '%a%d%b' format. Note that '%a' denotes weekday in the three-character abbreviated form In case the dates are not properly displayed, just solve the problem by typing:
> Sys.setlocale("LC_ALL", "C")
Then, check whether the date format containing '%a' and '%b' works.
> format(b, "%a %d%b%y") [1] "Thu 01Jan70" "Fri 02Jan70" "Sat 03Jan70" "Sun 04Jan70" > summ(b)
[Sorted dot chart: Distribution of b, with X-axis tick labels in '%a%d%b' format (Fri02Jan, Sat03Jan, Sun04Jan)]
R can read date variables from Stata files directly, but not from older versions of EpiInfo which use the <dd/mm/yy> format; these will be read in as 'character' or 'AsIs'. Spreadsheet software such as Excel usually converts date variables to characters when exporting to a text (.csv) file. When reading in data from a comma separated values (.csv) file, it is a good habit to put the argument 'as.is = TRUE' in the 'read.csv' command to avoid date variables being changed into factors. It is therefore necessary to know how to read in date variables from character format. Create a vector of three dates stored as characters:
> date1 <- c("07/13/2004","08/01/2004","03/13/2005") > class(date1) [1] "character" > date2 <- as.Date(date1, "%m/%d/%Y")
The format or sequence of the original characters must be reviewed. In the first element of 'date1', '13', which can only be day (since there are only 12 months), is in the middle position, thus '%d' must also be in the middle position. Slashes '/' separate month, day and year. This must be correspondingly specified in the format of the 'as.Date' command.
> date2 [1] "2004-07-13" "2004-08-01" "2005-03-13" > class(date2) [1] "Date"
The default date format is "%Y-%m-%d". Changing into the format commonly used in Epicalc is achieved by:
> format(date2, "%d%b%y") [1] "13Jul04" "01Aug04" "13Mar05"
'%b' represents the short version of the month in character. '%y' represents the year without the century while '%Y' represents the year with the century. Other formats can be further explored by the following commands:
> help (format.Date) > help (format.POSIXct)
It is not necessary to have all day, month and year presented. For example, if only month is to be displayed, you can type:
> format (date2, "%B") [1] "July" "August" "March"
"Sunday"
Conversely, if there are two or more variables that are parts of date:
> day1 <- c("12","13","14"); > month1 <- c("07","08","12") > paste(day1, month1) [1] "12 07" "13 08" "14 12" > as.Date(paste(day1,month1), "%d %m") [1] "2005-07-12" "2005-08-13" "2005-12-14"
The function 'paste' joins two character variables together. When the year value is omitted, R automatically adds the current year of the system in the computer.
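If the year is also available, it can be supplied explicitly instead (a sketch with a hypothetical 'year1' vector):

> year1 <- c("2004","2004","2005")
> as.Date(paste(day1, month1, year1), "%d %m %Y")
[1] "2004-07-12" "2004-08-13" "2005-12-14"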
[des() output for the Timing dataset, 11 variables: id, gender, age, marital, child (No. of children), bedhr (Hour to bed), bedmin (Min. to bed), wokhr (Hour woke up), wokmin (Min. woke up), arrhr (Hour arrived at wkshp), arrmin (Min. arrived at wkshp)]
> summ()
Timing questionnaire
No. of observations = 18

   Var. name Obs. mean  median s.d.  min. max.
1  id        18   9.5   9.5    5.34  1    18
2  gender    18   1.611 2      0.502 1    2
3  age       18   31.33 27.5   12.13 19   58
4  marital   18   1.611 2      0.502 1    2
5  child     18   0.33  0      0.59  0    2
6  bedhr     18   7.83  1.5    10.34 0    23
7  bedmin    18   19.83 17.5   17.22 0    45
8  wokhr     18   5.61  6      1.61  1    8
9  wokmin    18   23.83 30     17.2  0    49
10 arrhr     18   8.06  8      0.24  8    9
11 arrmin    18   27.56 29.5   12.72 0    50
The graph shows interrupted time. In fact, the day was entered incorrectly for those who went to bed before midnight: it should be 12th December, not 13th December, which was the day of the workshop. To correct this error type:
> bed.day <- ifelse(bedhr > 12, 12, 13)

The 'ifelse' function chooses the second argument if the first argument is TRUE, and the third argument otherwise: subjects whose hour of going to bed is after noon were still on the 12th, while the rest went to bed after midnight on the 13th.
> bed.time <- ISOdatetime(year=2004, month=12, day=bed.day,
                          hour=bedhr, min=bedmin, sec=0, tz="")
> summ(bed.time)
             Min.            Median              Mean              Max.
 2004-12-12 21:30  2004-12-13 00:22  2004-12-13 00:09  2004-12-13 02:30
After this, woke up time and arrival time can be created and checked.
> woke.up.time <- ISOdatetime(year=2004, month=12, day=13,
                              hour=wokhr, min=wokmin, sec=0)
> summ(woke.up.time)
             Min.            Median              Mean              Max.
 2004-12-13 01:30  2004-12-13 06:10  2004-12-13 06:00  2004-12-13 08:20
The argument 'xlim' is set to include the minimum of 'bed.time' and the maximum of 'woke.up.time'. The argument yaxt="n" suppresses the tick labels on the Y-axis.
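The plot command referred to here is not shown in this extract; a minimal sketch consistent with the description (using the objects created above) might be:

> plot(bed.time, 1:length(bed.time), pch=18, col="blue",
       xlim=c(min(bed.time), max(woke.up.time)), yaxt="n")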
> points(woke.up.time, 1:length(woke.up.time), pch=18, col="red")
> abline(h=1:length(woke.up.time), lty=3)
> title(main="Distribution of Bed time and woke up time")
> title(ylab="Subject sorted by bed time")
[Sorted dot chart: distribution of arrival time at the workshop by gender (female, male), X-axis from 08:00 to 09:20]
The command 'summ' works relatively well with time variables. In this case, it demonstrates that there were more females than males. Females varied their arrival time considerably. Quite a few of them arrived early because they had to prepare the workshop room. Most males who had no responsibility arrived just in time. There was one male who was slightly late and one male who was late by almost one hour.
Sleepiness among the participants in a workshop
No. of observations = 15

  Variable Class   Description
1 id       integer code
2 gender   factor  gender
3 dbirth   Date    Date of birth
4 sleepy   integer Ever felt sleepy in workshop
5 lecture  integer Sometimes sleepy in lecture
6 grwork   integer Sometimes sleepy in group work
7 kg       integer Weight in Kg
8 cm       integer Height in cm
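The command that created the variable 'age' is not shown in this extract. A minimal sketch consistent with what follows (age in days from date of birth up to the workshop date, here assumed to be 13 December 2004) might be:

> age <- as.Date("2004-12-13") - dbirth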
The variable 'age' has class 'time difference' as can be seen by typing:
> class(age)
[1] "difftime"
To display age
> age
Time differences of  7488, 10557,  8934,  9405, 11518, 11982, 10741,
11122, 12845,  9266, 11508, 12732, 11912,  7315,    NA days
> summ(age)
Obs. mean   median  max.
15   10520  10930   12850
[Sorted dot chart: Distribution of age, X-axis in days from 8000 to 13000]
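The command creating 'age.in.year' is likewise not shown here; a minimal sketch, assuming a year of 365.25 days, might be:

> age.in.year <- as.numeric(age)/365.25
> label.var(age.in.year, "Age in years")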
> summ(age.in.year, by=gender)
For gender = male
Obs. mean  median s.d.  min.  max.
4    29.83 32.06  6.712 20.03 35.17

For gender = female
Obs. mean median s.d.  min. max.
10   28.4 29.16  4.353 20.5 34.86
[Sorted dot chart: distribution of age in years by gender (female, male), X-axis from 20 to 35 years]
Note that there is a blank dotted line at the top of the female group. This is a missing value. Males have an obviously smaller sample size and the same range as the women, but most of their observations have relatively high values.
Exercises
In the Timing dataset: compute the time from waking up to arrival at the workshop, then plot time to bed, time woke up and arrival time on the same axis.
An outbreak investigation is a task commonly assigned to an epidemiologist. This chapter illustrates how such data can be described effectively. The time and date data types are not well prepared in the raw dataset and must be modified to suit the needs of the descriptive analysis. The dataset Outbreak was obtained from the EpiInfo software; R and Epicalc are used here to analyse the data. On 25 August 1990, the local health officer in Supan Buri Province of Thailand reported the occurrence of an outbreak of acute gastrointestinal illness on a national handicapped sports day. Dr Lakkana Thaikruea and her colleagues went to investigate. The dataset is called Outbreak and was created using EpiInfo. Most variable names are self-explanatory. Variables are coded as 0 = no, 1 = yes and 9 = missing/unknown for three food items consumed by participants: 'beefcurry' (beef curry), 'saltegg' (salted eggs) and 'water'. Also on the menu were eclairs, a finger-shaped iced cake of choux pastry filled with cream; this variable records the number of pieces eaten by each participant. Missing values were coded as follows: 88 = "ate but do not remember how much", while code 90 represents totally missing information. Some participants experienced gastrointestinal symptoms, such as 'nausea', 'vomiting', 'abdpain' (abdominal pain) and 'diarrhea'. The ages of the participants are recorded in years, with 99 representing a missing value. The variables 'exptime' and 'onset' are the exposure and onset times, which are in character format, or 'AsIs' in R terminology.
Quick exploration
Let's look at the data. Type the following at the R console:
> zap()
> data(Outbreak)
> use(Outbreak)
> des()
   Variable  Class    Description
1  id        numeric
2  sex       numeric
3  age       numeric
4  exptime   AsIs
5  beefcurry numeric
6  saltegg   numeric
7  eclair    numeric
8  water     numeric
9  onset     AsIs
10 nausea    numeric
11 vomiting  numeric
12 abdpain   numeric
13 diarrhea  numeric
> summ()
No. of observations = 1094

   Var. name valid obs. mean  median s.d.   min. max.
1  id              1094 547.5 547.5  315.95 1    1094
2  sex             1094 0.66  1      0.47   0    1
3  age             1094 23.69 18     19.67  1    99
4  exptime
5  beefcurry       1094 0.95  1      0.61   0    9
6  saltegg         1094 0.96  1      0.61   0    9
7  eclair          1094 11.48 2      27.75  0    90
8  water           1094 1.02  1      0.61   0    9
9  onset
10 nausea          1094 0.4   0      0.49   0    1
11 vomiting        1094 0.38  0      0.49   0    1
12 abdpain         1094 0.35  0      0.48   0    1
13 diarrhea        1094 0.21  0      0.41   0    1
We will first define the cases, examine the timing in this chapter and investigate the cause in the next section.
Case definition
It was agreed among the investigators that a case should be defined as a person who had any of the four symptoms: 'nausea', 'vomiting', 'abdpain' or 'diarrhea'. A common-sense definition of a case can then be computed as follows:
> case <- (nausea==1) | (vomiting==1) | (abdpain==1) | (diarrhea==1)
To incorporate this new variable into '.data', we simply put a variable label on it, using the function label.var.
> label.var(case, "diseased")
The variable 'case' is now incorporated into '.data' as the 14th variable along with a variable description.
> des()
Timing of exposure
For the exposure time, first look at the structure of this character variable.
> exptime[1:3] [1] "25330825180000" "25330825180000" "25330825180000"
The variable consists of fourteen digits. The first four give the year in B.E. (Buddhist Era), which is A.D. + 543. The 5th and 6th digits are the month, the 7th and 8th the day, the 9th and 10th the hour, the 11th and 12th the minute and the 13th and 14th the second. The year and month are fixed at 2533 (1990) and 08, whereas day, hour and minute vary and the seconds are all zero.
> day.exptime <- substr(exptime, 7, 8)
The R command 'substr' (from substring), which extracts parts of character vectors, works according to the positions of the digits in the 'exptime' variable. First, let's look at the day of exposure.
> tab1(day.exptime)
day.exptime :
        Frequency  %(NA+)  cum.%(NA+)
25           1055    96.4        96.4
<NA>           39     3.6       100.0
  Total      1094   100.0       100.0
The day of exposure was 25th of August for all records (ignoring the 39 missing values).
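As an aside (a small sketch, not part of the original analysis), the first four digits can be checked in the same way; subtracting 543 converts the Buddhist Era year to A.D.:

> year.exptime <- as.numeric(substr(exptime, 1, 4)) - 543
> tab1(year.exptime)   # should show 1990 for all non-missing records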
> hr.exptime <- substr(exptime, 9, 10)
> tab1(hr.exptime)
> min.exptime <- substr(exptime, 11, 12)
> tab1(min.exptime)
These are also acceptable, although most minutes have been rounded to the nearest hour or half hour. The time of exposure can now be calculated.
> time.expose <- ISOdatetime (year=1990, month=8, day= day.exptime, hour=hr.exptime, min=min.exptime, sec=0)
Then, the variable is labelled in order to integrate it into the default data frame.
> label.var(time.expose, "time of exposure")
> summ(time.expose)
             Min.            Median              Mean              Max.
 1990-08-25 11:00  1990-08-25 18:00  1990-08-25 18:06  1990-08-25 21:00
[Graphs: sorted dot chart (from 'summ') and frequency dot plot of the distribution of time of exposure, X-axis (HH:MM) from 11:00 to 21:00]
Almost all the exposure times were during dinner, between 6 and 7 o'clock in the evening, while only a few were during lunchtime.
The remaining manipulation of 'onset' is the same as that for time of exposure.
> day.onset <- substr(onset, 7, 8)
> tab1(day.onset)
day.onset :
        Frequency  %(NA+)  cum.%(NA+)  %(NA-)  cum.%(NA-)
25            429    39.2        39.2    92.9        92.9
26             33     3.0        42.2     7.1       100.0
<NA>          632    57.8       100.0     0.0       100.0
  Total      1094   100.0       100.0   100.0       100.0
Of the subjects interviewed, 57.8% had missing values for onset and subsequently on the derived variables such as 'day.onset'. This was due to having no symptoms or the subject could not remember. Among those who reported the time, 92.9% had the onset on 25 August 1990. The remaining 7.1% had it on the day after.
> hr.onset <- substr(onset, 9, 10)
> tab1(hr.onset)
> min.onset <- substr(onset, 11, 12)
> tab1(min.onset)
> time.onset <- ISOdatetime(year=1990, month=8, day=day.onset,
                            hour=hr.onset, min=min.onset, sec=0, tz="")
> label.var(time.onset, "time of onset")
> summ(time.onset)
[Sorted dot chart: Distribution of time of onset, X-axis (HH:MM) from 15:00 on 25 August to 09:00 the following morning]
The upper part of the graph is empty due to the many missing values. Perhaps a better visual display can be obtained with a dotplot.
> dotplot (time.onset)
[Frequency dot plot: Distribution of time of onset, X-axis (HH:MM) from 15:00 to 09:00]
Both graphs show the classic single-peak epidemic curve, suggesting a single point source. The earliest case had its onset at 3 pm on the afternoon of August 25. The majority of cases had their onset in the late evening. By the next morning, only a few cases were seen. The last reported case occurred at 9 am on August 26.
Incubation period
The analysis for incubation period is straightforward.
> incubation.period <- time.onset - time.expose
> label.var(incubation.period, "incubation period")
> summ(incubation.period)
Valid obs. mean  median s.d. min. max.
       462 3.631 3.5    1.28 1    14.5
> dotplot(incubation.period, las=1)
[Frequency dot plot: incubation period in hours, from about 1 to 14 hours]
Paired plot
We now try putting the exposure and onset times in the same graph. A sorted graph usually gives more information, so the whole data frame is now sorted.
> sortBy(time.expose)
With this large sample size, it is better to confine the graph to complete values of both 'time.expose' and 'time.onset'. This subset is kept as another data frame called 'data.for.graph'.
> data.for.graph <- subset(.data, (!is.na(time.onset) & !is.na(time.expose)),
                           select = c(time.onset, time.expose))
> des(data.for.graph)
No. of observations = 462

  Variable    Class  Description
1 time.onset  POSIXt
2 time.expose POSIXt
There are only two variables in this data frame. All the missing values have been removed leaving 462 records for plotting.
> plot(data.for.graph$time.expose, 1:nrow(data.for.graph),
       col="red", pch=20,
       xlim = c(min(data.for.graph$time.expose),
                max(data.for.graph$time.onset)),
       main = "Exposure time & onset of food poisoning outbreak",
       xlab = "Time (HH:MM)",
       ylab = "Sorted subject ID by Exposure Time")
The variable in the horizontal axis, 'time.expose', is prefixed with the name of its parent data frame 'data.for.graph'. The plot pattern looks similar to that produced by 'summ(time.expose)'. The point character, 'pch', is 20, which plots small solid
circles, thus avoiding too much overlapping of the dots. The limits on the horizontal axis are from the minimum of time of exposure to the maximum of the time of onset, allowing the points of onset to be put on the same graph. These points are added in the following command:
> points(data.for.graph$time.onset, 1:nrow(data.for.graph), col="blue", pch=20)
The two sets of points are paired by subjects. A line joining each pair is now drawn by the next 'for' loop command.
> for(i in 1:nrow(data.for.graph)) {
    lines(x = c(data.for.graph$time.expose[i],
                data.for.graph$time.onset[i]),
          y = c(i, i), col = "grey45")
  }
In the above 'for' loop, for each pair, from the first (i equal to 1) to the last (i equal to 462), a line is drawn in a medium grey colour. The list of colours used by R can be found with 'colours()'. A legend is inserted to make the graph self-explanatory.
> legend(x = ISOdatetime(1990,8,26,0,0,0), y = 150, legend = c("Exposure time", "Onset", "incubation period"), pch = c(20, 20, -1), lty=c(0,0,1), col = c("red","blue","grey45"), bg="lavender")
The left upper corner of the legend is located in the right lower quadrant of the graph, with the x coordinate at midnight and the y coordinate at 150. The legend consists of three items, as indicated by the character vector. The point characters, 'pch', and colours of the legend are specified in accordance with those inside the graph. The last item, the incubation period, has 'pch' equal to -1, indicating that no point is to be drawn. The line type, 'lty', of exposure time and onset is 0 (no line) whereas that for the incubation period is 1 (solid line). The colours of the points and the lines correspond to those in the graph. The background of the legend was given a lavender colour to supersede any lines or points behind the legend. Finally, the dense area of the incubation period lines is a good place to put text describing the key statistic of this variable.
> text(x = ISOdatetime(1990, 8, 25, 19, 0, 0), y = 200, labels = "median incubation period = 3.5 hours", srt = 90, col = "white")
[Graph: Exposure time & onset of food poisoning outbreak; Time (HH:MM) from 11:00 to 07:00 on the X-axis and sorted subject ID (0 to about 400) on the Y-axis]
The middle of the text is located at x = 19:00 and y = 200 in the graph. The parameter 'srt' comes from 'string rotation'. In this case a rotation of 90 degrees gives the best picture. Since the background colour is already grey, white text would be suitable. Analysis of timing data has finished. The main data frame '.data' is saved for further use in the next chapter.
> save(.data, file = "Chapter7.Rdata")
Reference
Thaikruea, L., Pataraarechachai, J., Savanpunyalert, P., Naluponjiragul, U. 1995 An unusual outbreak of food poisoning. Southeast Asian J Trop Med Public Health 26(1):78-85.
Exercise
We recode the original time variable 'onset' right from the beginning using the command
> onset[!case] <- NA
For the data that we are passing to the next chapter, has the variable 'onset' been changed? If not, why not, and how can we make the change permanent?
The next step in analysing the outbreak is to deal with the level of risk. However, let's first load the data saved from the preceding chapter.
> zap()
> load("Chapter7.Rdata")
> ls(all=TRUE)  # '.data' is there
> search()      # no dataset in the search path
> attach(.data)
> search()      # '.data' is ready for use
> des()
The variables with the same recoding scheme, 9 to missing value, are 'beefcurry', 'saltegg' and 'water'. They can be recoded together in one step as follows:
> recode(vars = c(beefcurry, saltegg, water), 9, NA)
For 'eclair', the absolute missing value is 90. This should be recoded first.
> recode(eclair, 90, NA)
All variables look fine except 'eclair', which still contains the value 80 representing "ate but do not remember how much". We will analyse its relationship with 'case' by considering it as an ordered categorical variable. At this stage, cross-tabulation can be performed using the Epicalc command 'tabpct'.
> tabpct(eclair, case)

[Mosaic plot from 'tabpct': columns of 'eclair' consumption against 'diseased' status (FALSE/TRUE)]
The width of the columns of the mosaic graph denotes the relative frequency of that category. The highest frequency is 2 pieces followed by 0 and 1 piece. The other numbers have relatively low frequencies; particularly the 5 records where 'eclair' was coded as 80. There is a tendency of increasing red area or attack rate from left to right indicating that the risk was increased when more pieces of eclair were consumed. We will use the distribution of these proportions to guide our grouping of eclair consumption. The first column of zero consumption has a very low attack rate, therefore it should be a separate category. Only a few took half a piece and this could be combined with those who took only one piece. Persons consuming 2 pieces should be kept as one category as their frequency is very high. Others who ate more than two pieces should be grouped into another category. Finally, those coded as '80' will be dropped due to the unknown amount of consumption as well as its low frequency.
> eclairgr <- cut(eclair, breaks = c(0, 0.4, 1, 2, 79), include.lowest = TRUE, labels=c("0","1","2",">2"))
The argument 'include.lowest=TRUE' indicates that 0 eclair must be included in the lowest category. It is a good practice to label the new variable in order to describe it as well as put it into '.data'.
> label.var(eclairgr, "pieces of eclair eaten") > tabpct(eclairgr, case) ======== lines omitted =========
Row percent
                 diseased
pieces of eclair  FALSE    TRUE   Total
               0    279      15     294
                 (94.9)   (5.1)   (100)
               1     54      51     105
                 (51.4)  (48.6)   (100)
               2    203     243     446
                 (45.5)  (54.5)   (100)
              >2     38      89     127
                 (29.9)  (70.1)   (100)
======== lines omitted =========
[Mosaic plot: pieces of eclair eaten against diseased status (FALSE/TRUE)]
The attack rate, or percentage of diseased in each category of exposure (shown in brackets in the TRUE column), increases from 5.1% among those who did not eat any eclairs to 70.1% among the heavy eaters of eclair. The graph output is similar to the preceding one except that the groups are more concise. We now have a continuous variable 'eclair' and a categorical variable 'eclairgr'. The next step is to create a binary exposure for eclair.
> eclair.eat <- eclair > 0 > label.var(eclair.eat, "eating eclair")
This binary exposure variable is now similar to the others, such as 'beefcurry', 'saltegg' and 'water'.
[Graph: age by sex (Male, Female), axis values 0 to 50]
An alternative is to draw a population pyramid of age and sex, using the Epicalc function pyramid, as follows:
> pyramid(age, sex)
[Population pyramid of age and sex: 5-year age groups from [0,5] to (55,60], with male and female counts up to about 200 on either side]
From the resulting graph, young adult males (aged 10-20 years) predominated. The binwidth can also be changed to have fewer age groups.
> pyramid(age, sex, binwidth = 15)
The table generated by the pyramid function can also be shown, as follows:
> pyramid(age, sex, printable=TRUE)
Tabulation of age by sex (frequency).
         sex
age       Female Male
  [0,5]        1    1
  (5,10]      12    7
  (10,15]    170  217
  (15,20]     81  223
  (20,25]     25  112
  (25,30]     41   54
  (30,35]     23   20
  (35,40]      7   10
  (40,45]      5    8
  (45,50]      3   12
  (50,55]      0    1
  (55,60]      0    1
Tabulation of age by sex (percentage of each sex).
          Female   Male
[0,5]      0.272  0.150
(5,10]     3.261  1.051
(10,15]   46.196 32.583
(15,20]   22.011 33.483
(20,25]    6.793 16.817
(25,30]   11.141  8.108
(30,35]    6.250  3.003
(35,40]    1.902  1.502
(40,45]    1.359  1.201
(45,50]    0.815  1.802
(50,55]    0.000  0.150
(55,60]    0.000  0.150
Finally, both the table and age group can be saved as R objects for future use.
> age.tab <- pyramid(age, sex)
> age.tab
> ageGrp <- age.tab$ageGroup
> label.var(ageGrp, "Age Group")
> des()
> des("age*")
The des function can also display variables using wild card matching.
> des("????????") No. of observations =1094 Variable Class 11 vomiting numeric 13 diarrhea numeric 18 eclairgr factor
Description
We have spent some time learning these features of Epicalc for data exploration (a topic of the next chapter). Let's return to the analysis of risk, which is another main feature of Epicalc.
The risk ratio expresses a relative increase in risk, hence the mathematical notation of a 'multiplicative model'. Risk difference, on the other hand, suggests the amount of risk gained or lost had the subject changed from non-exposed to exposed. The increase is absolute, and the mathematical notation is an 'additive model'. The Epicalc command 'cs' is used to analyse such relationships.
> cs(case, eclair.eat)
          eating eclair
case       FALSE   TRUE  Total
  FALSE      279    300    579
  TRUE        15    383    398
  Total      294    683    977
              Rne     Re     Rt
  Risk       0.05   0.56   0.41

                                        Estimate Lower95 Upper95
 Risk difference (attributable risk)        0.51    0.44    0.58
 Risk ratio                                10.99    8       15.1
 Attr. frac. exp. -- (Re-Rne)/Re            0.91
 Attr. frac. pop. -- (Rt-Rne)/Rt*100 %     87.48
'Rne', 'Re' and 'Rt' are the risks in the non-exposed, the exposed and the total population, respectively. 'Rne' in this instance is 15/294 = 0.05. Similarly, 'Re' is 383/683 = 0.56 and 'Rt' is 398/977 = 0.41. The risk difference is Re - Rne, an absolute increase of 51%, whereas the risk ratio is Re/Rne, an increase of 11 fold. The risk of getting the disease among those eating eclairs could have been reduced by 91%, and the risk among all participants in the sports carnival could have been reduced by 87.5%, had they not eaten any eclairs. The risk ratio is an important indicator for causation: a risk ratio above 10 would strongly suggest a causal relationship. The risk difference has more public health implications than the risk ratio. A high risk ratio may not be of public health importance if the disease is very rare. The risk difference, on the other hand, measures the direct health burden and the need for health services. Those who ate eclairs had a high chance (55%) of getting symptoms, so a reduction of 51% substantially reduces the burden on the sports game attendants and on the hospital services. The attributable fraction of the population indicates that the number of cases could have been reduced by 87% had the eclairs not been contaminated. This outbreak was transient when compared with a chronic, overwhelming problem such as cardiovascular disease or cancer; even a relatively low fraction of risk attributable to tobacco in the population, say 20%, could lead to a huge amount of resources spent in health services. The attributable fraction of the exposed has little to do with the level of disease burden in the
population. It is equal to 1 - 1/RR, and is therefore just another way to express the risk ratio.
We have eclair as a cause of the disease. There are some interventions that can prevent such diseases, for example vaccination, education, law enforcement and improvement of the environment. In our example, let's assume that not eating eclairs is a prevention process.
> eclair.no <- !eclair.eat
> cs(case, eclair.no)
         eclair.no
case     FALSE  TRUE  Total
FALSE      300   279    579
TRUE       383    15    398
Total      683   294    977

           Rne    Re    Rt
Risk      0.56  0.05  0.41

                                    Estimate Lower95 Upper95
Risk difference (absolute change)      -0.51   -0.44   -0.58
Risk ratio                              0.09    0.12    0.07
protective efficacy (%)                 90.9
Number needed to treat (NNT)            1.96
The risk among the exposed (not eating eclair) is lower than that among the non-exposed (eating eclair). The risk difference changes sign to negative. The risk ratio reciprocates to a small value of 0.09. Instead of displaying the attributable fraction exposure and attributable fraction population, the command shows the protective efficacy and the number needed to treat (NNT). From the protective efficacy value, the exposure to the prevention program would have reduced the risk of the eclair eater (unexposed under this hypothetical condition) by 90.9%.
NNT is just the reciprocal of the negative of the risk difference. A risk reduction of 0.51 comes from an intervention on one individual; a reduction of 1 would need to come from an intervention on 1/0.51 or 1.96 individuals. An intervention with a high NNT would need to be given to many individuals just to avert one unwanted event. The lowest possible level of NNT is 1, or perfect prevention, which also has 100% protective efficacy. NNT is one measure of the worthiness of an intervention (either prevention or treatment) technology. To avert the same type of unwanted event, an intervention with a low NNT is preferred to another with a high NNT, although the cost must also be taken into account.
Dose-response relationship
One of the criteria of causation is dose-response relationship. If a higher dose of exposure is associated with a higher level of risk in a linear fashion, the exposure is likely to be the cause. We now explore the relationship between the risk of getting the disease and the number of eclairs consumed.
> cs(case, eclairgr)
               eclairgr
case              0      1
  FALSE         279     54
  TRUE           15     51
Absolute risk  0.05   0.49
Risk ratio        1   9.52
lower 95% CI          6.6
upper 95% CI        13.72
(Figure: risk ratio plotted against eclair consumption group: 0, 1, 2 and >2.)
The risk ratio increases as the dose of exposure to eclairs increases. The step from not eating to the first group (up to one piece) is wide whereas further increases are shown at a flatter slope. The p values in the output are both zero. In fact, they are not really zero, but have been rounded to three decimal places. The default rounding of decimals of odds ratios and relative risks is two and for the p-values is three. See 'help (cs)' for more details on the arguments. Before finishing this chapter, the current data is saved for further use.
> save(.data, file = "Chapter8.Rdata")
Exercise
Compute the attributable risk and risk ratio of 'beefcurry', 'saltegg' and 'water'. Are these statistically significant? If so, what are the possible reasons?
Having assessed various parameters of risk for participants in the outbreak in the last chapter, we now focus on confounding among the various types of food, moving from the assessment of a single possible cause to the level of risk associated with each exposure. Let's first load the data saved from the preceding chapter.
> zap()
> load("Chapter8.Rdata")
> attach(.data)
The probability of being a case is 469/1094 or 42.9%. In this situation where noncases are coded as 0 and cases as 1, the probability is
> mean(case)
Note that when there are missing values in the variable, the 'mean' must have 'na.rm =TRUE' in the argument. For example the odds of eating eclairs is:
> m.eclair <- mean(eclair.eat, na.rm = TRUE)
> m.eclair/(1 - m.eclair)
[1] 2.323129
While a probability always ranges from 0 to 1, an odds ranges from 0 to infinity. For a cohort study we may compute the ratios of the odds of being a case among the exposed vs the odds among the non-exposed.
> table(case, eclair.eat)
       eclair.eat
case    FALSE  TRUE
  FALSE   279   300
  TRUE     15   383
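The odds of being a case among the exposed is 383/300 and among the non-exposed is 15/279. The original calculation is not reproduced on this page; a minimal sketch using the cell counts above would be:
> (383/300)/(15/279)   # odds among exposed divided by odds among non-exposed
[1] 23.746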
This is the same value as the ratio of the odds of being exposed among cases and among non-cases.
> (383/15)/(300/279)
Epicalc has a command 'cc' producing odds ratio, its 95% confidence interval, performing the chi-squared and Fisher's exact tests and drawing a graph for the explanation.
> cc(case, eclair.eat)
         eating eclair
case     FALSE  TRUE  Total
FALSE      279   300    579
TRUE        15   383    398
Total      294   683    977

OR = 23.68
95% CI = 13.74, 43.86
Chi-squared = 221.21 , 1 d.f. , P value = 0
Fisher's exact test (2-sided) P value = 0
The value of odds ratio from the 'cc' function is slightly different from the calculations that we have done. This is because the 'cc' function computes the 'exact odds ratio' along with Fisher's exact test.
(Figure: odds of the outcome, on a log scale from 1/16 to 1/4, among the non-exposed and exposed groups.)
The vertical lines of the graph above show the estimates and 95% confidence intervals of the two odds of being diseased, non-exposed on the left and exposed on the right, computed by the conventional method. The size of the box at each estimate reflects the relative sample size of the subgroup. There were more exposed than non-exposed. The non-exposed group has an estimate slightly below 1/16, since its real value is 15/279. The exposed group estimate is 383/300, or slightly higher than 1. The latter value is over 23 times that of the former.
> fisher.test(table(case, eclair.eat))$estimate
odds ratio 
    23.681 
> fisher.test(table(case, eclair.eat))$conf.int
[1] 13.736 43.862
attr(,"conf.level")
[1] 0.95
The total number of valid records for this computation is 1,089, which is higher than the 977 in the cross-tabulation between 'case' and 'eclair.eat'. The value of the odds ratio is not as high, but it is statistically significant. As in the analysis of the odds ratio for 'eclair', the size of the box on the right is much larger than that on the left, indicating a large proportion of exposure. Both eclairs and salted eggs have significant odds ratios and were consumed by a large proportion of participants. Let's check the association between these two variables.
> cc(saltegg, eclair.eat, graph = FALSE)
         eating eclair
saltegg  FALSE  TRUE  Total
0           53    31     84
1          241   647    888
Total      294   678    972

OR = 4.58
95% CI = 2.81, 7.58
Chi-squared = 47.02 , 1 d.f. , P value = 0
Fisher's exact test (2-sided) P value = 0
There might be only one real cause and the other was just confounded. In other words, those participants who ate salted eggs also tended to eat eclairs. Stratified analysis gives the details of confounding as follows.
> mhor(case, saltegg, eclair.eat)
(Figure: stratified analysis by eclair.eat — odds of the outcome among salted egg consumers and non-consumers in each stratum; eclair.eatFALSE: OR = 0.87 (0.22, 5), eclair.eatTRUE: OR = 1.07 (0.48, 2.36), MH-OR = 1.02 (0.54, 1.93).)
Stratified analysis by eclair.eat
                      OR  lower lim.  upper lim.  P value
eclair.eat FALSE   0.874       0.224        5.00    0.739
eclair.eat TRUE    1.073       0.481        2.36    0.855
M-H combined       1.023       0.541        1.93    0.944
M-H Chi2(1) = 0 , P value = 0.944
Homogeneity test, chi-squared 1 d.f. = 0.07 , P value = 0.787
The above analysis of the association between the disease and salted egg is stratified by level of eclair consumption, based on records that have valid values of 'case', 'eclair.eat' and 'saltegg'. There are two main parts to the results. The first part concerns the odds ratio of the exposure of interest in each stratum defined by the third variable, in this case 'eclair.eat', as well as the odds ratio and chi-squared statistic computed by Mantel-Haenszel's technique. The second part suggests whether the odds ratios of these strata can be combined. We will focus on the first part at this stage and come back to the second part later.

In both strata, the odds ratios are close to one and are not statistically significant, and the slopes of the two lines are rather flat. The Mantel-Haenszel (MH) odds ratio, which is the weighted average of the two stratum-specific odds ratios, is also close to one. Both stratum-specific odds ratios and the MH odds ratio are not significantly different from one, but the crude odds ratio is. The distortion of the crude result from the true or adjusted result is called confounding.

The mechanism of this confounding can be explained with the above graph. The upper line of the graph denotes the subset, or stratum, of subjects who had eaten eclairs, whereas the lower line is for those who had not. The upper line lies far above the lower line, meaning that the subset of eclair eaters had a much higher risk than the non-eaters; the distance between the two lines corresponds to odds roughly 16 to 32 times higher.

It is important to note that the distribution of subjects in this study is imbalanced in relation to eclair and salted egg consumption. On the right-hand side (salted egg consumers), there are a lot more eclair eaters (upper box) than non-eaters (lower box). The centre of the right-hand side therefore tends to be closer to the location of the upper box. In contrast, on the left-hand side, for those not consuming salted eggs, the number of eclair non-consumers (as represented by the size of the lower box) is higher than that of the consumers, so the centre of the left-hand side tends to lie closer to the lower box. In other words, when the two strata are combined, the (weighted average) odds of being diseased among the salted egg consumers is closer to the upper box, while the weighted average odds among the non-consumers is closer to the lower box. A higher average odds on the right-hand side leads to the crude odds ratio being higher than one. This crude odds ratio misleads us into thinking that salted egg is another cause of the disease when in fact it was just confounded by eclairs.

The level of confounding is noteworthy only if both of the following two conditions are met. Firstly, the stratification factor must be an independent risk factor. Secondly, there must be a significant association between the stratification factor and the exposure of interest. Now we check whether the relationship between the disease and eclair is
confounded by 'saltegg'.

(Figure: stratified analysis by 'saltegg' — odds of the outcome among eclair eaters and non-eaters within each saltegg stratum.)
Stratified by 'saltegg', the odds ratio of eclair.eat in both strata (19.3 and 24.8) and the MH odds ratio (24.3) are strong and close to the crude odds ratio (23.68). Graphically, the two lines of strata are very close together indicating that 'saltegg' is not an independent risk factor. In each of the exposed and non-exposed groups, the odds for disease are close and the weighted average odds is therefore not influenced by the number of subjects. Thus not being an independent risk factor, a variable cannot confound another exposure variable.
Stratified analysis by beefcurry
                 OR  lower lim.  upper lim.   P value
beefcurry 0    5.33        1.53        21.7  3.12e-03
beefcurry 1   31.63       16.49        68.1  4.79e-56
M-H combined  24.08       13.85        41.9  1.39e-48
M-H Chi2(1) = 214.56 , P value = 0
Homogeneity test, chi-squared 1 d.f. = 7.23 , P value = 0.007

(Figure: Stratified prospective/X-sectional analysis — odds of the outcome by eclair consumption, stratified by beefcurry; beefcurry0: OR = 5.33 (1.53, 21.71), beefcurry1: OR = 31.63 (16.49, 68.11), MH-OR = 24.08 (13.85, 41.89); homogeneity test P value = 0.007.)
The lines of the two strata cross each other. Among those who had not eaten beef curry, the odds of getting the disease among those not eating eclair was slightly below 1 in 6. The odds increase to over 1 in 2 for those who ate eclairs only. This increase is 5.33 fold, or an odds ratio of 5.33. In contrast, the baseline odds among those eating beef curry only (the left point of the green line) is somewhere between 1 in 32 and 1 in 16, which is the lowest risk group in the graph. The odds, however, step up very sharply to over 1 among the subjects who had eaten both eclairs and beef curry. The homogeneity test in the last line concludes that the odds ratios are not homogeneous. In statistics, this is called significant interaction. In epidemiology, the effect of 'eclair' was modified by 'beefcurry'. Eating beef curry increased the harmful effect of eclair, or increased the susceptibility of the person to eclair. We now check the effect of 'beefcurry' stratified by 'eclair.eat'.
> mhor(case, beefcurry, eclair.eat)
Stratified analysis by eclair.eat
                     OR  lower lim.  upper lim.  P value
eclair.eat FALSE  0.376       0.111        1.47   0.1446
eclair.eat TRUE   2.179       1.021        4.83   0.0329
M-H combined      1.401       0.769        2.55   0.2396
M-H Chi2(1) = 1.38 , P value = 0.24
Homogeneity test, chi-squared 1 d.f. = 6.78 , P value = 0.009
(Figure: Stratified prospective/X-sectional analysis — odds of the outcome by beef curry consumption, stratified by eclair.eat; eclair.eatFALSE: OR = 0.38 (0.11, 1.47), eclair.eatTRUE: OR = 2.18 (1.02, 4.83), MH-OR = 1.4 (0.77, 2.55).)
The effect of beef curry among those not eating eclairs tends to be protective but without statistical significance. The odds ratio among those eating eclairs is 2.18, which is statistically significant. The homogeneity test also concludes that the two odds ratios are not homogeneous. The stratification factor, eclair, has modified the effect of beef curry from a non-significant protective factor to a significant risk factor. Tabulation and stratified graphs are very useful in explaining confounding and interaction. However, they are limited to only a few variables. For a dataset with a larger number of variables, logistic regression is needed. We put the new variable 'eclair.eat' into '.data' by using 'label.var' and save the whole data frame for future use with logistic regression.
> label.var(eclair.eat, "ate at least some eclair")
> save(.data, file="chapter9.Rdata")
Exercise
Analyse the effect of drinking water on the odds of the disease. Check whether it is confounded with eating eclairs or other foods. Check for interaction.
Data cleaning
The previous datasets were relatively clean. Let's try an uncleaned dataset that came from a family planning clinic in the mid 1980's. The coding scheme can be seen from
> help(Planning)
Cleaning will enable you to learn Epicalc functions for data management.
> zap()
> data(Planning)
> des(Planning)
Note that all of the variable names are in upper case. To convert them to lower case simply type the following command.
> names(Planning) <- tolower(names(Planning))
> use(Planning)
> summ()
No. of observations = 251
   Var. name  Obs.  mean    median  s.d.    min.  max.
1  id         251   126     126     72.6    1     251
2  age        251   27.41   27      4.77    18    41
3  relig      251   1.14    1       0.59    1     9
4  ped        251   3.83    3       2.32    0     9
5  income     251   2.84    2       2.38    1     9
6  am         251   20.66   20      5.83    15    99
7  reason     251   1.55    1       0.86    1     9
8  bps        251   137.74  110     146.84  0     999
9  bpd        251   97.58   70      153.36  0     999
10 wt         251   52.85   51.9    11.09   0     99.9
11 ht         251   171.49  154     121.82  0     999
Identifying duplicated IDs
Let's look more closely at the 'id' variable. This variable represents the unique identification number of each record.
(Figure: sorted dot plot of 'id' produced by summ; the values run from 1 up to a maximum of 251.)
The graph looks quite uniformly distributed. However, the mean of id (125.996) is not equal to what it should be.
> mean(1:251) [1] 126
There must be some duplication and/or some gaps within these id numbers. Looking carefully at the graph, there is no noticeable irregularity. To check for duplication, we can type the following:
> any(duplicated(id)) [1] TRUE
The result tells us that there is in fact at least one duplicated id. To specify the id of the duplicates type:
> id[duplicated(id)] [1] 215
We see that id = 215 has one duplicate. The record numbers are 215 and 216. These two records should be investigated as to which one is incorrect. One of them should be changed to id = '216'. Graphically, one can put the spotlight on the area around id=215.
> suspected.id.range <- id[210:220]
> summ(suspected.id.range)
Valid obs.  mean     median  s.d.  min.  max.
11          214.909  215     3.3   210   220
(Figure: Distribution of suspected.id.range — sorted dot plot of the values from 210 to 220.)
Missing values
This file is not ready for analysis yet. As is often the case, the data were coded using outlier numbers to represent missing codes. We first explore the data with boxplots.
> boxplot(.data, horizontal=T, las=1, main="Family Planning Clinic")
(Figure: "Family Planning Clinic" — horizontal box plots of all variables in '.data'; the scale runs to about 1000.)
The outlier values of 'bps', 'bpd' and 'ht' are rather obvious. These are confirmed with the numerical statistics from the 'summ' command seen earlier in this chapter. In this dataset, the value '9' represents a missing code for religion (3rd variable), patient education (4th variable), income group (5th variable) and reason for family planning (7th variable). There are four methods of changing values to missing (NA). The first method is based on the function 'replace', which handles one vector or variable at a time. The second uses extraction and indexing with subscript '[ ]'. This method can handle either a vector or array (several variables at the same time). The third method is based on the 'transform' command. These three methods use commands that are native to R. The fourth method uses the 'recode' command from Epicalc. It is the simplest method but the data frame must be in the style of Epicalc, i.e. '.data'. We will use the 'replace' function for the 3rd variable, 'relig', extraction and indexing for the 4th to 7th variables, 'ped', 'am', 'income' and 'reason', 'transform' for the 'wt' variable, and finally 'recode' for the remaining necessary variables.
We wish to replace all occurrences of 9 with the missing value 'NA'. The replace function handles only one variable at a time.
> replace(.data$relig, .data$relig==9, NA) -> .data$relig
There are three essential arguments to the 'replace' function; the vector, the index vector and the value. See the online help for more detailed information on usage. The first argument, 'relig', is the target vector containing values to be replaced. The second argument, 'relig==9', is the index vector specifying the condition, in this case, whenever 'relig' is equal to 9. The final argument, 'NA', is the new value that will replace the old value of 9. Thus, whenever 'relig' is equal to 9, it will be replaced with 'NA'. Note that the index vector, or condition for change, need not be the same vector as the target vector. For example, one may want to coerce the value of diastolic blood pressure to be missing if the systolic blood pressure is missing. Secondly, 'replace' is a function, not a command. It has no effect on the original values. The values obtained from this function must be assigned to the original values using the assignment operators, '->' or '<-'. Right now, the variable has changed.
(Part of the summ output for 'relig': s.d. 0.31, min. 1, max. 2.)
There was one subject with a missing value leaving 250 records for statistical calculations. The remaining subjects have values of one and two only for 'religion'.
The value 99 represents a missing value code during data entry. Note that the mean, median and standard deviation are not correct due to this coding of missing values. Instead of using the previous method, the alternative is:
> .data$am[.data$am==99] <- NA
With the same three components of the target vector, conditions and replacing value, this latter command is slightly more straightforward than the above one using the 'replace' function. This method can also be used for many variables with the same missing code. For example, the 4th, 5th and 7th variables all use the value 9 as the code for a missing value.
> .data[,c(4,5,7)][.data[,c(4,5,7)]==9] <- NA
All the 4th, 5th, and 7th variables of '.data' that have a value of 9 are replaced with 'NA'. The above command can be explained as follows. There are two layers of subsets of '.data' marked by '[ ]'. .data[,c(4,5,7)] means extract all rows of columns 4, 5 and 7, ('ped', 'income' and 'reason').
'[.data[,c(4,5,7)]==9]'means the subset of each particular column where
the row value is equal to 9. '<- NA' means the left-hand side of the arrow is to be assigned a missing value (NA). Thus, for these three variables, any element whose value equals 9 will be replaced by 'NA'.
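The third method uses 'transform'. The original command is not reproduced on this page; a minimal sketch, assuming any 'wt' value above 99 is a missing-value code, would be:
> .data <- transform(.data, wt = ifelse(wt > 99, NA, wt))   # replace implausible weights with NA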
The expression inside the function tells R to replace values of 'wt' that are greater than 99 with the NA value. The resulting object is saved into the data frame. Now check the 'wt' variable inside the data frame.
> summ(.data$wt)
Valid obs.  mean    median  s.d.  min.  max.
246         51.895  51.45   8.91  0     73.8
Note the two outliers on the left-hand side of the graph. Similar to the results of previous methods, 'transform' did not change the 'wt' variable inside the data frame in the search path.
> summ(wt)
Valid obs.  mean    median  s.d.   min.  max.
251         52.851  51.9    11.09  0     99.9
Note that the 'transform'ed data frame does not carry the variable labels or descriptions with it. The new '.data' will have all variable descriptions removed. So this method reduces the power of Epicalc.
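The fourth method uses Epicalc's 'recode'. The original command for 'bps' is not shown on this page; a minimal sketch, assuming 999 is its missing-value code (as the earlier summary suggested), would be:
> recode(bps, 999, NA)   # same usage as the recode commands shown below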
Notice that the variable 'bps' has been changed. In fact, 'recode' has automatically detached the old data frame and attached to the new one, as shown below.
> summ(bps)
Valid obs.  mean     median  s.d.   min.  max.
244         113.033  110     14.22  0     170
Variable 'bps' in '.data' and that in the search path have been synchronised. The number of valid records is reduced to 244 and the maximum is now 170 not 999. This automatic updating has also affected other variables in the search path that we changed before.
> summ(am)
Valid obs.  mean    median  s.d.  min.  max.
250         20.344  20      3.06  15    31
When the variable 'am' is used as the argument of 'summ', the program looks for an independent object called 'am', which does not exist. It then looks in the search
path. Since the data frame in the search path ('search()[2]') has been updated with the new '.data', the variable 'am' that is used now is the updated one which has been changed from the command in the preceding section. The command 'recode' makes variable manipulation simpler than the above three standard R methods. The command 'recode' can be further simplified:
> recode(bpd, 999, NA)
> recode(ht, 999, NA)
> summ()
All the maxima have been corrected but the minima of 0 are also missing values for the last four variables plus 'ped'. We can use 'recode' to turn all the zeros into missing values in one step.
> recode(c(ped, bps, bpd, wt, ht), 0, NA)
> summ()
No. of observations = 251
   Var. name  Obs.  mean   median  s.d.   min.  max.
============ variables #1, #2, #3 omitted =========
4  ped        226   3.3    2       1.66   2     7
============ variables #5, #6, #7 omitted =========
8  bps        243   113.5  110     12.25  90    170
9  bpd        243   72.02  70      9.9    60    110
10 wt         245   52.11  51.5    8.28   16    73.8
11 ht         245   155.3  153     28.08  141   585
The minimum weight of 16kg and the maximum height of 585cm are dubious and in fact should not be accepted. Any weight below 30kg and any height above 200cm should also be treated as missing (unless there are very good reasons to leave them as is). A scatter plot is also useful here.
> plot(wt, ht, pch=19)
(Figure: scatter plot of ht against wt; one extreme point stands out in the top left corner.)
The outlier is clearly seen (top left corner). To correct these errors type:
> recode(wt, wt < 30, NA)
> recode(ht, ht > 200, NA)
> summ()
It should be noted that after cleaning, the effective sample size is somewhat less than the original value of 251. The box plot of all variables now has a different appearance.
> boxplot(.data, horizontal=T, las=1, main="Family Planning Clinic")
(Figure: "Family Planning Clinic" — horizontal box plots of all variables after cleaning; the scale now runs only to about 250.)
Then, an appropriate label or description for each variable can be created one at a time.
At this stage, checking description of the dataset will reveal the description of the first variable.
> des()
No. of observations = 251
  Variable  Class    Description
1 id        numeric  Id code
2 age       numeric
3 relig     numeric
========= subsequent lines omitted ==========
The first line of the output guesses that the parental dataset is '.data'. This is based on the fact that '.data' has a variable with this name with equal length (251). Now let's complete all other variable labels.
> label.var(age, "age")
> label.var(relig, "religion")
> label.var(ped, "education")
> label.var(income, "monthly income")
> label.var(am, "age(yr) 1st marriage")
> label.var(reason, "reason for fam. plan.")
> label.var(bps, "systolic BP")
> label.var(bpd, "diastolic BP")
> label.var(wt, "weight (kg)")
> label.var(ht, "height (cm)")
> des()
No. of observations = 251
   Variable  Class    Description
1  id        numeric  ID code
2  age       numeric  age
3  relig     numeric  religion
4  ped       numeric  education
5  income    numeric  monthly income
6  am        numeric  age(yr) 1st marriage
7  reason    numeric  reason for fam. plan.
8  bps       numeric  systolic BP
9  bpd       numeric  diastolic BP
10 wt        numeric  weight (kg)
11 ht        numeric  height (cm)
It is advised to keep each label short since it will be frequently used in the process of automatic graphical display and tabulation.
(Part of the summ output for 'ped': median 2, s.d. 1.66, min. 2, max. 7.)
Note that there is no count for category 1 of 'ped'. According to the coding scheme: 1 = no education, 2 = primary school, 3 = secondary school, 4 = high school, 5 = vocational school, 6 = bachelor degree, 7 = other. The data are numeric and therefore need to be converted into a factor. The labels can be put into a list of 7 elements.
> label.ped <- list(None="1", Primary="2", "Secondary school"="3", "High school"="4", Vocational="5", "Bachelor degree"="6", Others="7")
Each label needs to be enclosed in double quotes if it contains a space. The quote is optional otherwise. For example, one can have: None="1" or "None"="1". To convert a numeric vector to a categorical one use the 'factor' function.
> educ <- factor (ped, exclude = NULL)
The new variable is a result of factoring the values of 'ped' in the search path. The argument 'exclude' is set to 'NULL' indicating no category (even missing or 'NA') will be excluded in the factoring process.
> summary(educ)
   2    3    4    5    6    7 <NA> 
 117   31   20   26   16   16   25 
We can check the labels of a factor object using the levels command.
> levels(educ)
[1] "2" "3" "4" "5" "6" "7" NA
There are seven levels in total: six known levels, ranging from "2" to "7", and one missing level (NA). Note that these numbers are actually characters or group names. There was no "1" in the data, and it is correspondingly omitted from the levels. The 'levels' for the codes should be changed to meaningful words, as defined previously.
> levels(educ) <- label.ped
> levels(educ)
[1] "None"             "Primary"          "Secondary school"
[4] "High school"      "Vocational"       "Bachelor degree" 
[7] "Others"          
To incorporate a new variable derived from the data frame '.data', simply label the variable name as follows.
> label.var(educ, "education")
Then recheck.
> des() No. of observations =251 Variable Class Description 1 id numeric ID code ============ Variables # 2 to 11 omitted ======= 12 educ factor education
For a variable outside '.data', the command 'label.var' actually accomplishes five tasks:
1. The new variable is incorporated into the data frame '.data'.
2. The new variable is labelled with a description.
3. The old data frame is detached.
4. The old 'free' variable outside the data frame is removed, unless the argument 'pack=FALSE' is specified.
5. The new data frame is attached to the search path.
(Figure: Distribution of education — horizontal bar chart of counts: Primary 117, Secondary school 31, Vocational 26, NA's 25, High school 20, Bachelor degree 16, Others 16, None 0.)
The table and the graph show that most subjects had only primary education. A horizontal bar chart is produced when the number of groups exceeds 6 and the longest label of the group has more than 8 characters. The tabulation can also be sorted.
> tab1(educ, sort.group = "decreasing")
educ : education
                  Frequency  %(NA+)  %(NA-)
Primary                 117    46.6    51.8
Secondary school         31    12.4    13.7
Vocational               26    10.4    11.5
NA's                     25    10.0     0.0
High school              20     8.0     8.8
Bachelor degree          16     6.4     7.1
Others                   16     6.4     7.1
None                      0     0.0     0.0
  Total                 251   100.0   100.0
(Figure: Distribution of education — the same bar chart sorted in decreasing order of frequency.)
A sorted table and bar chart are easier to read and view when the categories have no natural order. However, education level is partially ordered in nature, so the non-sorted chart may be better.
Collapsing categories
Sometimes a categorical variable may have too many levels. The analyst may want to combine two or more categories together into one. For example, vocational and bachelor degree, which are the 5th and the 6th levels could be combined into one level called 'tertiary'. We can do this by creating a new variable, which is then incorporated into '.data' at the end.
> ped2 <- educ
> levels(ped2)[5:6] <- "Tertiary"
> label.var(ped2, "level of education")
> des()
> tab1(ped2)
ped2 : level of education
                  Frequency  %(NA+)  %(NA-)
None                      0     0.0     0.0
Primary                 117    46.6    51.8
Secondary school         31    12.4    13.7
High school              20     8.0     8.8
Tertiary                 42    16.7    18.6
Others                   16     6.4     7.1
NA's                     25    10.0     0.0
  Total                 251   100.0   100.0
The two categories have been combined into one giving 42 subjects having a tertiary level of education.
Conclusion
In this chapter, we have looked at a dataset that required a lot of data cleaning. In real practice, it is very important to have preventive measures to minimise any errors during data collection and data entry. For example, a range-check constraint is necessary in data entry. Missing values are better entered with the missing codes specific to the software; in EpiInfo, Stata and SPSS these are period marks '.' or simply left blank. One of the best ways of entering data is to use the EpiData software, which can set legal ranges and several other logical checks, as well as label the variables and values, in an easy way. If this had been properly done, then the difficult commands used in this chapter would not have been necessary. In the remaining chapters, we will use datasets which have been properly entered, treated for missing values and properly labelled.
Whenever a variable is modified, it is good practice to update the variable inside the attached data frame with the one outside. The best way to edit data is to use 'recode', which is a powerful command of Epicalc. It can work with one variable, with a number of variables sharing the same recoding scheme, or with a variable or variables recoded under a condition. Finally, the best way to update the data frame with a new or modified variable is to use 'label.var'. This command not only labels the variable for further use but also updates and incorporates the data frame with the variable outside. Attachment to the new data frame is automatic, making data manipulation in R smoother and simpler. There are many other more advanced data management functions in R that are not covered in this chapter. These include aggregate, reshape and merge, and readers are encouraged to explore these very useful and powerful commands on their own.
Linear regression involves modelling a continuous outcome variable with one or more explanatory variables. With all data analysis the first step is always to explore the data. In this case, scatter plots are very useful in determining whether or not the relationships between the variables are linear.
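The commands that load the hookworm blood loss dataset are not reproduced on this page; a minimal sketch, assuming the same Suwit dataset that is reloaded in the glm chapter later, would be:
> zap()
> data(Suwit)
> use(Suwit)
> summ()   # the usual initial exploration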
The file is clean and ready for analysis. With this small sample size it is somewhat straightforward to verify that there is no repetition of 'id' and no missing values. The
records have been sorted in ascending order of 'worm' (number of worms) ranging from 32 in the first subject to 1,929 in the last one. Blood loss ('bloss') is however, not sorted. The 13th record has the highest blood loss of 86 ml per day, which is very high. The objective of this analysis is to examine the relationship between these two variables.
Scatter plots
When there are two continuous variables cross plotting is the first necessary step.
> plot(worm, bloss)
The above command gives a simple scatter plot with the first variable on the horizontal axis and the second on the vertical axis.
(Figure: scatter plot of bloss against worm.)
The names of the variables are used for the axis labels, and there is no title. The axis labels can be modified and a title added by supplying extra arguments to the plot function, as follows:
> plot(worm, bloss, xlab="No. of worms", ylab="ml. per day", main = "Blood loss by number of hookworms in the bowel")
(Figure: the same scatter plot with the title "Blood loss by number of hookworms in the bowel" and axis labels "No. of worms" and "ml. per day".)
For a small sample size, putting the identification of each dot can improve the information conveyed in the graph.
> plot(worm, bloss, xlab="No. of worms", ylab="ml. per day",
    main="Blood loss by number of hookworms in the bowel", type="n")
The above command produces an empty plot, because the argument type="n" suppresses drawing of the points. (Alternatively, the argument 'pch', which stands for point character or plot symbol and defaults to 1, a round hollow circle, can be set to a blank space to achieve the same effect.) This is to set a proper frame for further points and lines. The variable 'id' can be used as the text to write at the coordinates using the 'text' command.
> text(worm, bloss, labels=id)
(Figure: the scatter plot with each point labelled by its subject 'id' number.)
In order to draw a regression line, a linear model using the above two variables should be set up.
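The model-fitting command itself is not shown on this page; the formula implied by the 'Call' in the summary below is:
> lm1 <- lm(bloss ~ worm)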
The model 'lm1' is created. Displaying the model by typing 'lm1' gives limited information. To get more information, one can look at the attributes of this model, its summary and attributes of its summary.
> attr(lm1, "names") [1] "coefficients" [4] "rank" [7] "qr" [10] "call" "residuals" "fitted.values" "df.residual" "terms" "effects" "assign" "xlevels" "model"
There are 12 attributes. Most of them can be displayed with the 'summary' function.
> summary(lm1)
Call:
lm(formula = bloss ~ worm)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.8461 -10.8118   0.7502   4.3562  34.3896 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 10.847327   5.308569   2.043   0.0618 .  
worm         0.040922   0.007147   5.725 6.99e-05 ***
---
Residual standard error: 13.74 on 13 degrees of freedom
Multiple R-Squared: 0.716,     Adjusted R-squared: 0.6942 
F-statistic: 32.78 on 1 and 13 DF,  p-value: 6.99e-05 
The first section of the summary shows the formula that was 'called'. The second section gives the distribution of residuals. The pattern is clearly not symmetric: the maximum is too far to the right (34.38) compared to the minimum (-15.84), and the first quartile (-10.81) is further to the left of the median (0.75) than the third quartile (4.35) is to the right. Otherwise, the median is close to zero. The third section gives the coefficients of the intercept and the effect of 'worm' on blood loss. The intercept is 10.8, meaning that when there are no worms, the blood loss is estimated to be 10.8 ml per day. This is, however, not significantly different from zero, as the P value is 0.0618. The coefficient of 'worm' is 0.04, indicating that each worm will cause an average of 0.04 ml of blood loss per day. Although the value is small, it is highly significantly different from zero. When there are many worms, the level of blood loss can be very substantial. The multiple R-squared value of 0.716 indicates that 71.6% of the variation in the data is explained by the model. The adjusted value is 0.6942. (The calculation of R-squared is discussed in the analysis of variance section below.) The last section describes more details of the residuals and hypothesis testing on the effect of 'worm' using the F-statistic. The P value from this section (6.99 x 10^-5) is equal to that tested by the t-distribution in the coefficient section. This F-test more commonly appears in the analysis of variance table.
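The analysis of variance table itself is not reproduced on this page; a minimal sketch of one way to obtain it from the fitted model is:
> anova(lm1)   # degrees of freedom, sums of squares and mean squares for worm and residuals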
The above analysis of variance (aov) table breaks down the degrees of freedom, sum of squares and mean square of the outcome (blood loss) by source (in this case there are only two: worm and residuals). The so-called sum of squares is actually the sum of the squared differences between each value and the mean. The total sum of squares of blood loss is therefore:
> sum((bloss-mean(bloss))^2) [1] 8647.044
The sum of squares of worm or sum of squares of difference between the fitted values and the grand mean is:
> sum((fitted(lm1)-mean(bloss))^2) [1] 6191.577
The latter two sums add up to the first one. The R-squared is the proportion of sum of squares of the fitted values from the total sum of squares.
> 6191.577/8647.044 [1] 0.716034
This value of R-squared can also be said to be the percent of reduction of total sum of squares when the explanatory variable is fitted. Thus the number of worms can reduce or explain the variation by about 72%. Instead of sum of squares, one may consider the mean square as the level of variation. In such a case, the number of worms can reduce the total mean square (or variance) by: (total mean square - residual mean square) / total mean square, or (variance - residual mean square) / variance.
> (var(bloss)-188.9)/var(bloss) [1] 0.6941614
F-test
When the mean square of 'worm' is divided by the mean square of residuals, the result is:
> 6191.6/188.9 [1] 32.77713
Using this F value with the two corresponding degrees of freedom (from 'worm' and residuals) the P value for testing the effect of 'worm' can be computed.
> pf(32.78, df1=1, df2=13, lower.tail=FALSE) [1] 6.990513e-05
The function 'pf' is used to compute a P value from a given F value together with the two values of the degrees of freedom. The last argument, 'lower.tail', is set to FALSE to obtain the upper (right) tail area under the curve of the F distribution. In summary, both the regression and the analysis of variance give the same conclusion: the number of worms has a significant linear relationship with blood loss. Now the regression line can be drawn.
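The drawing command is not shown here; a minimal sketch, assuming the line is added to the existing scatter plot from the fitted model, would be:
> abline(lm1)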
The regression line has an intercept of 10.8 and a slope of 0.04. The expected value is the value of blood loss estimated from the regression line with a specific value of 'worm'.
> points(worm, fitted(lm1), pch=18, col="blue")
A residual is the difference between the observed and expected value. The residuals can be drawn by the following command.
> segments(worm, bloss, worm, fitted(lm1), col="pink")
(Figure: "Blood loss by number of hookworms in the bowel" — the labelled scatter plot with fitted values shown as blue diamonds and residuals as pink segments.)
The actual values of the residuals can be checked from the specific attribute of the defined linear model.
> residuals (lm1)
Note that some residuals are positive and some are negative. The 13th residual has the largest value (furthest from the fitted line). The sum of the residuals and the sum of their squares can be checked.
> sum(residuals(lm1)); sum(residuals(lm1)^2)
[1] 3.996803e-15
[1] 2455.468
The sum of residuals is close to zero whereas the sum of their squares is the value previously displayed in the summary of the model. The distribution of residuals, if the model fits well, should be normal. A common sense approach is to look at the
histogram.
> hist(residuals (lm1))
Epicalc combines the three commands and adds the p-value of the test to the graph.
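That combined Epicalc command (used again in a later chapter) would be along these lines:
> shapiro.qqnorm(residuals(lm1))   # Q-Q plot with the Shapiro-Wilk P value added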
> qqnorm(residuals (lm1)) -> a
The coordinates X and Y are 'a$x' and 'a$y'. If the residuals were perfectly normally distributed, the text symbols would have formed along the straight dotted line. The graph suggests that the largest residual (13th) is too high (positive) whereas the smallest value (7th) is not large enough (negative). However, the P value from the Shapiro-Wilk test is 0.08 suggesting that the possibility of residuals being normally distributed cannot be rejected.
(Figure: normal Q-Q plot of the residuals, Sample Quantiles against Theoretical Quantiles, with subject numbers as plotting symbols.)
Finally, the residuals are plotted against the fitted values to see if there is a pattern.
> plot(fitted(lm1), residuals(lm1), xlab="Fitted values")
> plot(fitted(lm1), residuals(lm1), xlab="Fitted values", type="n")
> text(fitted(lm1), residuals(lm1), labels=as.character(id))
> abline(h=0, col="blue")
(Figure: residuals plotted against fitted values, labelled by subject number, with a horizontal blue line at zero.)
There is no obvious pattern. The residuals are quite independent of the expected
values. With this and the above findings from the 'qqnorm' command we may conclude that the residuals are randomly and normally distributed. Some of the above plots for the model 'lm1' can also be obtained from:
> par(mfrow=c(2,2))
> plot(lm1)
Final conclusion
From the analysis, it is clear that blood loss is associated with the number of hookworms. On average, each worm may cause 0.04 ml of blood loss. The remaining uncertainty of blood loss, apart from hookworm, is explained by random variation or other factors that were not measured.
Exercise
The dataset SO2 is stored in fixed field format. Read in the data and label the variables using the following commands.
> label.var(smoke, "Smoke (mg/cu.m.)")
> label.var(SO2, "SO2 (ppm.)")
Using scatter plots and linear regression, check whether smoke or SO2 has more influence on the logarithm of deaths. Interpret the results of the best simple linear regression.
Datasets usually contain many variables collected during a study. It is often useful to see the relationship between two variables within the different levels of another third, categorical variable.
cross-sectional survey on BP & risk factors
No. of observations = 100
  Variable   Class    Description
1 id         integer  id
2 sex        factor   sex
3 sbp        integer  Systolic BP
4 dbp        integer  Diastolic BP
5 saltadd    factor   Salt added on table
6 birthdate  Date

> summ()
cross-sectional survey on BP & risk factors
No. of observations = 100
  Var. name  Obs.  mean        median      s.d.   min.
1 id         100   50.5        50.5        29.01  1
2 sex        100   1.55        2           0.5    1
3 sbp        100   154.34      148         39.3   80
4 dbp        100   98.51       96          22.74  55
5 saltadd    80    1.538       2           0.502  1
6 birthdate  100   1952-10-11  1951-11-17  <NA>   1930-11-14

Note that the maximum systolic and diastolic blood pressures are quite high. There are 20 missing values in 'saltadd'. The frequencies of the categorical variables 'sex' and 'saltadd' can be examined with one-way tabulation.
The next step is to create a new age variable from birthdate. The calculation is based on 12th March 2001, the date of the survey.
> age.in.days <- as.Date("2001-03-12") - birthdate
There is a leap year in every four years. Therefore, an average year will have 365.25 days.
> class(age.in.days) [1] "difftime" > age <- as.numeric(age.in.days)/365.25
The function 'as.numeric' is needed to transform the units of age (difftime); otherwise modelling would not be possible.
> summ(sbp, by = saltadd)
For saltadd = no
Obs.  mean   median
37    137.5  132
For saltadd = yes
Obs.  mean   median
43    163    171
For saltadd = missing
Obs.  mean   median
20    166.9  180
(Figure: systolic blood pressure plotted by saltadd group: no, yes and missing.)
Since there is not enough evidence that the missing group is important and for additional reasons of simplicity, we will ignore this group and continue the analysis with the original 'saltadd' variable consisting of only two levels. Before doing this however, a simple regression model and regression line are first fitted.
> lm1 <- lm(sbp ~ age)
> summary(lm1)
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  65.1465    14.8942   4.374 3.05e-05
age           1.8422     0.2997   6.147 1.71e-08
Residual standard error: 33.56 on 98 degrees of freedom
Multiple R-Squared: 0.2782,     Adjusted R-squared: 0.2709 
F-statistic: 37.78 on 1 and 98 DF,  p-value: 1.712e-08
Although the R-squared is not high, the P value is small, indicating an important influence of age on systolic blood pressure.
(Figure: "Systolic BP by age" — scatter plot of systolic blood pressure (mm.Hg) against age (Years) with the fitted regression line.)
> plot(age, sbp, main = "Systolic BP by age", xlab = "Years", ylab = "mm.Hg") > abline(lm1)
Subsequent exploration of residuals suggests a non-significant deviation from normality and no pattern. Details of this can be adopted from the techniques discussed in the previous chapter and are omitted here. The next step is to provide different plot patterns for different groups of salt habits.
> lm2 <- lm(sbp ~ age + saltadd)
> summary(lm2)
====================
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  63.1291    15.7645   4.005 0.000142
age           1.5526     0.3118   4.979 3.81e-06
saltaddyes   22.9094     6.9340   3.304 0.001448
---
Residual standard error: 30.83 on 77 degrees of freedom
Multiple R-Squared: 0.3331,     Adjusted R-squared: 0.3158 
F-statistic: 19.23 on 2 and 77 DF,  p-value: 1.68e-07
On average, a one year increment of age increases systolic blood pressure by 1.5 mmHg. Adding table salt increases systolic blood pressure significantly, by approximately 23 mmHg. Similar to the method used in the previous chapter, the following step creates an empty frame for the plots:
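The exact call is not reproduced here; a minimal sketch, assuming the same title and axis labels as the earlier plot, would be:
> plot(age, sbp, main="Systolic BP by age", xlab="Years", ylab="mm.Hg", type="n")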
Add blue hollow circles for subjects who did not add table salt.
> points(age[saltadd=="no"], sbp[saltadd=="no"], col="blue")
Then add red solid points for those who did add table salt.
> points(age[saltadd=="yes"], sbp[saltadd=="yes"], col="red", pch = 18)
Note that the red dots corresponding to those who added table salt are higher than the blue circles. The final task is to draw two separate regression lines for each group. The command 'abline()' has the argument 'a' as intercept and 'b' as slope. Since we will have two lines, the 'a' will be 'a0' and 'a1' whereas there is only one 'b'.
> coef(lm2)
(Intercept)         age  saltaddyes 
  63.129112    1.552615   22.909449 
For the intercept of salt non-users, the third coefficient is multiplied by zero. Since 'age' is 0 the second coefficient is also multiplied by zero. Thus the intercept for this group is:
> a0 <- coef(lm2)[1]
For the salt users, the intercept is added with the third coefficient:
> a1 <- coef(lm2)[1] + coef(lm2)[3]
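The common slope 'b' used in the next two commands is not shown being extracted in the text; presumably it is the coefficient of age, for example:
> b <- coef(lm2)[2]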
Now the first (lower) regression line is drawn in blue, then the other in red.
> abline(a = a0, b, col = "blue") > abline(a = a1, b, col = "red")
Note that X-axis does not start at zero. Thus the intercepts are out of the plot frame. The red line is for the red points of salt adders and the blue line is for the blue points of non-adders. In this model, age has a constant independent effect on systolic blood pressure. Look at the distributions of the points of the two colours; the red points are higher than the blue ones but mainly on the right half of the graph. To fit lines with different slopes, a new model with interaction term is created.
(Figure: "Systolic BP by age" — the scatter plot with two parallel regression lines, blue for those who did not add salt and red for those who did.)
The two lines can first be overwritten with white colour to return to only graph with points. Note that any dots that fall on the lines will be overwritten by the lines.
> abline(a = a0, b, col = "white") > abline(a = a1, b, col = "white")
Alternatively, the two points can be created from the procedures described above. Redraw the graph but this time with black representing the non-salt adders.
> plot(age, sbp, main="Systolic BP by age", xlab="Years", ylab="mm.Hg", pch=18, col=as.numeric(saltadd))
The next step is to prepare a model with different slopes (or different 'b' for the 'abline' arguments) for different lines. The model needs an interaction term between 'addsalt' and 'age'.
> lm3 <- lm(sbp ~ age * saltadd)
> summary(lm3)
Call:
lm(formula = sbp ~ age * saltadd)
===============
Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)      78.0066    20.3981   3.824 0.000267 ***
age               1.2419     0.4128   3.009 0.003558 ** 
saltaddyes      -12.2540    31.4574  -0.390 0.697965    
age:saltaddyes    0.7199     0.6282   1.146 0.255441    
---
Multiple R-Squared: 0.3445,     Adjusted R-squared: 0.3186 
F-statistic: 13.31 on 3 and 76 DF,  p-value: 4.528e-07
In the formula part of the model, 'age * saltadd' is the same as 'age + saltadd +
age:saltadd'. The four coefficients are displayed in the summary of the model. They can also be checked as follows.
> coef(lm3)
   (Intercept)            age     saltaddyes age:saltaddyes 
    78.0065572      1.2418547    -12.2539696      0.7198851 
The first coefficient is the intercept of the regression line among the non-salt users. For the intercept of the salt users, the second and fourth terms are zero (since age is zero), but the third term should be kept as it is. This term is negative, so the intercept of the salt users is lower than that of the non-users.
> a0 <- coef(lm3)[1] > a1 <- coef(lm3)[1] + coef(lm3)[3]
For the slope of the non-salt users group, the second coefficient alone is enough since the first and the third are not involved with each unit of increment of age and the fourth term has 'saltadd' being zero. On the other hand, the slope for the salt users group includes the second and the fourth coefficients since saltaddyes is one.
> b0 <- coef(lm3)[2] > b1 <- coef(lm3)[2] + coef(lm3)[4]
(Figure: systolic blood pressure (mm.Hg) against age (Years) with separate regression lines for the two salt groups.)
This model suggests that at a young age the systolic blood pressures of the two groups are not much different, as the two lines are close together on the left of the plot. For
example, at the age of 25, the difference is 5.7mmHg. Increasing age increases the difference between the two groups. At 70 years of age, the difference is as great as 38mmHg. (For simplicity, the procedures for computation of these two levels of difference are skipped in these notes. Readers can find them in the command file specific to this chapter). In this aspect, age modifies the effect of adding table salt. On the other hand the slope of age is 1.24mmHg per year among those who did not add salt but becomes 1.24+0.72 = 1.96mmHg among the salt adders. Thus, salt adding modifies the effect of age. Interaction is a statistical term whereas effect modification is the equivalent epidemiological term. The coefficient of the interaction term 'age:saltaddyes' is not statistically significant. The two slopes just differ by chance.
Exercise
Plot the systolic and diastolic blood pressures of the subjects, using red colour for males and blue for females, as shown in the following figure.
(Figure: "Systolic and diastolic blood pressure of the subjects" — blood pressure plotted against subject index.)
Check whether there is any significant difference of diastolic blood pressure among males and females after adjustment for age.
(Figure: scatter plot of money carried against age, with subjects identified by their letter codes.)
To put the 'code' as text at the points, add a title and a regression line, type the following:
> text(age, money, labels = code)
> title("Relationship between age and money carried")
> lm1 <- lm(money ~ age)
> abline(lm1)
> summary(lm1)
============
Residual standard error: 1560 on 9 degrees of freedom
Multiple R-Squared: 0.0254,     Adjusted R-squared: -0.08285 
F-statistic: 0.2349 on 1 and 9 DF,  p-value: 0.6395
The R-squared is very small, indicating a poor fit. This is confirmed by the poor fit of the regression line in the previous graph. People around 40-60 years old tend to carry more money than those in other age groups. Checking the residuals reveals the following results.
> Residuals <- resid(lm1)
> Fitted.values <- fitted(lm1)
> opar <- par(mfrow=c(1,2))
> shapiro.qqnorm(Residuals)
> plot(Fitted.values, Residuals)
> abline(h=0)
(Figure: Normal Q-Q plot of Residuals, with the Shapiro-Wilk test P value < .001, alongside a plot of Residuals against Fitted.values.)
From the above plots the residuals are not normally distributed. The values of the high residuals are in the middle of the range of the fitted values. Usually variation in money is in an exponential scale. Taking logarithms may help.
> plot(age, money, pch = " ", main = "Relationship between age and money carried", log = "y") > text(age, money, labels = code) > lm2 <- lm(log10(money) ~ age) > abline(lm2)
(Figure: money carried against age with the y-axis on a log scale, points labelled by letter codes, and the fitted line from lm2.)
With the log scale of the y-axis, the distribution of the relationship tends to be curvilinear. Drawing a straight regression line through these points is thus not appropriate. To fit a regression line under the log scale but with a linear (non-log scale) value would be too complicated. A better way would be to transform 'money' into a new variable on a log10 scale and fit a new model with a quadratic term of age.
> logmoney <- log10(money)
> age2 <- age^2   # age2 is a quadratic term
> lm4 <- lm(logmoney ~ age + age2)
> summary(lm4)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.1026501  0.3385024   0.303 0.769437
age          0.1253546  0.0176409   7.106 0.000101
age2        -0.0012677  0.0002011  -6.304 0.000232
Residual standard error: 0.3317 on 8 degrees of freedom
Multiple R-Squared: 0.8751,     Adjusted R-squared: 0.8439 
F-statistic: 28.04 on 2 and 8 DF,  p-value: 0.000243
Both the adjusted and non-adjusted R-squared values are high. Adding the age2 term improves the model substantially and is statistically significant. The next step is to fit a regression line, a task that is not straightforward. A regression line is a line joining fitted values. There are too few points of fitted values in the model. A new data frame is now created to include a new 'age' variable ranging from 6 to 80 (which is the age range of our subjects) and the corresponding age-squared term.
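The command that creates this data frame is not reproduced here; a minimal sketch, assuming it is the 'new' object passed to predict.lm below, would be:
> new <- data.frame(age = 6:80, age2 = (6:80)^2)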
Then the predicted values of this data frame are computed based on the last model.
> predict1 <- predict.lm(lm4, new)
> plot(age, logmoney, pch=" ", main="Relationship between age and money carried", ylab = "log10(money)")
> text(age, logmoney, labels = code)
> lines(new$age, predict1, col = "blue")
(Figure: "Relationship between age and money carried" — log10(money) against age, points labelled by letter codes, with the fitted quadratic curve in blue.)
However, more precise mathematical calculation from the coefficients can be obtained as follows:
> coef(lm4)
 (Intercept)          age         age2 
 0.102650130  0.125354559 -0.001267718 
> a <- coef(lm4)[3]
> b <- coef(lm4)[2]
> c <- coef(lm4)[1]
> x <- -b/(2*a); x   # 49.44104
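The value 'y' used in a later command (the predicted maximum of log10(money)) is not shown being computed in the text; a sketch consistent with the figures quoted below is:
> y <- a*x^2 + b*x + c   # about 3.20
> 10^y                   # about 1,590 baht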
The conclusion from the model is that at the age of 49 years, an average person will carry approximately 1,590 baht. This amount is lower than the actual value of money carried by "E", which is 5,000 baht or more than three times higher.
> 10^(logmoney[code=="E"]-y) # 3.144053
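The model 'lm5' discussed next is not shown being fitted on this page; presumably it adds sex to the quadratic model, for example:
> lm5 <- lm(logmoney ~ age + age2 + sex)
> summary(lm5)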
The model 'lm5' gives a slightly higher R-squared than that from 'lm4'. Sex ("M" compared with "F") is not significant. We use this model for a plotting exercise.
> plot(age, logmoney, pch = " ", main = "Relationship between age and money carried", ylab = "log10(money)")
> text(age, logmoney, labels = code, col = unclass(sex))
Note that the first line is the same as in the previous plots. The second line, however, differentiates sex with colour. When 'sex', which is a factor, is unclassed, the values become the numerical order of the levels: "F" is coded 1 and "M" is coded 2.
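The two commands described in the next paragraph are not reproduced here; a minimal sketch, using the object names that appear in the 'lines' command below, would be:
> age.frame2.male <- data.frame(age = 6:80, age2 = (6:80)^2, sex = "M")   # sex confined to males
> predict2.male <- predict.lm(lm5, age.frame2.male)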
The first command creates a data frame containing variables used in 'lm5'. Note that the 'sex' here is confined to males. The second command creates a new vector based on 'lm5' and the new data frame. First we draw the line for males.
> lines(age.frame2.male$age, predict2.male, col = 2)
The red line is located consistently above the black line, since our model did not include an interaction term. For every value of age, males tend to carry 10^0.24, or 1.738, times more money than females. The difference is, however, not significant.
> agegr <- cut(age, breaks = c(0, 20, 60, 85), labels = c("Child", "Adult", "Elder"))
This method of cutting has already been explained in Chapter 2. Here, we put specific labels in place of the default bin names "(0,20]", "(20,60]" and "(60,85]". To illustrate the change of log(money) by age, a series of box plots are drawn with the statistical parameters stored in a new object 'a'.
> a <- boxplot(logmoney ~ agegr, varwidth = TRUE)
Then lines are drawn to join the median of log(money) of the age groups, which are in the third row of 'a'.
> lines(x = 1:3, y = a$stats[3, ], col = "red")
> title(main = "Distribution of log(money) by age group", ylab = "log(money)")
(Figure: "Distribution of log(money) by age group" — box plots of log(money) for the Child, Adult and Elder groups, with the medians joined by a red line.)
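The regression model described in the next paragraph (referred to later as 'lm6') is not shown being fitted on this page; a minimal sketch would be:
> lm6 <- lm(logmoney ~ agegr)
> summary(lm6)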
There are two age group parameters in the model: "Adult" and "Elder". The first level, "Child", is omitted since it is the referent level, meaning the other levels are compared to it. Adults carried 10^1.578, or approximately 38, times more money than children, which is statistically significant. Elders carried 10^0.8257 = 6.7 times more money than children, but this is not statistically significant. We can check the pattern of contrasts as follows:
> contrasts(agegr)
      Adult Elder
Child     0     0
Adult     1     0
Elder     0     1
The columns of the matrix are the variables appearing in the model. The rows show all the levels. The column 'Adult' in the model is equal to 1 when agegr is equal to "Adult" and zero otherwise. The column 'Elder' is 1 when 'agegr' is "Elder" and zero otherwise. There is no column of 'Child'. When both 'Adult' and 'Elder' are equal to zero, the model then predicts the value of 'agegr' being "Child". If "Adult" is required to be the referent level, the contrasts can be changed.
> contrasts(agegr) <- contr.treatment(levels (agegr), base=2)
Note that the coefficient of 'Child' is the negative of that of 'Adult' from model 'lm6'. Moreover, elderly persons did not carry significantly more money than adults.
Exercise
What will happen in 'lm4' if log2 is used instead of log10? Would the conclusion be the same?
From lm to glm
Linear modelling using the 'lm' function is based on the least squares method. The concept is to minimise the sum of squares of the residuals. Modelling with 'lm' is equivalent to analysis of variance, or 'aov'. The only difference is that the former focuses on the coefficients of the independent variables whereas the latter focuses on their sums of squares. Generalized linear modelling ('glm') is, as its name suggests, more general than just linear modelling. The method is based on the likelihood function. When the likelihood is maximised, the coefficients and variances (and subsequently standard errors) of the independent variables are obtained. While classical linear modelling assumes the outcome variable is defined on a continuous scale, such as blood loss in the previous examples (as well as assuming normality of errors and constant variance), 'glm' can handle outcomes that are expressed as proportions, Poisson distributed (counts) and others, such as those from the gamma and negative binomial distributions. We will first start with an outcome on a continuous scale, as in the previous example of blood loss and hookworm infection.
> zap()
> data(Suwit)
> use(Suwit)
> bloodloss.lm <- lm(bloss ~ worm)
> summary(bloodloss.lm)
The results are already shown in the previous chapter. Now we perform a generalised linear regression model using the function 'glm'. For glms the default family is the Gaussian distribution, and so this argument can be omitted.
> bloodloss.glm <- glm(bloss ~ worm)
> summary(bloodloss.glm)

Call:
glm(formula = bloss ~ worm)

Deviance Residuals:
     Min        1Q    Median        3Q       Max
-15.8461  -10.8118    0.7502    4.3562   34.3896
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.847327   5.308569   2.043   0.0618
worm         0.040922   0.007147   5.725 6.99e-05

(Dispersion parameter for gaussian family taken to be 188.8821)

    Null deviance: 8647.0  on 14  degrees of freedom
Residual deviance: 2455.5  on 13  degrees of freedom
AIC: 125.04
Using the same data frame and the same formula, i.e. 'bloss ~ worm', the results from 'lm' and 'glm' for residuals (called deviance residuals in 'glm'), coefficients and standard errors are the same. However, there are more attributes of the latter than the former.
Model attributes
> attributes(bloodloss.lm)
$names
 [1] "coefficients"  "residuals"     "effects"       "rank"
 [5] "fitted.values" "assign"        "qr"            "df.residual"
 [9] "xlevels"       "call"          "terms"         "model"

$class
[1] "lm"

> attributes(bloodloss.glm)
$names
 [1] "coefficients"      "residuals"         "fitted.values"
 [4] "effects"           "R"                 "rank"
 [7] "qr"                "family"            "linear.predictors"
[10] "deviance"          "aic"               "null.deviance"
[13] "iter"              "weights"           "prior.weights"
[16] "df.residual"       "df.null"           "y"
[19] "converged"         "boundary"          "model"
[22] "call"              "formula"           "terms"
[25] "data"              "offset"            "control"
[28] "method"            "contrasts"         "xlevels"

$class
[1] "glm" "lm"
Note that 'bloodloss.glm' also has class 'lm' in addition to its own class 'glm'. The two sets of attributes are similar, with more sub-elements for 'bloodloss.glm'. Sub-elements with the same names are essentially the same. In this setting, the 'deviance' from the glm is equal to the sum of squares of the residuals.
> sum(bloodloss.glm$residuals^2)
[1] 2455.468
> bloodloss.glm$deviance
[1] 2455.468
Similarly, the 'null.deviance' is equal to the total sum of squares, i.e. the sum of the squared differences of each individual blood loss from the mean blood loss.
> sum((bloss - mean(bloss))^2)
[1] 8647.044
> bloodloss.glm$null.deviance
[1] 8647.044
Some of the attributes of the 'glm' object are rarely used but some, such as 'aic', are very helpful. There will be further discussion on this in future chapters.
A large proportion of the elements of the two summary objects simply repeat those of the model objects. The additional attributes include the R squared in the 'lm' summary and the covariance matrix ('cov.unscaled') in both. This covariance matrix is used for calculation of the standard errors and 95% confidence intervals of the coefficients.
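For a quick comparison, the element names of the two summary objects can be listed; 'r.squared' appears only in the 'lm' summary while 'cov.unscaled' is present in both (the 'glm' summary additionally contains 'cov.scaled'):
> names(summary(bloodloss.lm))    # includes "r.squared" and "cov.unscaled"
> names(summary(bloodloss.glm))   # includes "cov.unscaled" and "cov.scaled"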
Covariance matrix
When there are two or more explanatory variables, and they are not independent, the collective variation is denoted as covariance (compared to variance for a single variable). It is stored as a symmetrical matrix since each variable can covary with each of the others. A covariance matrix can be 'scaled' or 'unscaled'. The summary of the 'lm' model gives only 'cov.unscaled' while the 'glm' summary gives both.
> vcov(bloodloss.glm)   # or summary(bloodloss.glm)$cov.scaled
             (Intercept)          worm
(Intercept) 28.18090491 -2.822006e-02
worm        -0.02822006  5.108629e-05
> summary(bloodloss.glm)$cov.unscaled
              (Intercept)          worm
(Intercept)  0.1491983716 -1.494057e-04
worm        -0.0001494057  2.704665e-07
The latter covariance matrix can also be obtained from the summary of the ordinary linear model.
> summary(bloodloss.lm)$cov.unscaled
The scaling factor is, in fact, the dispersion, or sigma squared, which is the sum of squares of residuals divided by degrees of freedom of the residual. Thus the first matrix can be obtained from
> summary(bloodloss.glm)$cov.unscaled * summary(bloodloss.glm)$dispersion
or
> summary(bloodloss.lm)$cov.unscaled * summary(bloodloss.lm)$sigma^2
or
> summary(bloodloss.lm)$cov.unscaled * sum(summary(bloodloss.lm)$residuals^2)/13
The scaled covariance matrix is used for computing the standard errors of the coefficients. Each diagonal term of this matrix (where the row and column names are the same) is the variance of the corresponding coefficient. Taking the square root of this term gives the standard error of the coefficient.
Subsequently, the 't value' can be computed from division of the coefficient by the standard error:
> coef(summary(bloodloss.glm))[2,1] / summary(bloodloss.glm)$cov.scaled[2,2]^.5 -> t2
> t2
or
> 0.04092205 / 0.007147467 # 5.7254
The P value is the probability that 't' takes this or a more extreme value. 'More extreme' can be in either direction (sign) of the t value. Therefore, the P value is computed from
> pt(q=t2, df=13, lower.tail=FALSE) * 2
[1] 6.9904e-05
This value is equal to that in the summary of the coefficients. More details on the computation of a probability from the t distribution can be found under 'help(TDist)' or 'help(pt)'. Finally, to compute the 95% confidence interval:
> beta2 <- coef(summary(bloodloss.glm))[2,1]; beta2
[1] 0.04092205
> se2 <- summary(bloodloss.glm)$cov.scaled[2,2]^.5   # standard error of 'worm', as computed above
> ci2 <- beta2 + qt(c(0.025, 0.975), 13)*se2; ci2
[1] 0.02548089 0.05636321
In fact, R has a command to compute the 95% confidence interval of the model as follows:
> confint(bloodloss.lm)
                 2.5 %     97.5 %
(Intercept) -0.621139  22.315793
worm         0.025481   0.056363
The results are the same but this is much faster. Note that the command 'confint(bloodloss.glm)' gives a slightly different confidence interval. This is because that function uses the normal distribution instead of the t distribution and is therefore not as appropriate here.
Modelling by 'lm' is equivalent to 'glm' with family being 'gaussian'. The link function is 'identity', which means that the outcome variable is not transformed. Other types of 'family' and 'link' will be demonstrated in subsequent chapters. Since the link function is 'identity', the 15 values of the linear predictors for this family of 'glm' are the same as the fitted values (of both the 'lm' and 'glm' models).
> all(fitted(bloodloss.glm) == predict(bloodloss.glm))
[1] TRUE
The 'glm' summarises the error using the 'deviance'. For this linear model, the deviance is equal to the sum of squares of the residuals, as shown above.
The interpretation of the error is the same as from the linear model; a larger deviance indicates a poorer fit. Generalized linear modelling employs numerical iterations to achieve maximum likelihood. The value of the maximum likelihood is small because it is the product of probabilities. Its logarithmic form is therefore better to handle. The maximum log likelihood can be obtained from the following function:
> logLik(bloodloss.glm)
'log Lik.' -59.51925 (df=3)
The higher (less negative) the log likelihood is, the better the model fits. However, each model has its own explanatory parameters. Having too many parameters can be inefficient. When fitting models one always strives for parsimony. An attribute of a model that balances the log-likelihood and the number of parameters is the AIC value. It is abbreviated from "Akaike Information Criterion" and is equal to -2*loglikelihood + k*npar, where k is usually 2 and 'npar' represents the number of parameters in the fitted model. A high likelihood or good fit will result in a low AIC value. However, a large number of parameters also results in a high AIC. The number of parameters of this model is 3. The AIC is therefore:
> -2*as.numeric(logLik(bloodloss.glm)) + 2*3
[1] 125.0385
> AIC(bloodloss.glm)
[1] 125.0385
The AIC is very useful when choosing between models from the same dataset. This and other important attributes will be discussed in more details in subsequent chapters.
References
Dobson, A. J. (1990). An Introduction to Generalized Linear Models. London: Chapman and Hall. McCullagh P. and Nelder, J. A. (1989). Generalized Linear Models. London: Chapman and Hall.
Exercise
In the dataset BP, use 'glm' with family=gaussian to analyse models predicting systolic blood pressure from age and adding table salt with and without the interaction term. Use the AIC to choose the most efficient model.
[Plot of probability against log(odds), showing the logistic curve]
The probability has a minimum of 0, a maximum of 1 and a mid value of 0.5. The odds has its corresponding values at 0, infinity and 1. The natural log of the odds, log(odds), often called the 'logit', has a linear increment with corresponding extremes of -infinity and +infinity and 0 for the mid-point. The curve is called a logistic curve. Being on a linear and well-balanced scale, the logit is a more appropriate scale for a binary outcome than the probability itself. Modelling logit(Y|X) ~ βX is the general form of logistic regression. It means that the logit of Y given X (or under the condition of X), where X denotes one or more independent variables, can be determined by the sum of products between each specific coefficient and its value of X. Suppose there are two independent or exposure variables, X1 and X2. Then βX would be β0 + β1X1 + β2X2, where β0 is the intercept. In the medical field, the binary (also called dichotomous) outcome Y is often disease vs non-disease, dead vs alive, etc. The X can be age, sex, and other prognostic variables. Among these X variables, one or a few are under testing of the specific hypothesis. Others are potential confounders, sometimes called co-variates. Mathematically, it turns out that Pr(Y|X) is equal to exp(βX)/(1 + exp(βX)). Hence, logistic regression is often used to compute the probability of an outcome under a given set of exposures. For example, prediction of the probability of getting a disease under a given set of age, sex, and behaviour groups, etc.
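A small numerical illustration of the logit and its inverse may help; the coefficients below are made up purely for demonstration:
> logit <- function(p) log(p/(1-p))       # probability to log(odds)
> expit <- function(x) exp(x)/(1+exp(x))  # log(odds) back to probability
> expit(0)                                # a logit of 0 corresponds to a probability of 0.5
> expit(-2 + 0.5*3)                       # probability for a hypothetical linear predictor b0 + b1*x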
> zap()
> data(Decay)
> use(Decay)
> des()
No. of observations = 436
  Variable Class   Description
1 decay    numeric Any decayed tooth
2 strep    numeric CFU of mutan strep.

> summ()
No. of observations = 436
  Var. name Obs. mean  median s.d.  min. max.
1 decay     436  0.63  1      0.48  0    1
2 strep     436  95.25 105    53.5  0.5  152.5
The outcome variable is 'decay', which indicates whether a person has at least one decayed tooth (1) or not (0). The exposure variable is 'strep', the number of colony forming units (CFU) of streptococci, a group of bacteria suspected to cause tooth decay. The prevalence of having decayed teeth is equal to the mean of the 'decay' variable, i.e. 0.63. To look at the 'strep' variable type:
> summ(strep)
The plot shows that the vast majority have the value at about 150. Since the natural distribution of bacteria is logarithmic, a transformed variable is created and used as the independent variable.
> log10.strep <- log10(strep)
> model0 <- glm(decay ~ log10.strep, family=binomial)
> summary(model0)
============
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   -2.5539     0.5184  -4.927 8.36e-07
log10.strep    1.6808     0.2764   6.082 1.19e-09
============
AIC: 535.83
Both the coefficients of the intercept and log10.strep are statistically significant. The estimated intercept is -2.5539. This means that when log10.strep is 0 (or strep equals 1 CFU), the logit of having at least a decayed tooth is -2.55. We can then calculate the baseline odds and probability.
> exp(-2.5539) -> baseline.odds
> baseline.odds
[1] 0.07777774
> baseline.odds/(1 + baseline.odds) -> baseline.prob
> baseline.prob
[1] 0.07216492
There is an odds of 0.077 or a probability of 7.2% of having at least one decayed tooth if the number of CFU of the mutan strep is at 1 CFU. The coefficient of log10.strep is 1.6808. For every unit increment of log10(strep),
or an increment of 10 CFU, the logit will increase by 1.6808. This increment of the logit is constant, but not the increment of probability, because the latter is not on a linear scale. The probability at each point of CFU can be computed from the two coefficients obtained from the model. For example, at 100 CFU, the logit and the corresponding probability would be
> model0$coefficients[1] + log10(100)*model0$coefficients[2]   # logit at 100 CFU: 0.8078015
> exp(0.8078015)/(1 + exp(0.8078015))                          # corresponding probability, about 0.69
A logistic nature of the curve is partly demonstrated. To make it clearer, the ranges of X and Y axes are both expanded to allow a more extensive curve fitting.
> plot(log10.strep, model0$fitted.values, xlim = c(-2,4), ylim=c(0,1), xlab=" ", ylab=" ", xaxt="n")
Another vector of the same name 'log10.strep' is created in the form of a data frame for plotting a fitted line on the same graph.
> newdata <- data.frame(log10.strep = seq(from=-2, to=4, by=.01))
> predicted.line <- predict.glm(model0, newdata, type="response")
The values for predicted line on the above command must be on the same scale as the 'response' variable. Since the response is either 0 or 1, the predicted line would be in between, ie. the predicted probability for each value of log10(strep).
> lines(newdata$log10.strep, predicted.line, col="blue")
> axis(side=1, at=-2:4, labels=as.character(10^(-2:4)))
[Plot: Relationship between mutan streptococci and probability of tooth decay]
> title(main="Relationship between mutan streptococci \n and probability of tooth decay", xlab="CFU", ylab="Probability of having decayed teeth")
Note the use of the '\n' in the command above to separate a long title into two lines.
We model 'case' as the binary outcome variable and take 'eclair.eat' as the only explanatory variable.
> model0 <- glm(case ~ eclair.eat, binomial)
> summary(model0)
Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)       -2.923      0.265  -11.03   <2e-16
eclair.eatTRUE     3.167      0.276   11.48   <2e-16
=================== Lines omitted =================
The above part of the display is actually a matrix of 'coef(summary(model0))'. Epicalc manipulates this matrix and gives rise to a display more understandable by most epidemiologists.
> logistic.display(model0)
                    OR  lower95ci  upper95ci  P value
eclair.eatTRUE  23.746     13.824     40.789        0
Log-likelihood = -527.60746
No. of observations = 977
AIC value = 1059.2
The odds ratio from the logistic regression is derived from exponentiation of the estimate, i.e. 23.746 is obtained from:
> exp(summary(model0)$coefficient[2,1])
These values are close to simple calculation of the 2-by-2 table discussed earlier in Chapter 9. The log-likelihood and the AIC value will be discussed later. The default values in 'logistic.display' are 95% for the confidence intervals and the digits are shown to three decimal places. See the online help for details.
> args(logistic.display)
> help(logistic.display)
You can change the default values by adding the extra argument(s) in the command.
> logistic.display(model0, alpha=0.01, decimal=2)
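The single-variable model for 'saltegg' discussed in the next paragraph is not reproduced in this excerpt; it would presumably have been fitted along these lines ('model1' is an assumed name):
> model1 <- glm(case ~ saltegg, binomial)
> logistic.display(model1)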
The odds ratio for 'saltegg' is statistically significant and similar to that seen from the cross-tabulation in Chapter 9. The number of valid records is also higher than that for 'eclair.eat'. To check whether the odds ratio is confounded by 'eclair.eat', the two explanatory variables are put together in the next model.
> model2 <- glm(case ~ eclair.eat + saltegg, binomial)
> logistic.display(model2)
                    OR  lower95ci  upper95ci  P value
eclair.eatTRUE  24.075     13.922     41.632    0.000
saltegg          1.023      0.539      1.942    0.944
Log-likelihood = -523.46786
No. of observations = 972
AIC value = 1052.936
The odds ratios of the explanatory variables in 'model2' are adjusted for each other. The adjusted odds ratio of 'eclair.eat' changes minimally, suggesting that it is not confounded by 'saltegg', whereas the odds ratio of 'saltegg' moves toward unity and has a very large p value. The difference between the adjusted and the crude odds ratio is an indication that 'saltegg' is confounded by 'eclair.eat', which is an independent risk factor. These adjusted odds ratios are close to those obtained from the Mantel-Haenszel method shown in Chapter 9. Logistic regression gives all the adjusted odds ratios simultaneously, whereas the Mantel-Haenszel method gives only the odds ratio of the exposure of main interest. An additional advantage is that logistic regression can handle multiple covariates simultaneously.
> model3 <- glm(case ~ eclair.eat + saltegg + sex, binomial)
> logistic.display(model3)
                    OR  lower95ci  upper95ci  P value
eclair.eatTRUE  25.806     14.828     44.912    0.000
saltegg          0.936      0.494      1.775    0.839
sex              1.900      1.389      2.598    0.000
Log-likelihood = -515.27652
No. of observations = 972
AIC value = 1038.553
The third explanatory variable 'sex' is another independent risk factor. Since 1 = male and 0 = female, males have 90% higher odds than females. This variable is not a confounder for either of the preceding variables because it has not substantially changed their odds ratios. The reason it cannot confound is its lack of association with either of the preceding exposure variables. In other words, males and females did not differ in their consumption of eclairs and salted eggs.
Interaction
An interaction term involves at least two variables (in this example, two dichotomous exposures). If an interaction is present, the effect of one variable depends on the status of the other, so the two are not acting independently. In R the interaction term can be specified in two ways: 'x1*x2' or 'x1:x2'. The former is equivalent to 'x1 + x2 + x1:x2'. Examine the following model where the variables 'eclair.eat' and 'beefcurry' are specified as an interaction term.
> model4 <- glm(case ~ eclair.eat * beefcurry, binomial)
> logistic.display(model4)
                             OR  lower95ci  upper95ci  P value
eclair.eatTRUE            5.448      1.716     17.291
beefcurry                 0.374      0.122      1.148
eclair.eatTRUE:beefcurry  5.825      1.547     21.941
Log-likelihood = -519.74919
No. of observations = 972
AIC value = 1047.498
The last term, 'eclair.eatTRUE:beefcurry', is the interaction term. It is significant, which means that 'eclair.eat' and 'beefcurry' are not acting independently of each other. Denoting those who ate neither eclairs nor beef curry as the referent level, those who ate only eclairs but not beef curry have an odds ratio of 5.448. Those who ate beef curry only have an odds ratio of 0.374, since the other two terms are multiplied by 0. However, those who ate both eclairs and beef curry have an odds ratio of 5.448 x 0.374 x 5.825 = 11.869. The interaction term has an odds ratio higher than 1, which means that the two main factors have a synergistic effect on the odds of the outcome.
First, a subset of the dataset must be created to make sure that all the variables have valid (non missing) records.
> complete.data <- subset(.data, !is.na(eclair.eat) & !is.na(beefcurry) & !is.na(saltegg),select=c(case, eclair.eat, eclairgr, beefcurry, sex, saltegg))
The new data frame is a subset of '.data'. Only records without any missing values in the required variables 'case', 'eclair.eat', 'beefcurry', 'sex' and 'saltegg' are included. This data frame is then used for creating a model with these variables included.
> model5 <- glm(case ~ eclair.eat + beefcurry + eclair.eat * beefcurry + saltegg + sex, family = binomial, data = complete.data)
The model may have excessive variables. We let R select the model with lowest AIC.
> modelstep <- step(model5, direction = "both")
Start:  AIC= 1038.45
case ~ eclair.eat + beefcurry + eclair.eat * beefcurry + saltegg + sex

                         Df Deviance    AIC
- saltegg                 1   1026.5 1036.5
<none>                        1026.5 1038.5
- eclair.eat:beefcurry    1   1030.2 1040.2
- sex                     1   1039.5 1049.5

Step:  AIC= 1036.46
case ~ eclair.eat + beefcurry + sex + eclair.eat:beefcurry

                         Df Deviance    AIC
<none>                        1026.5 1036.5
- eclair.eat:beefcurry    1   1030.4 1038.4
+ saltegg                 1   1026.5 1038.5
- sex                     1   1039.5 1047.5
Initially, the AIC is 1038.45. The command 'step' tries removing each independent variable in turn and compares the degrees of freedom reduced, the new deviance and the new AIC. The results are sorted in increasing order of AIC, so the top row, having the lowest AIC, is the best option. At the first step, removal of 'saltegg' gives the lowest AIC and is therefore chosen for the next step. In the second selection phase, not removing any remaining independent variable gives the lowest AIC, so the selection process stops with the remaining variables kept. Now, we check the results.
> summary(modelstep)
=================== Lines omitted ==================
Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)      -2.6722     0.4940  -5.409 6.32e-08
eclair.eatTRUE    2.0667     0.6011   3.438   0.0006
beefcurry        -0.9034     0.5730  -1.577   0.1148
sex               0.5861     0.1631   3.593   0.0003
The final model has salted egg excluded. Sex is an independent risk factor. Eating eclairs is a risk factor, the effect of which is enhanced by eating beef curry. Eating beef curry by itself is a protective factor. However, when eaten with eclairs, the combined effect on the odds becomes positive. It should be noted that stepwise regression is limited to exploration and is often not suitable for specific hypothesis testing, which is what most epidemiological studies are designed for. It tends to remove all non-significant independent variables from the model. In hypothesis testing, one or a few independent variables are set for testing, and their odds ratios and confidence intervals must still be calculated regardless of statistical significance.
All three variables 'eclair.eat', 'beefcurry' and 'sex' are dichotomous and coded 1 if the exposure is true, 0 otherwise. The odds ratio for 'sex' is that of males (where sex is 1) compared to females (where sex is 0). For 'eclair.eat' and 'beefcurry', 1 means exposed (eating the food) and 0 means non-exposed. The independent variable 'sex' has an odds ratio of approximately 1.8, which means that males have approximately 1.8 times the odds of the outcome compared with females. The other two variables, 'eclair.eat' and 'beefcurry', are interacting. The odds ratio of 'eclair.eat' depends on the value of 'beefcurry' and vice versa. Three terms, 'eclair.eat', 'beefcurry' and their interaction term 'eclair.eatTRUE:beefcurry', need to be considered simultaneously. If 'beefcurry' is zero (those who did not eat beef curry), 'eclair.eatTRUE:beefcurry' is also zero. The odds ratio for 'eclair.eat' in this subgroup is therefore only 7.899. Among the beef curry eaters, the interaction term is multiplied by 1 (since 'eclair.eat' and 'beefcurry' are both 1), and the odds ratio is then 7.899 x 4.105, or approximately 32.4. The required odds ratio can thus be obtained as the product of the appropriate odds ratios of the individual terms. However, the standard error and 95% confidence interval of this product cannot be easily computed from the above result. A better way to get the odds ratio and 95% confidence interval for 'eclair.eat' among 'beefcurry' eaters is to perform a little trick.
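Before the trick, the point estimate itself can be verified directly from the fitted coefficients; a minimal sketch, assuming the coefficient names match the summary shown above:
> b <- coef(modelstep)
> exp(b["eclair.eatTRUE"])                                  # odds ratio among non beef curry eaters
> exp(b["eclair.eatTRUE"] + b["eclair.eatTRUE:beefcurry"])  # odds ratio among beef curry eaters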
> complete.data$nobeefcurry <- 1 - complete.data$beefcurry
The new variable 'nobeefcurry' is created as the opposite of 'beefcurry'. When 'beefcurry' is 0, 'nobeefcurry' will be 1 and vice versa. Now, 'nobeefcurry', instead of 'beefcurry', is included into the model.
> tricked.model <- glm(case ~ eclair.eat * nobeefcurry + sex, family = binomial,
                       data = complete.data)
> logistic.display(tricked.model)
                                OR   lower   upper  P value
eclair.eatTRUE              32.424  16.868  62.327    0.000
nobeefcurry                  2.468   0.803   7.587    0.115
sex                          1.797   1.305   2.474    0.000
eclair.eatTRUE:nobeefcurry   0.244   0.064   0.933    0.039
The odds ratio and 95% confidence interval of 'eclair.eat' among those who ate beef curry are in the first row because the 'nobeefcurry' term in the second row and the interaction term in the last row are all 0.
The Epicalc function 'pack' identifies all free vectors with the same length as the number of records in '.data' and adds them into the data.frame. These free vectors are then removed from the global environment.
> .data
  death anc clinic Freq
1    no old      A  176
2   yes old      A   12
3    no new      A  293
4   yes new      A   16
5    no old      B  197
6   yes old      B   34
7    no new      B   23
8   yes new      B    4
This is a format with 'Freq' being a variable denoting numbers of subjects in each category. This variable is put as 'weight' in the model.
> glm(death ~ anc + clinic, binomial, weight=Freq, data=.data)
The coefficients are the same as those from model2. The deviance is however different. Another data format for logistic regression is possible where the number of cases and number of controls of the same exposure are in the same row, but separate columns.
> .data$condition <- c(1,1,2,2,3,3,4,4)
> data2 <- reshape(.data, timevar="death", v.names="Freq", idvar="condition",
                   direction="wide")
The variable 'condition' is created to facilitate reshaping. The reshaped file 'data2' has only four rows of data compared to '.data', which has 8 rows.
> data2
  anc clinic condition Freq.no Freq.yes
1 old      A         1     176       12
3 new      A         2     293       16
5 old      B         3     197       34
7 new      B         4      23        4
The first column in each row is the 'row.names' of the data frame. This data frame can be written to a text file with 'row.names' and the variable 'condition' (the third variable) omitted. Logistic regression for 'data2' can be carried out as follows:
> glm(cbind(Freq.yes, Freq.no) ~ anc + clinic, data=data2, family=binomial)
The left-hand side of the formula is a result of column binding the two outcome frequency columns. The remaining parts of the commands remain the same as for the case-by-case format. The coefficients and standard errors from this command are the same as those above. However, the residual deviance and AIC are much smaller due to the smaller number of degrees of freedom. Case-by-case format of data is most commonly dealt with in the actual data analysis. The formats in 'ANCtable' and 'data2', which are occasionally found, are mainly of theoretical interest.
> data(Ectopic)
> use(Ectopic)
> des()
No. of observations = 723
  Variable Class
1 id       integer
2 outc     factor
3 hia      factor
4 gravi    factor

> summ()
No. of observations = 723
  Var. name Obs. mean  median s.d.   min. max.
1 id        723  362   362    208.86 1    723
2 outc      723  2     2      0.817  1    3
3 hia       723  1.545 2      0.498  1    2
4 gravi     723  1.537 1      0.696  1    3
> tab1(outc, graph=F)
> tab1(hia, graph=F)
> tab1(gravi, graph=F)
> case <- outc == "EP"
> case <- factor(case)
> levels(case) <- c("control", "case")
> tabpct(case, gravi)
[Plot from 'tabpct': distribution of gravidity (1-2, 3-4, >4) by case (control, case)]
[Plot from 'tabpct': distribution of previous induced abortion (never IA, ever IA) by case]
The cases had a higher level of gravidity as well as a higher experience of induced abortion.
> cc(case, hia, design = "case-control")
         hia
case       no yes Total
  control 268 214   482
  case     61 180   241
  Total   329 394   723

OR = 3.689
95% CI = 2.595, 5.291
Chi-squared = 59.446, 1 d.f., P value = 0
Fisher's exact test (2-sided) P value = 0
[Graph from 'cc': odds of exposure by outcome category (control, case)]
This odds ratio graph is specified with ' design="case-control" ', therefore the orientation of it is adjusted toward the outcome variable. The odds of exposure among the cases are on the right (higher value). Next we adjust for gravidity.
> mhor(case, hia, gravi, design="case-control")
Stratified analysis by gravi
                 OR  lower lim.  upper lim.
gravi 1-2      3.72       2.328        5.98
gravi 3-4      4.01       1.714       10.55
gravi >4       2.02       0.307       22.42
M-H combined   3.68       2.509        5.41

M-H Chi2(1) = 47.29, P value = 0
Homogeneity test, chi-squared 2 d.f. = 0.52, P value = 0.769
The stratified analysis shows output tables for the three strata of gravi and corresponding three exposure lines in the graph. The odds of exposure to induced abortion increases (moving towards the right-hand side) with gravidity. The odds among the control group is lower (more on the left) in each stratum of the gravidity group. The slopes of the three lines are somewhat similar indicating minimal interaction, and this is confirmed by the p-value from the homogeneity test. The MH combined odds ratio is similar to the crude odds ratio indicating rather little effect of confounding by gravidity.
[Graph from 'mhor': odds of exposure (log scale) for controls and cases in each gravidity stratum, annotated with gravi 1-2: OR = 3.72 (2.33, 5.98); gravi 3-4: OR = 4.01 (1.71, 10.55); gravi >4: OR = 2.02 (0.31, 22.42); M-H OR = 3.68 (2.51, 5.41); homogeneity test P value = 0.769]
Similar to the preceding chapter, 'logistic.display' can be used in order to obtain the odds ratio and 95% confidence interval of the exposure to induced abortion. The intercept term here has no meaning and should be ignored.
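The model 'model1' referred to below is not reproduced in this excerpt; it was presumably fitted as:
> model1 <- glm(case ~ hia, binomial)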
> logistic.display(model1)
              OR  lower95ci  upper95ci  P value
hiaever IA 3.695      2.626      5.199        0
Log-likelihood = -429.38634
No. of observations = 723
AIC value = 862.7727

> model2 <- glm(case ~ hia + gravi, binomial)
> logistic.display(model2)
              OR  lower95ci  upper95ci  P value
hiaever IA 3.697      2.521      5.421    0.000
gravi3-4   0.997      0.677      1.469    0.989
gravi>4    1.003      0.595      1.690    0.992
The AIC from 'model1' is lower than the one from 'model2' indicating a better fit. Cases of ectopic pregnancies had approximately 3.7 times the odds of previous exposure to induced abortion compared to the control group. Gravidity has no effect on the outcome and is not a confounder.
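The releveled model 'model3' discussed in the next paragraph is not shown in this excerpt; it was presumably created along these lines:
> gravi <- relevel(gravi, ref = ">4")
> model3 <- glm(case ~ hia + gravi, binomial)
> logistic.display(model3)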
The gravidity group ">4", which is the highest risk group, is now set as the referent level. The odds ratio for the 1-2 gravidity group is now shown while the last group (gravi >4) disappears. Both of the other gravidity groups have a slightly lower risk than the highest one (but still not significantly so). Otherwise, 'model3' is basically the same as 'model2'. The log-likelihood and AIC values are not changed by changing the referent level.
References
Hosmer Jr DW & Lemeshow S (2004). Applied Logistic Regression, 2nd Edition.
Kleinbaum DG & Klein M (2002). Logistic Regression: A Self-Learning Text, 2nd Edition. Springer-Verlag New York, Inc.
Exercise
Problem 1. With the data frame 'complete.data', compute the odds ratio and 95% confidence interval for combined exposure to 'eclair.eat' and 'beefcurry' using the group who were exposed to neither eclair nor beef curry as the reference.
Problem 2. Use the modified ANCdata dataset and the function 'xtabs' to create a stratified 2x2 table. Then use the 'mhor' function to analyse the adjusted odds ratio. Hint: 'help (xtabs)', 'help(mhor)'.
Problem 4. Use logistic regression to investigate a dose response relationship (linear trend) between gravidity and risk of ectopic pregnancy, after adjustment for the effect of previous induced abortion.
Examples in previous chapters have had cases and controls independently recruited. In a matched case-control study, when a case is recruited, a control, or a set of controls (more than one person), is selected to match the case on some parameters such as age and sex, or on other conditions such as being siblings or neighbours. If the control series is chosen by matching on only age and sex, and the purpose of such selection is only to avoid imbalances, then the dataset should probably be analysed in a non-matched setting. There are many good books on how to analyse case-control studies, particularly in the matched setting, and readers should consult the references at the end of this chapter. The examples in this chapter are for demonstration purposes only. The sample size is rather small for making solid conclusions. However, the methods can still be applied to other matched case-control studies. In the analysis of matched sets, comparison is made within each matched set rather than one series against the other. In this chapter, the datasets VC1to1 and VC1to6 consist of data from a matched case-control study testing whether smoking, drinking alcohol and working in the rubber industry are risk factors for oesophageal cancer. Each case was matched with his/her neighbours of the same sex and age group. The matching ratio varied from 1:1 to 1:6. The file VC1to6 is the full dataset whereas VC1to1 has the number of controls per case reduced to 1 for all matched sets. This latter file is used first, for matched pair analysis.
> zap()
> data(VC1to1)
> use(VC1to1)
> des()
No. of observations = 52
  Variable Class   Description
1 matset   numeric
2 case     numeric
3 smoking  numeric
4 rubber   numeric
5 alcohol  numeric

> summ()
No. of observations = 52
  Var. name Obs. mean  median s.d.  min. max.
1 matset    52   13.5  13.5   7.57  1    26
2 case      52   0.5   0.5    0.5   0    1
3 smoking   52   0.81  1      0.4   0    1
4 rubber    52   0.33  0      0.47  0    1
5 alcohol   52   0.52  1      0.5   0    1
There are 26 matched pairs as shown in the sorted 'matset' variable. The codes of the variable 'case' are 1 for diseased and 0 for non-diseased.
> wide <- reshape(.data, timevar="case", v.names=c("smoking", "rubber", "alcohol"),
                  idvar="matset", direction="wide")
> wide[1:3, ]
  matset smoking.1 rubber.1 alcohol.1 smoking.0 rubber.0 alcohol.0
1      1         1        0         0         1        0
3      2         1        0         1         1        1
5      3         1        1         0         1        1
The original data frame '.data' has the variables arranged in long form. Each record represents one subject. The new data frame 'wide' is in wide form. Each record represents one matched pair. Cross-tabulating the smoking habit of cases and controls in each matched pair can now be done easily.
> attach(wide)
> table(smoking.1, smoking.0, dnn=c("smoking in case", "smoking in control"))
               smoking in control
smoking in case  0  1
              0  0  5
              1  5 16
The optional argument 'dnn' in the above 'table' command allows the dimension names to be specified, facilitating interpretation. From this cross tabulation, there was no matched pair where both the case and control were non-smokers. There were sixteen matched pairs where both were smokers. In five pairs, the cases smoked but the controls did not (lower left corner). In the remaining five pairs (upper right corner), the controls smoked while the cases did not. The contrast in smoking history between the two members of each matched pair is summarised by the conditional odds ratio, which is the value of the lower left corner cell divided by the upper right corner cell. In this case the conditional odds ratio (sometimes called McNemar's odds ratio) is 5/5 = 1. In other words, the ratio of discordant counts, cases exposed against controls exposed, is 1. Epicalc has a function 'matchTab' that can be used to analyse the matched sets (not necessarily 1 control per case) from the original dataset as follows:
> detach(wide)
> rm(wide)
> matchTab(case, smoking, strata=matset)
Number of controls = 1
                     No. of controls exposed
No. of cases exposed  0  1
                   0  0  5
                   1  5 16
Odds ratio by Mantel-Haenszel method = 1
Odds ratio by maximum likelihood estimate (MLE) method = 1
95%CI = 0.29 , 3.454
The two methods give the same values for the odds ratio. The MLE method also gives a 95% confidence interval of the estimate.
1:n matching
If there is no serious problem with scarcity of diseased cases, the best matching ratio is one control per case. Resources spent on collecting data from each individual are then used most efficiently regardless of whether the subject is a case or a control. However, when the disease of interest is rare, it is often cost-effective to increase the number of controls per case. The efficiency (especially of the resources spent on collecting data from the extra controls) is decreased, but it means that the study may end sooner.
> zap()
> data(VC1to6)
> use(VC1to6)
> des()
> summ()
> .data
    matset case smoking rubber alcohol
1        1    1       1      0       0
2        1    0       1      0       0
3        2    1       1      0       1
4        2    0       1      1       0
================= lines omitted ============
115     26    1       1      0       1
116     26    0       0      0       0
117     26    0       1      1       0
118     26    0       0      0       0
119     26    0       1      1       1
Here the number of controls per case in the matched sets varies from one to six. It would be cumbersome to reshape it into a wide form. Let's use the Epicalc function 'matchTab'.
> matchTab(case, smoking, strata=matset)
Number of controls = 1
                     No. of controls exposed
No. of cases exposed  0  1
                   0  0  0
                   1  0  3

Number of controls = 2
                     No. of controls exposed
No. of cases exposed  0  1  2
                   0  0  0  1
                   1  1  1  0
================= lines omitted ============
Number of controls = 6
                     No. of controls exposed
No. of cases exposed  0  1  2  3  4  5  6
                   0  0  0  0  1  0  0  0
                   1  0  0  0  0  0  1  2
Odds ratio by Mantel-Haenszel method = 1.988
Odds ratio by maximum likelihood estimate (MLE) method = 2.066
95%CI = 0.678 , 6.299
The command gives a separate table for each matched-set size (number of controls per case). The last table, for example, shows that there are four matched sets with six controls per case. One of them has the case non-exposed and three of the six controls exposed. One has the case exposed and five of the six controls exposed. The remaining two sets have the case and all six controls exposed. The odds ratios from the two different methods are slightly different. The effect of smoking on the outcome is not statistically significant, as the 95% confidence interval of the odds ratio contains the value 1.
> data(VC1to1)
> use(VC1to1)
> wide <- reshape(.data, timevar="case", v.names=c("smoking", "rubber", "alcohol"),
                  idvar="matset", direction="wide")
> attach(wide)
> smoke.diff <- smoking.1 - smoking.0
> alcohol.diff <- alcohol.1 - alcohol.0
> outcome.diff <- rep(1, times=26)
> cbind(outcome.diff, smoke.diff, alcohol.diff)
The variable 'outcome.diff' is always 1 whereas 'smoke.diff' and 'alcohol.diff' can be 1 (when the case is exposed but the control is not), -1 (when the control is exposed but the case is not) and 0 (both exposed or both not exposed).
> co.lr1 <- glm(outcome.diff ~ smoke.diff - 1, binomial)
> summary(co.lr1)
========================
Coefficients:
           Estimate Std. Error z value Pr(>|z|)
smoke.diff    0.000      0.632       0        1
========================
AIC: 38.04
In the above 'glm' model, the difference of the outcome (which is always 1 for the above reason) is predicted by the difference in smoking habit. There is an additional term '-1' in the right-hand side of the formula, which indicates that the intercept should be removed from the model. Usually, the intercept is the expected value of the dependent variable (the variable on the left-hand side of the formula) when all the independent variables are equal to 0. In conditional logistic regression, there is no such intercept because the difference of the outcome is fixed to 1, the logit of which is 0. With a coefficient of 0, the odds ratio is e^0 = 1, which is the same as the result from the matched tabulation. The 95% confidence interval of the odds ratio can be obtained from:
> exp(confint.default(co.lr1))
            2.5 %  97.5 %
smoke.diff 0.2895  3.4542
These values are exactly the same as those obtained from the matched tabulation. The advantage of logistic regression, however, is its ability to handle more than one exposure variable.
> co.lr2 <- glm(outcome.diff ~ smoke.diff + alcohol.diff - 1, family=binomial)
> summary(co.lr2)
========================
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
smoke.diff     -0.314      0.708   -0.44     0.66
alcohol.diff    1.572      0.803    1.96     0.05
========================
The introduction of 'alcohol.diff' has changed the coefficient of 'smoke.diff' substantially indicating that 'smoke.diff' is confounded by 'alcohol.diff'. The odds ratios and their 95% CI can further be computed.
> exp(cbind(coef(co.lr2), confint.default(co.lr2)))
                         2.5 %   97.5 %
smoke.diff   0.73038   0.18239   2.9248
alcohol.diff 4.81406   0.99796  23.2225
Alternatively, the above output can be displayed easily with the Epicalc command 'logistic.display'.
> logistic.display(co.lr2)
                 OR  lower95ci  upper95ci  P value
smoke.diff    0.730      0.182      2.925    0.657
alcohol.diff  4.814      0.998     23.223    0.050
Log-likelihood = -15.51315
No. of observations = 26
AIC value = 35.02629
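The same matched analysis can be run as a conditional logistic regression with the 'clogit' function from the survival package. The command that produced the output fragment below is not reproduced in this excerpt, but was presumably of this form ('clogit1' is an assumed name):
> library(survival)
> clogit1 <- clogit(case ~ smoking + alcohol + strata(matset))
> clogit1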
=================== lines omitted ==================
Rsquare= 0.092
Likelihood ratio test = 5.02  on 2 df
Wald test             = 3.83  on 2 df
Score (logrank) test  = 4.62  on 2 df
The top section of the results reports that the 'clogit' command actually calls another generic command 'coxph'. If the called command is used, the result will be the same.
> coxph(formula = Surv(rep(1, 52), case) ~ smoking + alcohol + strata(matset), method = "exact")
The odds ratios and their 95% confidence intervals from 'clogit' are the same as those obtained by modelling the difference. The last section of 'clogit' contains several test results, each of which indicates that the model is not significantly different from the null model (which does not include any predicting variables).
References
Breslow NE & Day NE (1980). The Analysis of Case-Control Studies (Statistical Methods in Cancer Research, Vol. 1). International Agency for Research on Cancer.
Lumley T. survival: the Survival package in R.
Exercises
Problem 1.
Carry out a matched tabulation for alcohol exposure in 'VC1to6'. Compare the results with those obtained from the conditional logistic regression analysis.
Problem 2.
Refer to the log likelihood and AIC values in the preceding chapter on generalized linear model. The conditional logistic regression model gives neither the log likelihood nor AIC value but it does give the conditional log likelihood, which also indicates the level of fit. This conditional log likelihood can be used for comparison of nested models from the same dataset.
> clogit3 <- clogit(case ~ smoking + alcohol + rubber + strata(matset))
> attributes(clogit3)
> clogit3$loglik
[1] -37.89489 -31.89398
The element 'loglik' from each 'clogit' command (analogous to 'logLik' of 'glm') contains two sub-elements. The first sub-element, which is the conditional log likelihood of the null model, is the same for all conditional logistic regression models fitted to the same dataset. The second sub-element is specific to the particular model. Twice the absolute difference of the two sub-elements is equal to the likelihood ratio test statistic for the model. This likelihood ratio test result can also be seen in the display of the model. Try different models and compare their conditional log likelihoods. Choose the best fitting model.
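As a quick check of this statement, the likelihood ratio test for 'clogit3' can be computed from the two sub-elements directly:
> lr.stat <- 2 * abs(diff(clogit3$loglik))     # about 12.0 for 'clogit3'
> pchisq(lr.stat, df = 3, lower.tail = FALSE)  # three parameters: smoking, alcohol and rubber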
Logistic regression is well known for the modelling of binary outcomes. On some occasions, however, the outcome can have more than two non-ordered categories. In chapter 15 we looked at the dataset Ectopic, which came from a study testing the hypothesis that previous induced abortion is a risk factor for current ectopic pregnancy (EP). The outcome has two groups of controls: subjects coming for induced abortion services (IA) and women who delivered babies (Deli). Both groups were used to represent intra-uterine pregnancy. The outcome in this study therefore has three nominal categories.
Tabulation
> zap()
> data(Ectopic)
> use(Ectopic)
> des()
No. of observations = 723
  Variable Class
1 id       integer
2 outc     factor
3 hia      factor
4 gravi    factor
> tabpct(outc, hia, graph=FALSE)
Original table
        Previous induced abortion
Outcome  never IA  ever IA  Total
  EP           61      180    241
  IA          110      131    241
  Deli        158       83    241
  Total       329      394    723

Row percent
        Previous induced abortion
Outcome  never IA  ever IA  Total
  EP           61      180    241
           (25.3)   (74.7)  (100)
  IA          110      131    241
           (45.6)   (54.4)  (100)
  Deli        158       83    241
           (65.6)   (34.4)  (100)

Column percent
        Previous induced abortion
Outcome  never IA      %  ever IA      %
  EP           61 (18.5)      180 (45.7)
  IA          110 (33.4)      131 (33.2)
  Deli        158 (48.0)       83 (21.1)
  Total       329  (100)      394  (100)
Two-way tabulation reveals the highest proportion (74.7%) of ever IA in the EP group compared to 54.4% and 34.4% in the IA and Deli groups, respectively.
> table1 <- table(outc, gravi, hia)
> plot(table1, col=c("white", "blue"), las=4,
       main="Previous induced abortion by outcome & gravidity",
       xlab="Outcome", ylab="Gravidity")
[Mosaic plot: Previous induced abortion by outcome (EP, IA, Deli) and gravidity (1-2, 3-4, >4)]
The mosaic plot gives complicated information. The columns of the plot represent the outcome, which is divided into EP, IA and Deli, as previously described. The widths of the 3 columns are the same (241 subjects each). Each row represents one of the three levels of gravidity (number of pregnancies): 1-2, 3-4 and >4, respectively. The distribution of gravidity among the EP and IA groups is more or less the same, i.e. around a half having 1-2 pregnancies, whereas among the women coming to deliver a baby, the percentage in this group is much higher (about 75%). Finally, information can be obtained from the different colours. Blue areas represent women who experienced previous induced abortion while white represents those who did not. In each column, the proportion of blue appears to increase with gravidity, i.e. women with high gravidity have had a higher level of exposure to induced abortion in the past. Comparison among the three columns, which is the main hypothesis of this study, shows that the proportion of blue is highest in the EP group.
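The polytomous model whose output is discussed below is not reproduced in this excerpt; it was presumably fitted with 'multinom' from the nnet package along these lines ('multi1' is the name used later in the text):
> library(nnet)
> multi1 <- multinom(outc ~ hia)
> summary(multi1)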
The upper part of the output concerns the iteration process of the underlying neural network fitting algorithm. The important part for epidemiology is the 'Coefficients:' section. Interpretation of the coefficients of polytomous logistic regression is rather complicated, especially when the design has one group of cases and more than one group of controls. There are three outcome categories. The first one, 'EP', is the reference against which the two comparisons are made; the risk of being EP is therefore expressed only through its reverse, the chance of being in one of the other two groups. Since this was a case-control study, the intercept values should be ignored. The most important part is the coefficients of 'hia'. For those who had a history of induced abortion, the logit of being in the IA group in this pregnancy changes by -0.9073525 units. This is equivalent to an odds ratio of exp(-0.9073525) or 0.403. "The odds of having an intra-uterine pregnancy (and eventually coming for induced abortion) is reduced by a factor of 0.403 if the subject had a history of induced abortion" can be rephrased as "The odds of having an ectopic pregnancy (and therefore not being in the IA group) is increased by 1/0.403, or a factor of 2.48". Similarly, the odds ratio for EP using Deli as the control is 1 / e^(-1.7258539) = 5.617. It is worth remembering that in the chapter on logistic regression, the odds ratio for history of previous induced abortion using the two control groups combined was obtained as follows:
> logistic.display(glm(outc=="EP" ~ hia, binomial))
              OR  lower95ci  upper95ci  P value
hiaever IA 3.695      2.626      5.199        0
========= subsequent lines omitted =========
The odds ratio from the logistic regression in chapter 15 of 3.6954 is a value between the two odds ratios of polytomous logistic regression from this chapter. Standard errors can be obtained by the following command:
> summary(multi1) -> s1; s1
========== coefficient section omitted =============
Std. Errors:
     (Intercept) hiaever IA
IA       0.15964    0.19666
Deli     0.15074    0.20081
========== correlation section omitted =============
Only the standard errors section is displayed because the coefficients section is shown above with the previous command and the correlation section is not directly related here. To obtain the z value for each cell, type:
> coef(s1) / s1$st -> z; z
     (Intercept) hiaever IA
IA        3.6932    -4.6139
Deli      6.3136    -8.5943
High levels of 'z' indicate the coefficient is several times the value of the standard error. In other words, the coefficient is far away from 0, which the null hypothesis (of no association) is based on. P values can be further obtained by:
> pnorm(abs(z), lower.tail=FALSE)*2 -> p.values
> p.values
     (Intercept)  hiaever IA
IA    2.2143e-04  3.9513e-06
Deli  2.7264e-10  8.3774e-18
Note that the absolute values of 'z' were used before computing the P values. The 95% confidence interval of the coefficients can be computed based on the coefficients and the standard errors.
> coeff.lower.95ci <- coef(s1) - qnorm(.975) * s1$st
> coeff.lower.95ci
> coeff.upper.95ci <- coef(s1) + qnorm(.975) * s1$st
> coeff.upper.95ci
The odds ratios and their 95% confidence intervals can be achieved from exponentiation of the coefficients and their upper and lower 95% CI.
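The table fragment below comes from Epicalc's 'mlogit.display'; the call was presumably simply:
> mlogit.display(multi1)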
O.R.(95%CI) 0.178(0.12,0.264)
The formatting of the output has been modified to fit on the page. The P values are coded with the number of asterisks conforming to those used in the summary of the 'glm' and 'lm' models. Odds ratios for the intercepts are irrelevant and are therefore omitted. As discussed previously, the odds ratios here are not for risk of ectopic pregnancy but for their reciprocals. To include the next variable 'gravi', type:
> multi2 <- multinom(outc ~ hia + gravi)
> mlogit.display(multi2)
Optionally, the upper three commands can be combined and replaced with the one below, which gives the same results.
> mlogit.display(multinom(outc ~ hia + gravi))
# weights: 15 (8 variable)
initial  value 794.296685
iter  10 value 744.763718
final  value 744.587307
converged
Outcome = outc; Referent group = EP

            IA
            Coeff./SE
(Intercept)  0.51/0.165**
hiaever IA  -1.11/0.223***
gravi3-4     0.39/0.224
gravi>4      0.47/0.295

            Deli
            Coeff./SE
(Intercept)  1.02/0.154***
hiaever IA  -1.49/0.222***
gravi3-4    -0.47/0.24
gravi>4     -0.7/0.366

Residual Deviance: 1489.175
AIC = 1505.175
Again, the formatting of the output has been modified to fit on the page. None of the coefficients or odds ratios of gravidity in model 'multi2' are significant. However, this model has a much lower residual deviance than model 'multi1'. A reduction from 1507.464 to 1489.175, or 18.289 units, at the cost of introducing four more parameters (two gravi levels for each of the two outcomes) can be considered worthwhile, since the P value from the chi-squared of 18.289 with 4 degrees of freedom is 0.001. Moreover, the AIC from model 'multi2' of 1505.175 is clearly smaller than that from 'multi1' of 1515.464. For the final conclusion, after adjustment for gravidity, history of previous induced abortion significantly increases the risk of ectopic pregnancy. The odds ratio is 1/0.33 or 3.03 if the clients currently requesting induced abortion are used as the referent group, and 1/0.225 or 4.4 if the women who delivered a baby are the referent group. It is well known that induced abortion is often repeated. Current clients for this service usually have experienced more induced abortions than the general population. Ectopic pregnancy patients have even more experience of induced abortion than this group. Therefore, history of induced abortion is very likely a true risk factor for ectopic pregnancy.
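The model 'multi3' and the lower-case outcome columns used below are not created in this excerpt; a sketch of how they were presumably constructed (the column names are taken from the command for 'multi4' further on):
> ep   <- as.numeric(outc == "EP")
> ia   <- as.numeric(outc == "IA")
> deli <- as.numeric(outc == "Deli")
> multi3 <- multinom(cbind(ep, ia, deli) ~ hia + gravi)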
> mlogit.display(multi3)
The above commands should give the same results as those from 'multi2' except that the names of outcome groups are in lower case. Since the first column is always used as the referent group, one can exploit this method to shuffle the order of outcome variables in order to change the referent group. For example, to use 'deli' as the referent level, 'deli' is put as the first column of the outcome matrix:
> multi4 <- multinom(cbind(deli, ep, ia) ~ hia + gravi)
> mlogit.display(multi4)
Outcome = cbind(deli, ep, ia); Referent group = deli

            ep
            Coeff./SE
(Intercept) -1.02/0.154***
hiaever IA   1.49/0.222***
gravi3-4     0.47/0.24
gravi>4      0.7/0.366

            ia
            Coeff./SE
(Intercept) -0.51/0.131***
hiaever IA   0.38/0.215
gravi3-4     0.85/0.237***
gravi>4      1.16/0.369**
The output is relatively easy to interpret. Using delivery as the referent outcome, for a woman with a history of induced abortion the odds of being 'ep', i.e. having an ectopic pregnancy in this admission, increase 4.443 fold (which is highly significant), while the odds of being a (repeat) induced abortion patient increase by only 47 percent (OR = 1.466, which is non-significant). On the other hand, increasing gravidity does not independently increase the risk of ectopic pregnancy but significantly, and in a dose-response fashion, increases the chance of being a client for induced abortion services at the current visit.
Exercises
In a fictitious trial of a vaccine on 120 mice, 75 were given the vaccine ('vac' = 1) while 45 were given a placebo ('vac' = 0). Among these were 35 young mice ('agegr' = 0) and 85 old mice ('agegr' = 1). There were three levels of outcomes: 1 = no change, 2 = became immune and 3 = died.

Outcome  vac  agegr  total
      1    0      0     25
      1    0      1     15
      1    1      0      4
      1    1      1      8
      2    0      0      1
      2    0      1      0
      2    1      0     25
      2    1      1     35
      3    0      0      3
      3    0      1      1
      3    1      0      2
      3    1      1      1
Problem 1. Is there any difference in age group among the two groups of these vaccine recipients?
Problem 3. Is there any difference in outcomes between the vaccine and placebo treatment groups?
In the previous chapters, all variables that were factors were treated as non-ordered categorical variables. Polytomous logistic regression deals with predicting outcomes that are categorical but not ordered. In many situations, the outcome has some kind of ordering. Using polytomous logistic regression for such situations would lose power to detect the association as well as misinterpret the way the outcome variable is related to the exposure variables.
Ordered factors
This chapter uses a dataset from a survey on hookworm infections in southern Thailand conducted in 1993. The objective is to document the effect of age and shoe wearing ('shoes') on the intensity of the infection.
> library(nnet)   # For polytomous logistic regression
> library(MASS)   # For ordinal logistic regression
> zap()
> data(HW93)
> use(HW93)
> des()
No. of observations = 637
  Variable Class   Description
1 id       integer
2 epg      numeric eggs per g of faeces
3 age      integer
4 shoes    factor  Shoe wearing
5 intense  factor  Intensity (EPG)
6 agegr    factor  Age group

> summ()
No. of observations = 637
  Var. name Obs. mean     min.
1 id        637  325.38      1
2 epg       637  1141.85     0
3 age       637  25.94       2
4 shoes     637  1.396       1
5 intense   637  1.834       1
6 agegr     637  1.667       1
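The polytomous model summarised in the next paragraph is not reproduced in this excerpt; it was presumably fitted along these lines ('poly.hw' is an assumed name):
> poly.hw <- multinom(intense ~ agegr + shoes)
> mlogit.display(poly.hw)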
For light infection (1-1,999 epg), only young adults had a higher risk than the children. For heavy infection (2,000+ epg), the young adults and the elder subjects had a 2.8 and 6.1 times higher risk than the children, respectively. Shoe wearing has a protective effect on both light and heavy infection with odds ratios of 0.62 and 0.262, respectively.
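The ordinal model 'ord.hw' discussed below was presumably fitted with 'polr' from the MASS package; the original command and its output are not reproduced in this excerpt:
> ord.hw <- polr(intense ~ agegr + shoes)
> summary(ord.hw)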
This ordinal logistic regression model has two intercepts, one for each cut point of the outcome. The values of these intercepts are not so meaningful and can be ignored at this stage. The coefficients for age are shared by the two cut points. Both coefficients are positive indicating that the risk of infection increases with age. Shoe wearing has a negative coefficient indicating that it protects both levels of infection.
> summary(ord.hw) -> s1
> attributes(s1)
$names
 [1] "coefficients"   [4] "fitted.values"   [7] "df.residual"   [10] "nobs"
[13] "convergence"   [16] "contrasts"      [19] "digits"

$class
[1] "summary.polr"
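The commands that produced the P values below are not fully reproduced in this excerpt; they were presumably along these lines:
> t <- coef(s1)[, "t value"]                   # t values from the coefficient table of the summary
> df <- s1$df.residual
> pt(abs(t), df = df, lower.tail = FALSE) * 2  # two-sided P values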
shoesyes 2.713385e-05
The above commands define 't' and 'df' from the summary of the regression. The last command uses the absolute value of 't' for computation of the two-sided P values. All P values are significant.
'ordinal.or.display'
Epicalc has a function to display ordinal odds ratio and their 95% confidence intervals.
> ordinal.or.display(ord.hw)
                Ordinal OR  lower95ci  upper95ci   P.value
agegr15-59 yrs       2.169      1.517      3.116  1.39e-05
agegr60+ yrs         3.596      1.913      6.788  4.07e-05
shoesyes             0.485      0.341      0.686  2.71e-05
The conclusion from this ordinal logistic regression model is that intensity of infection significantly increases with age and is significantly reduced by wearing shoes. At each cut point of the intensity of infection, on average, wearing shoes is associated with odds about 0.48 times, or roughly half, those of not wearing shoes.
References
Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
Exercise
The level of pain after treatment (1 = no pain, 2 = some pain, 3 = severe pain) was measured after treatment of one group of subjects with a pain killer (Drug = 1 ) against placebo (Drug = 0 ) in males (1) and females (0) with the following data:
Male Drug Pain Total 0 0 1 3 0 1 2 5 0 0 3 15 0 1 1 10 0 0 2 5 0 1 3 7 1 0 1 8 1 1 2 5 1 0 3 10 1 1 1 10 1 0 2 10 1 1 3 2
Analyse the effect of this drug with adjustment for sex using polytomous and ordinal logistic regression.
Poisson regression
Poisson regression deals with outcome variables that are 'counts' in nature (whole numbers or integers). Independent covariates are similar to those encountered in linear and logistic regression. In epidemiology, Poisson regression is used for analysing grouped cohort data, looking at incidence density among person-time contributed by subjects of similar characteristics of interest. Poisson regression is one of three common generalized linear models (GLM) used in epidemiological studies. The other two that are more commonly used are linear regression and logistic regression, which have been covered in previous chapters. There are two main assumptions for Poisson regression. Firstly, risk is homogeneous among person-times contributed by different subjects who have the same characteristics of interest (e.g. sex, age-group) and the same period. Secondly,
asymptotically, or as the sample size becomes larger and larger, the mean of the counts is equal to the variance.
Poisson regression eliminates some of the problems faced by other regression techniques. For example, in logistic regression, different subjects may have different person-times of exposure, and analysing risk factors while ignoring these differences in person-time is wrong. In survival analysis using Cox regression (discussed in chapter 22), only the hazard ratio, and not the incidence density of each subgroup, is computed, so analysts and readers may not have a clear idea of the descriptive statistics of these baseline risks. In other words, Poisson regression produces both the 'baseline incidence density' and the 'incidence density ratios' among strata.
No. of observations = 114 Var. name Obs. 1 respdeath 114 2 personyrs 114 3 agegr 114 mean 2.42 1096.41 2.61 median 1 335.15 3 s.d. 3.3 2123.1 1.1 min. 0 4.2 1 max. 19 12451 4
4 period       114      ...        2     ...      1       4
5 start        114      ...        1     ...      1       2
6 arsenic      114      ...        2     ...      1       4
> des()
No. of observations = 114
  Variable   Class    Description
1 respdeath  integer
2 personyrs  numeric
3 agegr      integer
4 period     integer
5 start      integer
6 arsenic    integer
The last four variables are classed as integers. We need to tell R to interpret them as categorical variables, or factors, and attach labels to each of the levels. This can be done using the 'factor' command with a labels argument included.
> agegr <- factor(agegr, labels = c("40-49", "50-59", "60-69", "70-79"))
> period <- factor(period, labels = c("1938-1949", "1950-1959", "1960-1969", "1970-1977"))
> start <- factor(start, labels = c("pre-1925", "1925 & after"))
> arsenic <- factor(arsenic, labels = c("<1 year", "1-4 years", "5-14 years", "15+ years"))
> label.var(agegr, "Age group")
> label.var(period, "Period of employment")
> label.var(start, "Era of starting employment")
> label.var(arsenic, "Amount of exposure to arsenic")
> des()
No. of observations = 114
  Variable   Class    Description
1 respdeath  integer
2 personyrs  numeric
3 agegr      factor   Age group
4 period     factor   Period of employment
5 start      factor   Era of starting employment
6 arsenic    factor   Amount of exposure to arsenic
Carry out the same procedure for number of deaths, and compute the table of incidence per 10,000 person years for each cell.
> tapply(respdeath, list(period, agegr), sum) -> table.deaths
> table.inc10000 <- table.deaths/table.pyears*10000
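The object 'table.pyears' used above is assumed to have been created earlier (on a page not included here) in the same way as 'table.deaths'; a sketch:

> tapply(personyrs, list(period, agegr), sum) -> table.pyears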
> table.inc10000
               40-49
1938-1949   5.424700
1950-1959   3.344638
1960-1969   4.341516
1970-1977   4.408685
===== columns for the other age groups omitted =====
[Plot: incidence of respiratory deaths per 10,000 person-years in each period (1938-1949 to 1970-1977), by age group]
The above graph shows that the older age group is generally associated with a higher risk. On the other hand, the sample size (reflected by the size of the squares at each point) decreases with age. The possibility of a confounding effect of age can better be examined by using Poisson regression.
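The call that produced the output below falls on a page not included in this extract; it is presumably of the following form (a sketch only):

> model1 <- glm(respdeath ~ period, offset = log(personyrs), family = poisson)
> summary(model1)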
==============================
Coefficients:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)       -6.4331     0.1715 -37.511   <2e-16
period1950-1959    0.2365     0.2117   1.117   0.2638
period1960-1969    0.3781     0.2001   1.889   0.0588
period1970-1977    0.4830     0.2036   2.372   0.0177

AIC: 596
==============================
The option 'offset = log(personyrs)' allows the variable 'personyrs' to act as the denominator for the counts of 'respdeath'. A logarithmic transformation is needed because the default (canonical) link function for the Poisson family is the natural log. An important criterion in the choice of a link function for the various families of distributions is that the fitted values stay within reasonable bounds; the log link (the default for Poisson) ensures that the fitted counts are all greater than or equal to zero.
Note: For more details on default links for various families of distributions related to generalized linear modelling, see the help in R under 'help(family)'.
The first model above, with 'period' as the only independent variable, suggests that the death rate increased with time. The model can be tested for goodness of fit, and then checked to see whether the Poisson assumptions mentioned earlier in the chapter have been violated.
The component '$chisq' is actually computed from the model deviance, a parameter reflecting the level of errors. A large chi-squared value with small degrees of freedom results in a significant violation of the Poisson assumption (p < 0.05). If only the P value is wanted, the command can be shortened.
> poisgof(model1)$p.value
The P value is very small, indicating a poor fit. Note: this method assumes a large sample size. An alternative is to fit a negative binomial regression model and check whether its parameter is different from 1; this is demonstrated in a later section of this chapter. We now add the second independent variable, 'agegr', to the model.
> model2 <- glm(respdeath ~ agegr + period, offset = log(personyrs), family = poisson) > AIC(model2) # 396.64
The AIC has decreased remarkably from model1 to model2, indicating that the first model fits considerably worse.
> poisgof(model2)$p.value # 0.0003295142
Removal of 'period' further reduces the AIC but still violates the Poisson assumption to the same extent as the previous model. The next step is to add the main independent variable 'arsenic'.
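The reduced model referred to here (model3) is not shown in this extract; a sketch of what it presumably looks like:

> model3 <- glm(respdeath ~ agegr, offset = log(personyrs), family = poisson)
> AIC(model3)
> poisgof(model3)$p.value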
> model4 <- glm(respdeath ~ agegr + arsenic, offset = log(personyrs), family = poisson) > AIC(model4) # 355.04 > poisgof(model4)$p.value # 0.14869
Model4 has a much lower AIC than model3 and no longer violates the assumption. Alternatively, instead of treating arsenic as a categorical variable, it can be included in the model as a continuous variable. If the AIC is smaller, this would imply a linear dose-response relationship between exposure to arsenic and the risk of the disease. The variable 'arsenic' is 'unclass'ed in the next model.
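A sketch of the model with arsenic entered as a continuous (unclassed) variable; the actual call and most of its output fall outside this extract:

> model5 <- glm(respdeath ~ agegr + unclass(arsenic), offset = log(personyrs), family = poisson)
> summary(model5)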
===== model5 output (term labels and estimates truncated in this extract) =====
 z value  Pr(>|z|)
   5.938  2.88e-09
   9.767   < 2e-16
   9.995   < 2e-16
   6.403  1.52e-10
================================================================================
Although the linear term ('unclass(arsenic)') is significant, the AIC value of model5 is higher than that of model4. It would therefore be better to keep arsenic as a factor. However, arsenic may also be dichotomised (re-classified into two levels).
> arsenic1 <- arsenic != "<1 year"
> model6 <- glm(respdeath ~ agegr + arsenic1, offset = log(personyrs), poisson)
> summary(model6)
============================
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept)    -8.0086     0.2233 -35.859  < 2e-16
agegr50-59      1.4702     0.2453   5.994 2.04e-09
agegr60-69      2.3661     0.2372   9.976  < 2e-16
agegr70-79      2.6238     0.2548  10.297  < 2e-16
arsenic1TRUE    0.8109     0.1210   6.699 2.09e-11
============================
AIC: 353.8

> poisgof(model6)$p.value   # 0.13999
At this stage, we would accept 'model6' as the model of choice as it has the smallest AIC among all the models that we have tried. We conclude that exposure to arsenic for at least one year would increase the risk for the disease by exp(0.8109) or 2.25 times with statistical significance.
> newdata <- as.data.frame(list(agegr="40-49", arsenic1=FALSE, personyrs=100000))
> predict(model6, newdata, type="response")
[1] 33.257
This population would have an estimated incidence density of 33.26 per 100,000 person-years.
The above procedure starts by appending a new row to the data frame 'newdata', identical to the first row except that the variable 'arsenic1' is TRUE. The responses, or incidence densities, of the two conditions are then computed. The IDR is obtained by dividing the value for 'arsenic1=TRUE' (arsenic exposure of at least one year) by that for 'arsenic1=FALSE' (arsenic exposure of shorter than one year). A shorter way to obtain this IDR is to exponentiate the coefficient of 'arsenic1TRUE', which is the fifth coefficient.
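A sketch of the procedure just described; the object name 'newdata2' is introduced here only for illustration:

> newdata2 <- rbind(newdata, data.frame(agegr = "40-49", arsenic1 = TRUE, personyrs = 100000))
> id <- predict(model6, newdata2, type = "response")
> id[2]/id[1]    # incidence density ratio for exposure of at least one year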
> coef(model6)
> exp(coef(model6)[5])   # 2.2499
The required values are obtained from exponentiating the last matrix with the first row or intercept removed. The display is rounded to 2 decimals for better viewing. Then the matrix column is labelled and the 95% CI is displayed.
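The commands performing these steps fall on a page not included here; a sketch, with 'coeff' as an assumed intermediate object name:

> coeff <- summary(model6)$coefficients
> coeff.95ci <- cbind(coeff[, 1], coeff[, 1] - 1.96*coeff[, 2], coeff[, 1] + 1.96*coeff[, 2])
> IDR.95ci <- round(exp(coeff.95ci[-1, ]), 2)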
> colnames(IDR.95ci) <- c("IDR", "lower95ci", "upper95ci")
> IDR.95ci
Note that the command 'idr.display' gives results to 3 decimal places by default. This can easily be changed by the user.
> idr.display(model6, decimal=2)
> library(MASS)
> data(DHF99); use(DHF99)
> des()
No. of observations = 300
  Variable    Class
1 houseid     integer
2 village     integer
3 education   factor
4 containers  integer
5 viltype     factor

> summ()
No. of observations = 300
  Var. name    obs.    mean  median   min.
1 houseid       300  174.27   154.5      1
2 village       300   48.56      51      1
3 education     300    2.09       1      1
4 containers    299    0.35       0      0
5 viltype       300    1.56       1      1
> summ(containers, by=viltype)
For viltype = rural
 obs.   mean  median   s.d.  min.
  179  0.492       0  1.251     0
For viltype = urban
 obs.   mean  median   s.d.  min.
   72  0.069       0  0.256     0
For viltype = slum
 obs.   mean  median   s.d.  min.
   48   0.25       0  0.526     0
[Plot: distribution of the number of containers (0-10) by village type (rural, urban, slum)]
The function for performing a negative binomial glm is 'glm.nb'. This function is located in the 'MASS' library. In addition, a very helpful function for selecting the best model based on the AIC value is the 'step' function, which is located in the 'stats' library (a default library loaded on start-up).
> model.poisson <- step(glm(containers ~ education + viltype, family=poisson, data=.data))
> model.nb <- step(glm.nb(containers ~ education + viltype, data=.data))
> coef(model.poisson)
 (Intercept) viltypeurban  viltypeslum
  -0.7100490   -1.9571792   -0.6762454
> coef(model.nb)
 (Intercept) viltypeurban  viltypeslum
  -0.7100490   -1.9571792   -0.6762454
Both models end up with only 'viltype' being selected. The coefficients are very similar. The Poisson model has significant overdispersion but not the negative binomial model.
> poisgof(model.poisson)$p.value
[1] 0.0043878
> poisgof(model.nb)$p.value
[1] 1
The AIC of the negative binomial model is also better (smaller) than that of the Poisson model.
> model.poisson$aic
[1] 505.92
> model.nb$aic
[1] 426.23
Finally, the main differences to be examined are their standard errors, the 95% confidence intervals and P values.
> summary(model.poisson)$coefficients
               Estimate Std. Error   z value     Pr(>|z|)
(Intercept)  -0.7100490  0.1066000 -6.660873 2.722059e-11
viltypeurban -1.9571792  0.4597429 -4.257117 2.070800e-05
viltypeslum  -0.6762454  0.3077286 -2.197538 2.798202e-02

> summary(model.nb)$coefficients
               Estimate Std. Error   z value     Pr(>|z|)
(Intercept)  -0.7100490  0.1731160 -4.101578 4.103414e-05
viltypeurban -1.9571792  0.5255707 -3.723912 1.961591e-04
viltypeslum  -0.6762454  0.4274174 -1.582166 1.136116e-01

> idr.display(model.poisson)
               IDR lower95ci upper95ci P value
viltypeurban 0.141     0.057     0.348   0.000
viltypeslum  0.509     0.278     0.930   0.028

> idr.display(model.nb)
               IDR lower95ci upper95ci P value
viltypeurban 0.141      0.05     0.396   0.000
viltypeslum  0.509      0.22     1.175   0.114
The standard errors from the negative binomial model are slightly larger than those from the Poisson model, resulting in wider 95% confidence intervals and larger P values. From the Poisson regression, both the urban community and the slum area had a significantly lower risk of infestation than rural areas (incidence density ratios of about 0.14 and 0.5, respectively). However, from the negative binomial regression, only the urban community had a significantly lower risk.
References
Agresti, A. (1996). An Introduction to Categorical Data Analysis. New York: John Wiley and Sons.
Agresti, A. (2002). Categorical Data Analysis. Hoboken, NJ: John Wiley and Sons.
Powers, D.A., Xie, Y. (2000). Statistical Methods for Categorical Data Analysis. San Diego: Academic Press.
Long, J.S. (1997). Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage Publications.
Vermunt, J.K. (1997). Log-linear Models for Event Histories. Thousand Oaks, CA: Sage Publications.
Exercise
Use 'step' to select the best model for predicting incidence densities in the Montana dataset. Check the Poisson goodness of fit. Compute the incidence density ratio for significant independent variables. Fit a negative binomial regression model and check theta and its standard error before concluding whether there is any evidence of overdispersion.
There are many other names for multi-level modelling, e.g. hierarchical modelling, mixed effects modelling, or modelling with random effects. They all refer to the same family of methods, but each name has its own implication.

In epidemiological studies, variables often have a hierarchy. For example, a measurement of blood pressure belongs to an individual subject, who can have more than one measurement; the individual is therefore at a higher level of the hierarchy than each measurement. An individual, in turn, belongs to a family, all members of which may share several independent variables, such as ethnicity, housing, etc. A family is usually a member of a village, and so forth. Thus the hierarchy can be country, province, district, village, family, individual and measurement. Certain independent variables belong to the individual measurement level, such as the time of measurement. Others belong to a higher hierarchical order, such as sex and age (individual), ethnicity (family), and distance from the capital city (village). Independent variables at different levels of the hierarchy should not be treated in the same way. For this reason multi-level modelling is also called hierarchical modelling.

In another aspect, modelling is usually meant to explain the relationship between variables in an informative and efficient manner. In simple modelling, where the number of groups is not high, say m ethnic groups under study, the number of parameters used to explain the effect of 'ethnic' is m-1, because the omitted group is used as the referent. If the sample size is large and m is small, the number of parameters used would not be too high. On the other hand, if the sample size is small but the number of groups is high, for example 50 subjects each with multiple blood pressure measurements, the grouping variable would have too many levels to put into the model. Instead, an average value for the group is estimated and the individual members are treated as random effects without a separate parameter each. In this situation, multi-level modelling is also called modelling with random effects. However, the random effects must always have an average, which is used to estimate the overall effect; this average or overall effect is called the fixed effect. With the mixture of fixed and random effects in the same model, multi-level modelling is also called 'mixed effects modelling'.
Multi-level modelling is relatively new compared to other common types of modelling such as linear and Poisson regression. There are variations in the methods of numerical iteration used to compute the coefficients and standard errors; they generally give very close estimates but somewhat different standard errors, variances and covariances. The examples in this chapter are confined to 'glmmPQL', or Generalized Linear Mixed Models fitted using Penalized Quasi-Likelihood. It can handle all the families used in GLMs, with similar arguments in the command except for the additional terms defining the fixed and random effects. Readers are advised to explore other functions such as 'lme' (linear mixed effects) and 'nlme' (non-linear mixed effects).
effects' (for individual children) whereas the slope has a 'fixed effect' for the whole group. Combining these two types of random and fixed effects, the model is often called a 'mixed model'. Once the library 'nlme' has been loaded, the dataset Orthodont can be used. Be careful in typing as some of the variable names in this data frame start with upper case.
> zap()
> library(MASS)   # For the glmmPQL command
> library(nlme)   # For the example dataset
> data(Orthodont)
> .data <- as.data.frame(Orthodont)
> attach(.data); des()
No. of observations = 108
  Variable  Class
1 distance  numeric
2 age       numeric
3 Subject   factor
4 Sex       factor

> summ()
  Var. name   median   min.   max.
1 distance     23.75   16.5   31.5
2 age             11      8     14
3 Subject         14      1     27
4 Sex              1      1      2
> par(las=1)
> followup.plot(id=Subject, time=age, outcome=distance, line.col="multicolor")
> title(main="PPMF distance by age", ylab="mm", xlab="years")
[Plot: 'PPMF distance by age' — follow-up lines for each child, distance (mm) against age (years)]
To see whether there is a gender difference, we replace the 'line.col' argument with the 'by' argument in the command.
> followup.plot(id=Subject, time=age, outcome=distance, by=Sex)
> title(main="PPMF distance by age", ylab="mm", xlab="years")
[Plot: 'PPMF distance by age' — follow-up lines coloured by sex (Male, Female), distance (mm) against age (years)]
In both plots, it is evident that as age increases so does distance. The lines for individual children criss-cross to a certain extent, but otherwise the highest and the lowest lines are quite consistent. Males generally had larger pituitary to pterygomaxillary fissure distances.
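The command described in the next paragraph falls on a page not included in this extract; it is presumably of the following form (a sketch):

> model0 <- glmmPQL(distance ~ age, random = ~1 | Subject, data = .data, family = gaussian)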
The above command creates a generalized linear multi-level model (glmm) using the Penalized Quasi-Likelihood (PQL) method of iteration. The dependent variable is 'distance'. The independent variable is 'age', which has a fixed effect (for all subjects). The random effects term (indicated by the word 'random') is a constant of 1, i.e. a random intercept. The upper level of the model (following the '|' sign) is 'Subject' because the same subject has four repeated measurements; in other words, 'Subject' is at a higher level. The 'glmmPQL' command handles the 'family' argument of the model in the same way as the 'glm' command. Since the errors are assumed to be normally distributed, the family is specified as 'gaussian'.
> summary(model0)
Linear mixed-effects model fit by maximum likelihood
 Data: .data
  AIC BIC logLik
   NA  NA     NA

Random effects:
 Formula: ~1 | Subject
        (Intercept) Residual
StdDev:    2.072142 1.422728

Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: distance ~ age
                Value Std.Error DF  t-value p-value
(Intercept) 16.761111 0.8020244 80 20.89851       0
age          0.660185 0.0617993 80 10.68272       0
 Correlation:
    (Intr)
age -0.848

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-3.68695131 -0.53862941 -0.01232442  0.49100161  3.74701484

Number of Observations: 108
Number of Groups: 27
The 'AIC' and 'BIC' values are derived from 'logLik', the log likelihood. They will be used to compare the level of fit with other models using the same dataset and the same method of iteration. Note that AIC is equal to -2*logLik + 2*npar and BIC is equal to -2*logLik + log(n)*npar, where 'npar' is the number of parameters in the model (four in this model, namely the standard deviations of the intercepts and of the residuals, which are the random effects, plus the fixed intercept and the fixed effect of age) and n is the number of observations (108).

Random effects express themselves as standard deviations of errors. There are two parts: the first is the standard deviation of the differences between the fixed intercept and the intercepts of individual subjects; the second is the standard deviation of the residuals, i.e. the differences between the final predicted values and the observed values for each subject. There is no coefficient for these random effects terms because their means should be close to zero, as they are assumed to come from the standard normal distribution.

The fixed part of the summary, similar to a conventional regression model, contains the coefficients and their standard errors. The coefficient of the intercept is 16.76, meaning that on average, at age 0, the PPMF distance for a child is expected to be 16.76 mm. The coefficient of age is 0.66, meaning that for each birthday reached, an average child is expected to gain 0.66 mm of PPMF distance. This coefficient is statistically significant as the standard error is relatively
small, resulting in a large t-value and a small P value. The standardised residuals within groups (or within the child) are distributed with a certain degree of symmetry since the median is close to 0, and the lower and upper quartiles are relatively equidistant from the median, as are the minimum and the maximum. Finally, the model confirms that there were 27 children giving 108 records.
There are two parts to the coefficients: the fixed part and the random part. The fixed part, shown in the summary, is the average for all 27 strata (children). The fixed intercept is 16.761111, which means that the (average) estimated distance at birth (when age is 0) is 16.76 mm. For each increasing year of age, the PPMF distance increases by approximately two-thirds of a millimetre (0.66). The second, or random, part shows 'random intercepts only', since there is no variable in this part as specified by 'random ~ 1'. There are 27 (additional coefficients for) intercepts, one for each child. For the first child (M16), who has a negative random intercept or starting distance, 0.9152788 must be subtracted from the fixed intercept (16.76). The second person (M05) shares the same intercept. Altogether, the random intercepts range from -4.940849 (F10) to +4.899434 (M10).
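A sketch of how these subject-specific random intercepts can be listed; the use of the nlme accessor 'ranef' here is an assumption, not necessarily the command used in the original:

> ranef(model0)   # one random intercept per child, e.g. M16  -0.9152788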
There are many other attributes worth exploring. The next interesting one is 'model0$fitted', which contains the fitted or predicted values of each point of observation.
> model0$fitted
     fixed  Subject
1 22.04259 25.37653
2 23.36296 26.69690
3 24.68333 28.01727
4 26.00370 29.33764
==== Up to 108th person ==========
There are two columns of fitted values: fixed (average of each point of time) and random (by Subject). In fact, the fixed part has only four values predicting the average value for each value of age.
> tab1(model0$fitted[, 1])
model0$fitted[, 1] :
                  Frequency Percent
22.0425925925926         27      25
23.3629629629630         27      25
24.6833333333333         27      25
26.0037037037037         27      25
  Total                 108     100
Each value has 27 repeated records. In other words, there are only four terms of fixed effects, each shared by all 27 subjects. The second component is predicting the intercept value for each subject, which varies from one child to another.
> followup.plot(id=Subject, time=age, outcome=fitted(model0), line.col="multicolor")
> title(main="Model0: random intercepts", ylab="mm", xlab="years")
[Plot: 'Model0: random intercepts, fixed slopes' — fitted distance (mm) against age (years), one line per child]
The X-coordinates for each line are the ages for that child. The corresponding Y-coordinates are the fitted values for the PPMF distance. Recall that there are two columns for the fitted values (for the fixed and random effects). The plot uses the second column, which is the predicted value for each child (random effects). The colour varies according to the (order of) 'Subject'. The model fixes the coefficient of the slope, allowing only the intercepts to be a random variable. The next model releases the effects of age to become random with a mean value.
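The next model (model1), whose call and summary fall outside this extract, is presumably of the following form — a sketch with a random slope for age:

> model1 <- glmmPQL(distance ~ age, random = ~age | Subject, data = .data, family = gaussian)
> summary(model1)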
[Plot: Model1 fitted values — distance (mm) against age (years), one line per child, now with differing slopes]
Model0 is equivalent to a stratified analysis without interaction whereas model1 is equivalent to keeping an interaction term. The latter model suggests that each child has their own baseline distance (intercept) as well as their own growth rate. The graph shows different slopes for different subjects. The slopes are now a random effect as well as a fixed effect. In the random effects part, age has a standard deviation of 0.215 mm, which is relatively small compared to the randomness of the intercept (2.2 mm) and the residuals (1.3 mm). The variation due to differences in growth rate of the PPMF distance among subjects is small compared to the variation in baselines and the average growth rate. The correlation between age and intercept is negative (-0.585) in the random effects suggesting that the slope of the subjects tends to be flatter as the level of the Y-intercepts increases. The coefficients of the fixed effects for the intercept and age are not different from 'model0'. In fact the coefficients are the same as those from ordinary glm.
> summary(glm(distance ~ age, family=gaussian))
The standard errors from the generalised linear model are much higher than those of the multi-level models; these advanced models improve the precision of the estimates. In this example, 'model1' has larger standard errors than 'model0': when the age effect is partially individualised, the precision of the overall age effect is reduced. We have another independent variable, 'Sex'. It would be interesting to examine whether the boys have a larger distance than the girls and whether the growth rates differ between the sexes.
> model2 <- glmmPQL(distance ~ age + Sex, random = ~1 | Subject, data = .data, family = gaussian)
> summary(model2)
Linear mixed-effects model fit by maximum likelihood
 Data: .data
  AIC BIC logLik
   NA  NA     NA

Random effects:
 Formula: ~1 | Subject
        (Intercept) Residual
StdDev:    1.730079 1.422728

Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: distance ~ age + Sex
                Value Std.Error DF   t-value p-value
(Intercept) 17.706713 0.8315459 80 21.293729  0.0000
age          0.660185 0.0620929 80 10.632212  0.0000
SexFemale   -2.321023 0.7430668 25 -3.123572  0.0045
========= Remaining parts of output omitted ========
'Sex' is introduced as a pure fixed effect. In fact, it cannot be a random effect because there is no variation of sex in an individual subject. The growth lines are now separated by 'Sex'.
> followup.plot(id=Subject, time=age, outcome=fitted(model2), by=Sex)
> title(main="Model2: random intercepts and slopes", ylab="mm", xlab="years")
[Plot: 'Model2: random intercepts and slopes' — fitted distance (mm) against age (years), males and females distinguished]
It is clear that the lines for males tend to be in the upper half of the plot whereas those for females tend to be in the lower part. To test whether the growth rates are different between the two sexes, an interaction term between age and sex is introduced.
> model3 <- glmmPQL(distance ~ age * Sex, random = ~1 | Subject, data = .data, family = gaussian)
> summary(model3)
Linear mixed-effects model fit by maximum likelihood
 Data: .data
  AIC BIC logLik
   NA  NA     NA

Random effects:
 Formula: ~1 | Subject
        (Intercept) Residual
StdDev:    1.740851 1.369159

Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: distance ~ age * Sex
                  Value Std.Error DF   t-value p-value
(Intercept)   16.340625 0.9814310 79 16.649795  0.0000
age            0.784375 0.0779963 79 10.056564  0.0000
SexFemale      1.032102 1.5376069 25  0.671239  0.5082
age:SexFemale -0.304830 0.1221968 79 -2.494580  0.0147
========= Remaining parts of output omitted ========
The interaction term between age and sex is significant. The coefficient of the main effect of 'Female' is 1.03, indicating that under a linear growth assumption, at birth (where age is 0) girls have an average PPMF distance 1.03 mm longer than boys. The coefficient of the interaction term is -0.30483, indicating that for each increment of one year of age, girls gain on average 0.3 mm less PPMF distance than boys. In other words, females have a shorter PPMF distance and a smaller growth rate.
> followup.plot(id=Subject, time=age, outcome=fitted(model3), by=Sex)
> title(main="Model3: random intercepts, fixed effects of age:sex", ylab="mm", xlab="years")
[Plot: 'Model3: random intercepts, fixed effects of age:sex' — fitted distance (mm) against age (years), by sex]
In conclusion, individual children had different baseline PPMF distances. Girls tended to have a higher PPMF distance at birth. However, boys have a faster growth rate than girls.
Exercises
The dataset Bang consists of a subset of data from the '1988 Bangladesh Fertility Survey'. The file has no header but consists of 7 columns and 1,934 rows. Use the following command to read in the data.
> zap()
> data(Bang)
> names(Bang) <- c("woman", "district", "user", "living.children", "age_mean", "urban", "constant")
> use(Bang)
> label.var(woman, "woman ID")
# Response variable
> label.var(user, "current contraceptive use")
> label.var(age_mean, "age(yr) centred around mean")
> living.children <- factor(living.children)
> label.var(living.children, "No. of children living")
Problem 1. Use 'glmmPQL' to compute the effects of the number of living children, age and living in an urban area on the probability of contraceptive use among the women. Compute the 95% confidence interval of their odds ratios.
Problem 2. Does the number of living children have a linear dose-response relationship with contraceptive use?
Problem 3. Should age be a random effect?
Problem 4. Does age have the same effect among urban and rural women on contraceptive use?
In a cohort study, a person is followed up from a starting time to the end of the study or to the time the follow-up has been terminated by the outcome event, whichever comes first. The event-free duration is an important outcome. For an unwanted event, the desired outcome is a longer event-free duration. For subjects whose events take place before the end of the study, the total duration of time is known. For the subjects whose follow up times end without the event, the end status is called 'censored' because the actual duration of time to the event is not known or 'censored' by the study. The outcome variable for each subject is therefore composed of 'time' and the 'status' at the end. Mathematically, the status is 1 if the event takes place and 0 otherwise.
No. of observations = 27
  Variable  Class    Description
1 id        integer
2 sex       factor
3 birthyr   integer  year of birth
4 educ      factor   level of education
5 marital   factor   marital status
6 maryr     integer  year of marriage
7 endyr     integer  year of analysis

> summ()
No. of observations = 27
  Var. name  Obs.
1 id           27
2 sex          27
3 birthyr      27
4 educ         27
5 marital      27
6 maryr        16
7 endyr        27
To see the codes for the factor variables type the following command:
> codebook()
id :
 obs.  mean  median  s.d.  min.  max.
   27    14      14  7.94     1    27
==================
sex :
Label table: sexlab
        code  Frequency  Percent
male       1          9     33.3
female     2         18     66.7
==================
birthyr :
 obs.      mean  median
   27  1962.148    1963
==================
educ : level of education
Label table: educlab
           code  Frequency  Percent
bach-         2         13     48.1
>bachelor     3         14     51.9
==================
marital : marital status
Label table: marlab
         code  Frequency  Percent
Single      1         11     40.7
Married     2         16     59.3
==================
maryr : year of marriage
 obs.      mean  median  s.d.  min.  max.
   16  1987.562    1988  5.18  1979  1995
==================
endyr : year of analysis
 obs.  mean  median  s.d.  min.  max.
   27  1997    1997     0  1997  1997
==================
Note that the original codes for the variable 'educ' were 2 = bach-, 3 = >bachelor, as shown in the output of the 'codebook' command. This was how the codes were defined in the original data entry program, and the label table associated with each categorical variable was kept with the data. In the output from the 'summ' function, however, the numeric codes for 'educ' are displayed as 1 (bach-) and 2 (>bachelor). This anomaly is simply due to unclassing the levels of the factor variable in the output from the 'summ' command. These numeric codes should not be confused with the original coding scheme; in fact, the codes were only used during the original entry of the data and are never used during data analysis. The variable 'endyr', fixed at 1997, is used for computation of age and age at marriage.
> age <- endyr - birthyr
> label.var(age, "Age")
> summ(age, by = marital)
For marital = Single
 Obs.   mean  median   s.d.  min.  max.
   11  31.18      32  4.996    25    39
For marital = Married
 Obs.   mean  median   s.d.  min.  max.
   16  37.38    37.5  5.596    29    45
[Plot: distribution of age (25-45 years) by marital status (Single, Married)]
There were 16 (59%) married participants. Clearly the married participants were older than the single ones.
> age.marriage <- maryr - birthyr
> label.var(age.marriage, "Age at marriage")
> summ(.data[, c(8,9)])
No. of observations = 27
  Var. name      obs.   mean  median  s.d.  min.  max.
1 age              27  34.85      34  6.11    25    45
2 age.marriage     16  27.94    27.5  2.77    25    36
Among the 16 married participants the mean age at marriage was 27.94 years. The whole essence of survival analysis is related to time-to-event. In this dataset we are using age as the time variable and marriage as the event. In most epidemiological studies, time is the time of follow-up and the event is the occurrence of an unwanted event such as death or disease recurrence. Our data come from a cross-sectional survey, whereas most data for survival analysis come from follow-up studies. However, the procedures used on this simple dataset can be applied to other survival data.
Survival object in R
The 'survival' library contains all the functions necessary to analyse survival type data. In order to analyse this data, we need to create an object of class 'Surv', which combines the information of time and status in a single object. The status variable must be either numeric or logical. If numeric, there are two options. Values must be either 0=censored and 1=event, or 1=censored and 2=event. If logical, FALSE=censored and TRUE=event. In the 'marryage' dataset, 'marital' is a factor and so must be converted to one of the formats specified above. We will choose the logical format, but this is arbitrary.
> married <- marital == "Married"
> time <- ifelse(married, age.marriage, age)
Note that time is generated differently for married and unmarried subjects. For a married person, we know exactly that the duration of time is the age at marriage; his/her survival time stops at the year of marriage. For an unmarried person, we do not know this length of time, so 'age' is used instead. The survival object for marriage can now be created and compared against other variables.
> surv.marriage <- Surv(time, married)
> surv.marriage
 [1] 26  26  29  25+ 26  26+ 28  28  28  36+ 36  39+ 29  33+ 25  31  27  34+
     37+ 26  27+ 25  27  26+ 28+ 30  32+
> data.frame(age, age.marriage, married, surv.marriage)[1:7, ]
  age age.marriage married surv.marriage
1  44           26    TRUE            26
2  43           26    TRUE            26
3  45           29    TRUE            29
4  25           NA   FALSE           25+
5  37           26    TRUE            26
6  26           NA   FALSE           26+
7  42           28    TRUE            28
For the first three subjects, and the 5th and the 7th, who were all married, the values of 'surv.marriage' are equal to 'age.marriage'. For the 4th and the 6th subjects, the values are equal to their ages. The plus sign indicates that the actual 'time' is beyond those values but was censored (those participants had not married at the time of the workshop). For further exploration, subsets of variables sorted by 'time' are displayed by the following command.
> cbind(age, sex, age.marriage, married, surv.marriage)[order(time), ]
      age sex age.marriage married time status
 [1,]  25   1           NA       0   25      0
 [2,]  32   2           25       1   25      1
 [3,]  29   1           25       1   25      1
 [4,]  44   1           26       1   26      1
 [5,]  43   2           26       1   26      1
 [6,]  37   2           26       1   26      1
 [7,]  26   2           NA       0   26      0
 [8,]  34   1           26       1   26      1
================= subsequent lines omitted =========
The 'Surv' object consists of time and status. The first person, a 25 year old male, was single. His 'time' is 25 and his status is 0 or censored. The second was a 32 year old woman who had married at the age of 25, so this is her 'time'. The event (marriage) had already occurred, thus her status = 1, etc.
Life table
A life table is a tabulation of the survival, event and survival probability over time. The classical method for this analysis in the general population has been well developed for centuries. In general, the method involves calculating the cumulative survival probability, which is the product of the survival probabilities at each step. For our simple dataset, the overall life table can be achieved by:
> summary(survfit(surv.marriage), censor=TRUE)
Call: survfit(formula = surv.marriage)
 time n.risk n.event survival std.err lower95CI upper95CI
   25     27       2    0.926  0.0504     0.832     1.000
   26     24       4    0.772  0.0820     0.627     0.950
   27     18       2    0.686  0.0926     0.526     0.894
   28     15       3    0.549  0.1025     0.380     0.791
   29     11       2    0.449  0.1054     0.283     0.711
   30      9       1    0.399  0.1048     0.238     0.668
   31      8       1    0.349  0.1029     0.196     0.622
   32      7       0    0.349  0.1029     0.196     0.622
   33      6       0    0.349  0.1029     0.196     0.622
   34      5       0    0.349  0.1029     0.196     0.622
   36      4       1    0.262  0.1080     0.117     0.588
   37      2       0    0.262  0.1080     0.117     0.588
   39      1       0    0.262  0.1080     0.117     0.588
The first row of the output says that at time 25 (the youngest age, reached by all 27 participants), there were 27 subjects at risk, two of whom were married at that time. The survival probability (here, the probability of remaining single beyond this age) is calculated as (27-2)/27 = 0.926. In fact, there is one person aged 25 years who is not shown: this person is censored (not married), so is included in this row but not in subsequent rows. On the second row, there were 24 persons remaining who had reached or passed their 26th birthday (27 started, 2 events and 1 censored at the end of the 25th year). At this time, 4 events took place, and since the third row says that only 18 persons remained at the next time point, 2 subjects must have been censored. The conditional survival probability for time 26 is therefore (24-4)/24 = 0.833. Multiplying this value by the probability in the first row gives the cumulative probability, (25/27) x (20/24) = 0.772. This computation of cumulative survival probability continues in a similar way until the end of the dataset.

Note that at the time points of 32, 33, 34, 37 and 39 years there were no events (n.event = 0), so the probability is unchanged. The above Kaplan-Meier life table is a slight modification of the classical demographic method, where the time interval is fixed (usually at every 5 years of age) and adjustment is made for incomplete information on the exact time of the event.
Kaplan-Meier curve
The summary of a survival object reveals many sub-objects.
> km1 <- summary(survfit(surv.marriage), censor=T)
> attributes(km1)
$names
[1] "surv" "time" "n.risk" "n.event" "conf.int" "std.err" "lower" "upper" "call"
$class
[1] "summary.survfit"
We can use this 'km1' object to plot 'time' vs 'surv', to produce a stepped line plot, which is called a 'survival curve' or 'Kaplan-Meier curve'.
> plot(km1$time, km1$surv, type="s")
[Plot: stepped Kaplan-Meier curve — survival probability (0-1) against time (up to about 40 years)]
If 'xlim=c(0, max(km1$time))' is added, the curve will be very similar to that produced by the standard command.
> plot(survfit(surv.marriage))
When there is only one curve plotted, the two 95% confidence interval lines and the time marks for censored subjects are included in the plot. To suppress them, they can be set to FALSE.
> plot(survfit(surv.marriage), conf.int=FALSE, mark.time=FALSE)
The vertical axis is survival probability and the horizontal axis is time. If a horizontal line were drawn at probability 50%, it would cross the survival curve at the point of the median survival time. If less than half of the subjects have experienced the event then the median survival time is undefined.
> abline(h=.5, lty=2, col="red")
In this dataset, the median survival time (age at marriage) is 29 years. This value is actually displayed when the 'survfit' command is executed.
> survfit(surv.marriage)
Call: survfit(formula = surv.marriage)
      n  events  median  0.95LCL  0.95UCL
     27      16      29       28      Inf
the cumulative rate, since it is relatively easy to perceive the change of rate from the slope of the cumulative curve.
> plot(survfit(surv.marriage), conf.int=FALSE, fun="cumhaz")
[Plot: cumulative hazard (0 to about 1.2) against time (up to about 40 years)]
In the first 25 years the slope is flat, due to the absence of events. From 25 to 31 years the slope is relatively steep, indicating a high rate of marriage during these years. The last steep rise occurs at 36 years. At the end of the curve the rate is not very precise, due to the small sample size in this time period. Survival summaries can be obtained for different levels of a factor variable by adding terms to the formula argument of the 'survfit' function. Multiple survival curves can also be shown in the same graph.
> survfit(surv.marriage ~ sex)
Call: survfit(formula = surv.marriage ~ sex)
            n  events  median  0.95LCL  0.95UCL
sex=male    9       6      30       26      Inf
sex=female 18      10      28       28      Inf

> summary(survfit(surv.marriage ~ sex))
> plot(survfit(surv.marriage ~ sex), col=c("red", "blue"), legend.text=c("male", "female"))
Note that the legend can also be created in the conventional way.
> plot(survfit(surv.marriage ~ sex), col=c("red", "blue"))
> legend(10, .4, legend=c("male", "female"), col=c("red", "blue"), lty=c(1, 1))
[Plot: Kaplan-Meier curves by sex — survival probability against time, with a legend for male and female]
When there are multiple survival curves, the 95% confidence interval lines are omitted.
With this small sample size, the difference can simply be explained by chance alone. The 'survdiff' command actually has 5 arguments, the last one being 'rho', which specifies the type of test to use. When rho = 0 (the default) the log-rank or Mantel-Haenszel chi-squared test is performed; this compares the expected number of events in each group against the observed values. If the difference between observed and expected values is large, the chi-squared value will be high and the P value will be small, indicating that the curves are significantly different. If rho = 1, the Peto modification of the Gehan-Wilcoxon test (sometimes called the Peto test) is performed, which places more weight on earlier events.
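A sketch of the test being discussed; the actual call and its output fall on the preceding page of the original:

> survdiff(surv.marriage ~ sex)            # log-rank test (rho = 0, the default)
> survdiff(surv.marriage ~ sex, rho = 1)   # Peto modification of the Gehan-Wilcoxon test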
Stratified comparison
There is a significant association between sex and education.
> cc(sex, educ)
          educ
sex        bach-  >bachelor  Total
  male         1          8      9
  female      12          6     18
  Total       13         14     27

OR = 0.07
95% CI = 0.001  0.715
Chi-squared = 7.418 , 1 d.f. , P value = 0.006
Fisher's exact test (2-sided) P value = 0.013
Females are seen to have a higher level of education. The effect of sex on survival with adjustment for education can be obtained as follows:
> survdiff(surv.marriage ~ sex + strata(educ))
Call:
survdiff(formula = surv.marriage ~ sex + strata(educ))
            N Observed Expected (O-E)^2/E (O-E)^2/V
sex=male    9        6     5.61    0.0266    0.0784
sex=female 18       10    10.39    0.0144    0.0784
 Chisq= 0.1  on 1 degrees of freedom, p= 0.779
The adjusted effect is not much different from the crude one. Lack of confounding in this case is due to the lack of independent effect of education on age of marriage. We will keep this working environment and return to work on it in the next chapter.
> save.image(file = "Marryage.Rdata")
References
Kleinbaum D, Klein M (2005). Survival Analysis: A Self-Learning Text.
Hosmer Jr D, Lemeshow S (1999). Applied Survival Analysis: Regression Modeling of Time to Event Data.
Exercises
The file Compaq contains data from a follow-up study on breast cancer in Europe evaluating whether patients in private hospitals ('hospital') had better survival ('year').
Problem 1.
Check the distribution of year of deaths and censoring.
Problem 2.
Draw Kaplan-Meier curves for each hospital group with censoring marks shown on the curves.
Problem 3.
Test the significance with and without adjustment for other potential confounders: age ('agegr'), stage of disease ('stage') and socio-economic level ('ses').
h(t, X) = h0(t) exp(Σ βiXi)
The left-hand side of the equation says that the hazard is influenced by time and by the covariates. The right-hand side contains h0(t), the baseline hazard function when all the Xi are zero. This baseline hazard is multiplied by e raised to the sum of the covariates weighted by their estimated coefficients, βi. Consequently,
h(t, X) / h0(t) = exp(Σ βiXi)
The left-hand side is the proportion, or ratio, of the hazard of the group with exposure X to the baseline hazard. The right-hand side is the exponential of the sum of products of the estimated coefficients and the covariate vector Xi, which is now independent of time, i.e. assumed constant over time. Thus exp(βiXi) is the increment of the hazard, or hazard ratio, due to the independent effect of the ith variable. Whenever there is an event, the conditional probability, or proportion of subjects among the different groups getting the event, is assumed constant. We will use the data from the preceding chapter to examine the independent effect of sex on the age of marriage.
> zap()
> library(survival)
Loading required package: splines
> load("Marryage.Rdata")
> attach(.data)
> model1 <- coxph(surv.marriage ~ sex)
> model1
===============================
            coef exp(coef) se(coef)      z    p
sexfemale -0.170     0.844    0.522 -0.325 0.74
The coefficient is negative and non-significant. The hazard ratio, exp(coef), is 0.844, suggesting an overall 16% reduction in the hazard rate of females compared to males. To obtain its 95% confidence interval, a summary of this 'coxph' object is necessary.
> summary(model1)
===============================
          exp(coef) exp(-coef) lower .95 upper .95
sexfemale     0.844       1.19     0.304      2.35
===============================
[Plot: survival-related curves for the two sexes against age (25-40 years), used to inspect the proportional hazards assumption]
The two curves cross more than once. It is difficult to judge from the graph whether the assumption has been violated. A formal test of the proportional hazard assumption can be carried out as follows:
> cox.zph(model1) -> diag1; diag1
              rho    chisq     p
sexfemale 0.00756 0.000883 0.976
The evidence against the proportional hazard assumption is very weak. This diagnostic result can be further explored.
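A sketch of the command assumed to produce the diagnostic plot below, showing beta(t) for 'sexfemale' against time:

> plot(diag1)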
[Plot: beta(t) for 'sexfemale' against time (26-32 years), with a fitted line]
This graph should be read along with the previous results earlier in the chapter where the events and the information of sex of the subjects are sorted by time.
> data.frame(age, sex, age.marriage, married, surv.marriage)[order(time),]
The first two events occurred in the 25th year where one male and one female got married. The hazard in 'diag1$y' is 1.43 and -2.92. In the 26th year, there were four events of two males (beta = -3.16) and two females (beta = 1.19). The duplicate values of beta result in a warning but this is not serious. Subsequent points are plotted in the same fashion. A line is drawn to pass through these betas to illustrate the level of stability of the coefficient over time. The probability of getting married for females is lower than for males when they are younger than 26 years or older than 29 years. In between, females have a higher probability of getting married. However, the test suggests that this finding can be simply explained by chance alone. For multiple covariates the same principle applies.
> model2 <- coxph(surv.marriage ~ sex + educ)
> model2
> summary(model2)
===================================================
              exp(coef) exp(-coef) lower.95 upper.95
sexfemale         0.831       1.20    0.230     2.99
educ>bachelor     0.975       1.03    0.278     3.42
===================================================
> cox.zph(model2) -> diag2; diag2
             rho   chisq     p
sexfemale 0.0246 0.00885 0.925
The test results are separated by each variable. Finally, a global test is performed showing a non-significant result.
> diag2$x                        # x coordinates for plotting time: same as diag1
> diag2$y                        # two columns, one for each variable
> plot(cox.zph(model2), var=1)   # for the first variable of y, or 'sex'
The coefficients of sex with adjustment for education were not much changed.
> plot(cox.zph(model2), var=2)
[Plot: beta(t) for 'educ>bachelor' against time (26-32 years)]
The hazard rate for marriage of persons who had a higher education rises at around 27-29 years. By the late twenties, they have a slightly higher chance of getting married than those with a lower education. The reverse is true for the remaining times. Again, these differences are not significant and can be explained by chance alone.
> use(Compaq)
> des(); summ(); codebook()
> surv.ca <- Surv(year, status)
> model3 <- coxph(surv.ca ~ hospital + stage + ses + agegr)
> summary(model3)
Call:
coxph(formula = surv.ca ~ hospital + stage + ses + agegr)
  n= 1064
                    coef exp(coef) se(coef)      z       p
hospitalPrivate  -0.4224     0.655    0.142 -2.971 3.0e-03
stageStage 2      0.7682     2.156    0.123  6.221 5.0e-10
stageStage 3      2.4215    11.263    0.156 15.493 0.0e+00
stageStage 4      1.3723     3.944    0.190  7.213 5.5e-13
sesHigh-middle   -0.0944     0.910    0.133 -0.712 4.8e-01
sesPoor-middle    0.0341     1.035    0.178  0.192 8.5e-01
sesPoor          -0.4497     0.638    0.144 -3.126 1.8e-03
agegr40-49        0.2574     1.294    0.164  1.569 1.2e-01
agegr50-59        0.4923     1.636    0.164  2.999 2.7e-03
agegr60+          1.4813     4.399    0.159  9.343 0.0e+00
Patients in private hospitals have two-thirds the risk (hazard) compared to those in public hospitals after adjustment for stage, socio-economic status and age. To check whether all three categorical variables deserve to be included in the model, the command 'step', meaning stepwise regression, can be used.
> step(model3)
Start:  AIC= 4854.56
 surv.ca ~ hospital + stage + ses + agegr

           Df    AIC
<none>         4854.6
- ses       3  4860.2
- hospital  1  4862.0
- agegr     3  4951.6
- stage     3  5059.9
===== Further output omitted due to redundancy ====
The level of AIC is lowest when none of the variables is removed. Therefore, all should be kept. Next the proportional hazard assumption is assessed.
> cox.zph(model3)
                     rho   chisq       p
hospitalPrivate  0.03946  0.6568 0.41768
stageStage 2     0.05406  1.1629 0.28086
stageStage 3    -0.09707  3.6786 0.05512
stageStage 4    -0.10222  4.2948 0.03823
sesHigh-middle   0.00968  0.0367 0.84818
sesPoor-middle  -0.04391  0.7612 0.38297
sesPoor          0.10409  4.4568 0.03476
agegr40-49      -0.07835  2.3831 0.12266
agegr50-59      -0.09297  3.2339 0.07213
agegr60+        -0.09599  3.5242 0.06048
GLOBAL                NA 23.3117 0.00965
The highest stage and the lowest socio-economic group contribute the most to the chi-squared statistic. The global test gives a significant P value suggesting that the assumption is violated. A possible solution is to do a stratified analysis on one of the categorical variables, say 'stage'.
> model4 <- coxph(surv.ca ~ hospital + strata(stage) + ses + agegr)
> cox.zph(model4)
                     rho  chisq      p
hospitalPrivate  0.04407  0.797 0.3720
sesHigh-middle   0.00801  0.025 0.8743
sesPoor-middle  -0.04857  0.920 0.3376
sesPoor          0.09747  3.785 0.0517
agegr40-49      -0.07366  2.097 0.1476
agegr50-59      -0.08324  2.565 0.1093
agegr60+        -0.08521  2.761 0.0966
GLOBAL                NA 10.297 0.1724
Using 'stage' as a stratification factor reduces all chi-squared values and the proportional hazard assumption is not violated.
> summary(model4)
Call:
coxph(formula = surv.ca ~ hospital + strata(stage) + ses + agegr)
  n= 1064
                   coef exp(coef) se(coef)      z      p
hospitalPrivate -0.4049     0.667    0.141 -2.866 0.0042
sesHigh-middle  -0.1078     0.898    0.133 -0.811 0.4200
sesPoor-middle   0.0374     1.038    0.179  0.209 0.8300
sesPoor         -0.4201     0.657    0.144 -2.926 0.0034
agegr40-49       0.2532     1.288    0.164  1.542 0.1200
agegr50-59       0.4703     1.600    0.165  2.857 0.0043
agegr60+         1.4514     4.269    0.159  9.141 0.0000
The coefficients of 'model4' are quite similar to 'model3'. Note the omission of the 'stage' terms. Stratified Cox regression ignores the coefficients of the stratification factor. Since our objective is to document the difference between types of hospital, the coefficients for other variables are not seriously required if the covariates are well adjusted for.
References
Kleinbaum D, Klein M (2005). Survival Analysis: A Self-Learning Text.
Hosmer Jr D, Lemeshow S (1999). Applied Survival Analysis: Regression Modeling of Time to Event Data.
Exercises
Problem 1. Could the other 2 variables (socio-economic status and age) be used as a stratification factor?
Problem 2. Use the command 'plot(cox.zph())' for 'model3' and 'model4' to check the change of the hazard ratio of private hospital over time. Discuss the pattern of residuals.
Sample size calculation is very important for an epidemiological study. For most surveys, the population size is large, consequently the costs involved in collecting data from all subjects would be high. In clinical studies, recruiting too many subjects into the study not only causes management and financial problems but also raises ethical concerns. If a conclusion can be drawn from a small sample size, recruiting more subjects than necessary may pose an unnecessary risk to the group of subjects whose treatment is known to be inferior. On the other hand, a survey with a sample size that is too small will not be able to detect a statistically significant effect if there truly is one.
Field survey
The aim of a field survey is usually to document the prevalence in the population on a certain condition such as helminthic infection, or coverage of a health service such as an immunization programme. The sample size required depends on the estimated prevalence and the level of errors of prevalence that the researcher can accept. For many circumstances, cluster sampling is employed. The advantage of this sampling method is that it reduces the time and budget for travelling to collect
data. For example, a simple random sampling may require 96 persons from 96 different villages to be surveyed. This can place a heavy burden on travelling resources. Instead, the number of villages can be reduced to, say, 30, and the sample size compensated for by recruiting more subjects from each selected village. The slight increase in sample size is more than offset by the large reduction in travelling costs. The cluster sampling technique, however, encounters another problem: people in the same village often tend to be more similar to each other than to people from other villages in terms of disease risk, coverage of services, etc. In other words, subjects selected from the same cluster are usually not 'independent'. Therefore the sample size estimated from a simple random sampling technique must be inflated to cover this 'alikeness among the same cluster' (or 'design effect') problem. The function 'n.for.survey' in Epicalc is used for the calculation of the sample size for a survey. To have a look at the arguments of this function type:
> args(n.for.survey)
function (p, delta = 0.5 * min(c(p, 1 - p)), popsize = FALSE, deff = 1, alpha = 0.05)
The arguments to this function are as follows:

p  The estimated prevalence, as a proportion between 0 and 1.

delta  The difference between the estimated prevalence and the margin of the confidence interval. For example, if p is estimated to be 30% but we still accept that the maximum error can result in 50% prevalence, then 'delta' is 0.5 - 0.3 = 0.2. If 'delta' is not given, the default value is half of either p or 1-p, whichever is smaller. In general, delta has more influence on the sample size than p. When p is small, 'delta' should be smaller than p; otherwise the lower limit of the confidence interval will be negative or the upper limit will be higher than 100%, both of which are invalid. The default value is therefore quite acceptable for a rather low prevalence (say, below 15%) or a rather high prevalence (say, above 80%). If the prevalence is in between these values, then half of p (or 1-p) would be too imprecise and the user should give a smaller 'delta'.

popsize  Finite population size, i.e. the size of the population in which the survey is to be conducted. A small population size will require a relatively smaller sample size. If the value is FALSE, it is ignored and the population is assumed to be very large. Once the size exceeds a certain value, say 5000, any further increase has little effect on the sample size.

deff  The design effect, which is the adjustment factor for cluster sampling as explained above. By definition, for simple random sampling, 'deff' is 1. In cluster sampling with a large cluster size and a high level of similarity among subjects in the same cluster, 'deff' can be large, and so will be the required sample size.

alpha  Probability of a Type I error. In standard situations, 'alpha' is set at 0.05 and the confidence interval of p ± delta is the 95% confidence limit of the prevalence. With higher accuracy demands, for example a 99% confidence limit, the required
sample size will be increased. If a survey is to be conducted with a small (less than 15%) prevalence, in a large population, all the default values of the arguments can be accepted. The command then becomes:
> n.for.survey(p=.05)
Sample size for survey.
Assumptions:
  Proportion       = 0.05
  Confidence limit = 95 %
  Delta            = 0.025 from the estimate.

Sample size = 292
The function sets the 'alpha' value at 0.05, since it was omitted; thus the confidence limit is 95%. The argument 'delta' is automatically set to half of 5%, or 0.025. The design effect, 'deff', is not given and so is set at 1. The population size is assumed to be very large and is thus not used in the calculation of the sample size. In conclusion, the function suggests that if a 95% confidence limit of 5% ± 2.5% (i.e. from 2.5% to 7.5%) is desired for an estimated proportion of 0.05 in a large population, then the sample size required is 292.

If the prevalence is low, 'deff' for cluster sampling is usually close to unity, and the sample size calculated is still reasonably applicable even if cluster sampling is employed, because of the small prevalence. If the estimated prevalence is close to 50%, a delta of 25% is too large; it is better to reduce it to ±5% or ±10% of the prevalence. If cluster sampling is employed under such a condition, the value of 'deff' is usually greater than one. For example, in standard 30-cluster sampling for assessment of immunization coverage, where the prevalence is estimated to be near 80%, 'deff' should be around 2. The population size in this case is usually large and a 99% confidence limit is required instead of 95%. In this case, the suggested calculation would be:
> n.for.survey(p=.8, delta=.1, deff=2, alpha=.01)
Sample size for survey.
Assumptions:
  Proportion       = 0.8
  Confidence limit = 99 %
  Delta            = 0.1 from the estimate.
  Design effect    = 2

Sample size = 212
With this total sample size of 212 and 30 clusters, the average size per cluster would be 212/30 = 7 subjects. This sample size would be used for a standard survey to assess immunization coverage in developing countries.
226
In a case-control study, the proportion (p1) of subjects exposed to a risk factor among the cases (diseased group) is compared against the proportion (p2) of subjects exposed among the controls (non-diseased group). In a cohort study, the probability (p1) of getting a disease among the exposed group is compared to the probability (p2) among the non-exposed group. In a randomised controlled trial, the probability (p1) of getting cured (or improving) among subjects given a new treatment is compared with the probability (p2) of getting cured (or improving) among subjects given the old treatment. Alpha is the probability of committing a Type I error. If the two groups actually have the same proportion at the population level (the null hypothesis is true), with the sample size from this calculation, there will be a chance of 'alpha' that the null hypothesis will be rejected. In other words, the difference in the two samples would be erroneously decided as statistically significant. As before, it is common practice to set the alpha value at 0.05. Power is the probability of rejecting the null hypothesis when it is false. In this situation it is the probability of detecting a statistically significant difference of proportions in the population, which is in fact as large as that in the sample. It is quite acceptable to have the power level set at 80%. Scientists allow a larger probability for a type I error than for type II error. Rejecting a new treatment that is actually better than an old one may probably be considered less serious than replacing the old treatment with a new one which in fact not better. For these three types of studies, the most efficient sample size (smallest size of total sample that can test the hypothesis) is achieved when the ratio between the two stratified groups is 1:1. For example, if the collection of data per subject is fixed, comparing two groups of treatment each of 50 subjects is much better than comparing 5 subjects in one group against 95 subjects in the other. In certain conditions, such as when a very rare disease is under investigation, it might be quicker to finish the study with more than one control per case. In addition, in a cross-sectional study, the status of a subject on exposure and outcome is not known from the beginning; the sample is non-contrived. The ratio cannot be set at 1:1 but will totally depend on the setting. Under these conditions where the ratios are not 1:1, the value of the ratio must be specified in the calculation. If a risk factor were expected to be as common as 50% among the diseased group
and 20% among the control group, the sample size for this case control study would be:
> n.for.2p(p1=.5, p2=.2)

Estimation of sample size for testing Ho: p1==p2
Assumptions:

     alpha = 0.05
     power = 0.8
        p1 = 0.5
        p2 = 0.2
     n2/n1 = 1
The use of this function is not complicated, as only p1 and p2 need to be input. The other arguments will be set to the default values automatically. In conclusion, only 45 cases and 45 controls are needed to test the hypothesis of association. If the disease is rare, say only 10 cases per year, and the researcher wants to complete the study early, he/she may increase the case:control ratio to 1:4.
> n.for.2p(p1=.5, p2=.2, ratio=4)

Estimation of sample size for testing Ho: p1==p2
Assumptions:

     alpha = 0.05
     power = 0.8
        p1 = 0.5
        p2 = 0.2
     n2/n1 = 4
Note that the ratio is n2/n1. This study can be finished in less than 3 years instead of the original 4 years. Increasing the ratio above this has only a small effect on reducing the number of cases but a remarkably large effect on increasing the number of controls. For example, a ratio of 1 case per 9 controls will reduce the required sample size to 23 cases (4 cases fewer) but increase the number of controls required to 207 (an increase of nearly 100). An increase in power from 0.8 to 0.9 also increases the required sample size considerably. Fixing the ratio at 1:1 and increasing the power to 0.9, the calculation can be repeated.
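A call along the following lines would give the new requirement (a sketch; only the 'power' argument differs from the earlier call):

> n.for.2p(p1=.5, p2=.2, power=0.9)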
The output is omitted; however, 58 cases and 58 controls are required (an increase of about 29% in the sample size required on both arms).
Setting up p1 and p2 for calculation of sample size for a case control study is straightforward. However, in some instances, there may be a demand to compute the sample size based on proportion of exposed in the general population (which is equal to the proportion among the controls due to the rarity of the disease) and the odds ratio. In other words, p2 and odds ratio are given. It remains necessary then to find p1. For example, if the proportion of exposures among the population (p2) is equal to 30%, and the odds ratio is 2, the proportion of exposures among the cases (p1) and the required sample size can be calculated as follows:
> p2 <- .3
> or <- 2
> odds2 <- p2/(1-p2)
> odds1 <- or*odds2
> p1 <- odds1/(1+odds1)
> p1
[1] 0.4615385
> n.for.2p(p1, p2)

Estimation of sample size for testing Ho: p1==p2
Assumptions:

     alpha = 0.05
     power = 0.8
        p1 = 0.4615385
        p2 = 0.3
     n2/n1 = 1
The required sample size is larger than in the preceding example because the odds ratio to be detected is closer to unity. In other words, the level of difference to be detected is smaller.
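Suppose, for example, that a cross-sectional study is planned in which the prevalence of exposure is about 20% (so there are roughly four non-exposed subjects for every exposed one), and the probability of disease is expected to be 20% among the exposed and 5% among the non-exposed. The calculation would take a form such as the following (a sketch reconstructed from the output shown below):

> n.for.2p(p1=.2, p2=.05, ratio=4)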
Estimation of sample size for testing Ho: p1==p2
Assumptions:

     alpha = 0.05
     power = 0.8
        p1 = 0.2
        p2 = 0.05
     n2/n1 = 4
The total sample size for this cross-sectional survey to test the hypothesis is 240 subjects. This will include 48 exposed and 192 non-exposed persons. This required sample size should be checked for adequacy of the other objective, i.e. to describe the prevalence of exposure, which is estimated to be 20%.
> n.for.survey(p=.2)

Sample size for survey.
Assumptions:
  Proportion       = 0.2
  Confidence limit = 95 %
  Delta            = 0.1 from the estimate.
  Sample size      = 61
The required sample size of the descriptive study is smaller than that for hypothesis testing. Thus, the latter (of 240 subjects) should be adopted.
Intuitively, the notation is straightforward. There are four compulsory arguments that a user must supply to the function, namely the two means and their corresponding standard deviations.
As an example, suppose a new therapeutic agent is expected to reduce the mean pain score from 0.8 to 0.6 in a group of subjects and the expected corresponding standard deviations are 0.2 and 0.25. To calculate the required sample size, type the following command:
> n.for.2means(mu1=.8, mu2=.6, sd1=.2, sd2=.25)

Estimation of sample size for testing Ho: mu1==mu2
Assumptions:

     alpha = 0.05
     power = 0.8
       mu1 = 0.8
       mu2 = 0.6
       sd1 = 0.2
       sd2 = 0.25

Estimated required sample size:
        n1 = 21
        n2 = 21
   n1 + n2 = 42
This anaesthesiological experiment would require 21 subjects in each group. In fact, the mathematical formula for the calculation of the sample size does not require the exact values of mu1 and mu2. If the difference in means and the standard deviations are fixed, changing the two means will have no effect on the calculated sample size. Thus the same results are obtained from the following command (output omitted).
> n.for.2means(mu1=.4, mu2=.2, sd1=.2, sd2=.25)
Health systems adopt LQAS mainly for surveillance of proportion of problems. For example, in the process of quality assurance of anti-TB drugs in southern Thailand, content assays and dissolution tests of the drug are rather expensive. The LQAS method was employed to calculate the minimal sample size that is still sufficient to test whether the quality is acceptable. Suppose a highest acceptable proportion of defective specimens is set at 1 percent. If the study suggests that the actual proportion is at this level or less, then the lot is accepted. Otherwise, the whole lot will be rejected. The actual proportion (whether it be higher or lower than this acceptable level) is not important. If the sample size is too small, say 20, then even if all randomly selected specimens were accepted, it would still not be certain that less than 1% of the whole lot was defective. If the sample size is too big, say 1000, then even if the percent defective is within the reasonable level, you have wasted all those specimens that were tested. This large sample size is excessive. With an optimal sample size, should any of the randomly selected specimens be defective, the acceptable proportion of the whole lot would be expected to be exceeded. One of the easiest ways to understand this is to look at the computation results.
> n.for.lqas(p=.01)

Lot quality assurance sampling

                            Method = Normal approximation
                   Population size = 10000
 Maximum defective sample accepted = 0
    Probability of defect accepted = 0.01
                             Alpha = 0.05
              Sample size required = 262
From this computation, the threshold for the defective proportion (p) is set at 1%. The final sample size is 262. The lot size is assumed to be 10,000 by default. The maximum defective sample accepted is 0 (again the default). This means that if any of the 262 specimens is defective, the proportion of 1% is considered to be exceeded and the lot is rejected. With this sample size, the researcher would take a random sample of 262 specimens and examine each one. If all of the specimens pass the tests, the remaining lot of 10,000 - 262 = 9,738 specimens can be marketed. Otherwise, all 10,000 will be rejected. There are a few parameters controlling the sample size here. Alpha (the type I error rate) is usually set at 5%. This means that if the null hypothesis (the defective percentage is less than 1%) is true, there is a 5% chance that there would be at least one defective specimen among the whole sample of 262. If alpha is set to a stricter criterion, say 2.5%, the sample size will increase. The threshold proportion for a sample being accepted varies inversely with the sample size: if the threshold is increased, say to 3%, the required sample size would be reduced (only 87 would be needed).
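The reduced requirement at a 3% threshold can be checked with a call such as the following (a sketch, with the same defaults assumed):

> n.for.lqas(p=.03)   # sample size required = 87, as stated above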
The maximum defective sample accepted is set at 0 by default in order to minimize the sample size. In theory, this can be any number. However, the larger the number is, the larger the required sample size.
The odds ratio of 0.75 has a rather wide confidence interval. It might be of interest to know the power of the sample size for this particular study if the odds ratio is in fact 0.5 and the failure rate among the placebo group is the same.
> odds.placebo <- 20/30
> odds.treat <- .5 * odds.placebo
> p.placebo <- 20/50
> p.treat <- odds.treat/(1+odds.treat)
> power.for.2p(p1=p.treat, p2=p.placebo, n1=105, n2=50)

     alpha = 0.05
        p1 = 0.25
        p2 = 0.4
        n1 = 105
        n2 = 50
     power = 0.4082
The sample size used in this study only had a 40% chance of finding a significant difference given that the treatment had an odds ratio of 0.5. The study was inconclusive. Note that the power depends on the size of difference to be detected. To obtain statistical significance for a large difference would require a smaller sample size than that for detecting a small difference if the power was the same.
With this relatively large sample size, the power to detect a difference of 5 points of IQ under these assumptions is approximately 90%.
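A command of the following form would give this power estimate together with the accompanying graph (a sketch; the means, standard deviations and group sizes are taken from the graph annotation):

> power.for.2means(mu1=95, mu2=100, sd1=11.7, sd2=10.1, n1=100, n2=100)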
[Figure: power curve for the comparison of two means, plotted against mu2-mu1; annotation: mu1 = 95, mu2 = 100, sd1 = 11.7, sd2 = 10.1, n1 = 100, n2 = 100, Power = 0.8988.]
Exercises

Problem 1.
Calculate the maximum sample size required to estimate the prevalence of respiratory tract infection, with a precision of 5%, in a target population consisting of children aged 1-5 years in a particular region of a developing country.
Problem 2.
A case-control study is carried out to determine the efficacy of a vaccine for the prevention of childhood tuberculosis, compared with a placebo. Assume that 50% of the controls are not vaccinated. If the numbers of cases and controls are equal, what sample size is needed to detect, with 80% power and 5% type I error, an odds ratio of at least 2 in the target population?
Problem 3.
A randomised trial is to be conducted comparing two new treatments aimed at increasing the weights of malnourished children with a control group. The minimal worthwhile benefit is an increase in mean weight of 2.5 kg, and the standard deviations of weight changes are believed to be 3.5 kg. What are the required sample sizes, assuming that the control group is twice as large as each of the two treatment groups and an 80% power is required for each comparison?
Data can be analysed interactively as shown in the previous chapters or in a batch mode as shown in this chapter.
The basic function used to read in the dataset is 'read.table' from the R base library. For other data file formats, type 'help.start()', choose 'Packages' and then 'foreign'. Explore the class and description of variables using des(). Quickly explore summary statistics of the variables using summ(). Explore each variable one at a time using the command summ(varname). Pay attention to the minimum and maximum, and look at the graph to see a detailed distribution. Explore categorical variables using codebook() and tab1(varname). Save the commands that have been typed using 'savehistory("...")'. The saved file should have a '.r' or '.rhistory' extension; it stores all the commands that have been typed in, and these commands will be used for further analysis. Note that 'varname' and 'filename' in the above list should be replaced with the appropriate variable name and file name. Commands typed in during the interactive mode often contain mistakes. Since these commands will be reused in the future, they should be 'cleaned up' using an appropriate text editor. The next step is to open the saved file with a text editor. Tinn-R and Crimson Editor are recommended.
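Put together, such an initial interactive session (before cleaning up) might look like the following; the file and variable names here are purely hypothetical:

> data1 <- read.table("myData.txt", header=TRUE)   # hypothetical file name
> use(data1)
> des()
> summ()
> summ(age)        # 'age' is a hypothetical variable name
> codebook()
> tab1(sex)        # 'sex' is a hypothetical variable name
> savehistory("myAnalysis.r")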
Crimson Editor
There are many good text editors available for editing a command file. A good one should be able to show line numbers and matching brackets. The Notepad program that comes with Windows does not have these features and is thus not suitable for working with a long command file. The current recommended programs are Crimson Editor and Tinn-R, which are both public domain software. Instructions for installation are given in Chapter 1. Use Windows Explorer to create a new text file. By default, Windows will offer to name the file, say 'New Text Document.txt'. Do not accept this name. Instead choose a name appropriate to the purpose of the file, such as 'Chapter1' or 'HIV' and make sure you include the '.R' or '.r' extension. Double click this new file. If your computer's file associations have been successfully set up, your computer should open it with either Crimson Editor or Tinn-R. If not, right click and choose 'Open with' then choose Crimson Editor (cedt.exe) or Tinn-R (Tinn-R.exe). The following section is specific for Crimson Editor only. You may use this newly created file to customise Crimson Editor menus and file preferences. Choose 'View', 'Tool bars/Views'. Check 'Tool bar', 'MDI file tabs' and 'Status bar'. If you want to know what each of these does, just uncheck them one by one.
Note that Crimson Editor can have multiple files opened simultaneously. Any file that has been changed but not yet saved will have a red dot in its MDI File tab. This turns green once the file has been saved. From the menu bar select 'Document', 'Syntax types'. See if R is in the list of known file types. If not, select 'Customize...' at the very bottom of the list. The 'Preference' dialog box will appear with 'Syntax Type' highlighted under the 'File' option. In the list of Syntax Types, scroll down until you see the first '-Empty-' position and select it with the mouse. Position the cursor in the 'Description' text box and type R. Next to 'Lang Spec', type 'R.spc', and for 'Keywords' type 'R.key'. Finally, click 'OK'. Language specification and key words for the R program will be available for any file opened with Crimson Editor. But the R command file is still not automatically associated with Crimson Editor yet. The user needs to activate this by clicking 'Document', 'Syntax types' and selecting 'R' from the list. Finally, for the line numbers, click 'Tools' from the menu bar, then 'Preferences...'. In the Preferences box, highlight 'Visual'. Check 'Show line numbers', 'Highlight active line' and 'Highlight matching pairs'.
Tinn-R
The advantage of using Tinn-R over Crimson Editor is its ability to interface or interact with R itself. Users can type the commands into the Tinn-R editor and send them to the R console line by line, in blocks of lines, or even the whole command file. Tinn-R has many other nice features similar to Crimson Editor that make working with R easier and more convenient. Viewing line numbers is strongly recommended. This can be set under the View menu. Those who like to use the function keys instead of the mouse can set the 'hotkeys' of R, under the R menu. The author's preference is to set F2 for sending a single line, F4 for sending the selected block, F5 for sending the current whole command file without prior saving and F6 for saving the file and sending as 'source'. The function key F3 is reserved for searching (Find again).
Remove any duplicate commands. Check the structure of the commands. Make sure it includes the proper order of key commands as suggested above (with 'zap()', 'use()', etc). If you use Crimson Editor, you may copy blocks of commands and paste them into the R console. If you use Tinn-R, you can simply highlight the commands that you want to send to R and, using the mouse, click on the send icon (or press the hotkey) to perform the operation. Copying and pasting has the advantage of seeing different colours of commands (red) and output (blue) on the R console. However, any mistake or error in the middle of a large block of commands may escape notice. If the block of commands contains an error, then saving and sending commands as source will stop at the line containing the first error. For example,
Error in parse(file, n = -1, NULL, "?") : syntax error at
3: library(nlme
4: use("Orthodont.dta")
Common syntax errors include unmatched brackets, unmatched or missing quotes and missing commas between function arguments. The report on syntax errors usually includes the line numbers of the (first) error. In the above example, the error occurs at line 3 (missing closing bracket). Simply return to the command file and make the appropriate correction. Even when all syntax errors have been removed, there may remain other types of command errors, such as typing mistakes in commands, objects not found or files not being able to be opened. In these situations, the console will show the results up to the error line; however, the line number will not be given. Switch back to the command file and correct the error then return to the R console and rerun the command 'source("filename.r", echo=TRUE)'. The lines that need to be skipped by R, such as the author's comments or commands that the analyst wants to skip for the time being, can begin with '#'. It is highly recommended that comments be included throughout the command file to enable other readers to follow easily. See the example command files that come with the software for more details. The amount of commands typed into the command file should be optimal. It is a good practice to have the newly added command lines containing one set of related actions. For example, commands to create a new categorical variable from a continuous variable and to check the distribution of this new variable (using 'tab1(newvar)') should be kept together. Executing the command file at this stage will allow the analyst to check this part of the results instantly. Once the new variable is assured, the line 'tab1(newvar)' may not be necessary and can be subsequently deleted or skipped by placing a '#' before it. One of R's advantages is in its graphing capabilities. Graphing however can involve many steps and may require the addition of extra graphical parameters. It is a good idea to start with a simple graph in a command. Other parameters such as 'pch'
(point character), 'lty' (line type), 'xlab' (X-axis label), 'col' (colour) etc, can be added in the next round of command editing. Eventually, a good graph may need several lines of commands to produce it.
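For example, one might begin with a bare scatter plot and refine it in a later round of editing; the variable names here are hypothetical:

> plot(age, sbp)                                   # first version: keep it simple
> plot(age, sbp, pch=18, col="blue", xlab="Age (years)", ylab="Systolic BP")   # refined later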
Control Flow
The strength of R is in its programming facilities. Looping using the 'for()' construct is very powerful. For example, if 100 lines are to be drawn joining systolic ('sbp') and diastolic blood pressure ('dbp') of the same 100 individuals, without the 'for()' loop, the programmer would need to type 100 commands. In R these commands can be nested within a 'for()' loop. (See the exercise at the end of chapter 12). Sort all variables by the order of SBP.
> sortBy(sbp)
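The loop itself can then be written along the following lines (a sketch, consistent with the solution to the Chapter 12 exercise given later in this book):

> plot(sbp, ylim=c(0, max(sbp)), pch=" ", ylab="blood pressure")   # empty frame
> for(i in 1:length(sbp)) {
    lines(x=c(i, i), y=c(sbp[i], dbp[i]))
  }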
The last command draws 100 lines, one for each individual. When 'i' is equal to 1 the line connects sbp and dbp of the first person. When 'i' is 2, the same applies to the second person, and so forth until the 100th subject.
To bypass a large section of commands temporarily, insert one line containing 'if(1==2){' just before the to-be-bypassed section, and one line with a closing curly bracket
}
at the end of that section. Since 1 is not equal to 2 the whole section contained by the curly brackets will be skipped. The main problem with this method is finding and removing the matching curly brackets when the bypass is no longer required and the command file has been unused for a long time. Crimson Editor and Tinn-R have a highlighting facility for matching brackets but the opening and the closing ones sought may be very far apart with several other curly brackets nested inside. To prevent this confusion, several blank lines should be inserted before the command line 'if(1==2){' and after the matching closing bracket. These blank lines will make the skipped section easily visualised.
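Such a bypassed section might look like the following sketch (the commands inside the curly brackets are hypothetical):

if(1==2){
  # --- section temporarily skipped ---
  tab1(agegr)
  summ(sbp)
}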
Issuing the command 'sink("myFile.txt")' diverts all the subsequent output text to a file named 'myFile.txt'. See 'help(sink)' for more details about the usage of this function. To return to interactive mode, i.e. to stop diverting output to the file, issue the command 'sink()'. The use of 'sink' may be incorporated into the command file or carried out manually. A complication of using 'sink' arises when there is an error in subsequent commands. Since the results are diverted to a file, not to the screen, the user will not recognize the error and the process will stop in the middle of confusion. If this happens, the solution is to type 'sink()' at the console. This will return the route back to the screen. The errors can then be investigated in the output file. To prevent this, 'sink' should be used only when all the commands in the command file have been tested to be error free, e.g. no 'xxx' allowed. The command 'sink(file = "myFile.txt")' can then be placed at the beginning of the command file and 'sink()' placed at the end of the file. Then submit the command file to R with the command 'source("command file")'. Perhaps the simplest and best method to save the text output is to click 'File' at the menu bar and choose 'Save to File...'. This will save all output currently in the console to a text file. The default destination file is 'lastsave.txt' but this can easily be changed.
Note: This last method will not save output if the 'clear console' command has been issued. In addition, there is a limit to the number of lines that can be saved. R limits the console window to the last 5,000 or so lines that can be saved. Therefore use this method only if your output is not very long.
Saving a graph
Routing of a graph to a file is simpler than routing the output text. Copying a graph to the clipboard and then pasting it to a program such as a text document or a PowerPoint presentation slide is simple. Click at the graph window and choose 'File' from the menu bar and 'Copy to the clipboard'. Choose as a Bitmap or Metafile if the destination software can accept this format. A Metafile is slightly smaller in size and has a sharper line. The Bitmap format may not have a sharp picture when the size is enlarged. Alternatively, the graph can be saved in various other formats, such as JPEG, postscript or PDF. To save a graph when commands are run from a source file, simply type 'xxx' after the graphing command to halt further execution of commands. Then copy or save the graph as mentioned above. Alternatively, instead of showing the graph on the screen, the graph can be routed to a file by issuing one of the following graphics device commands:
bmp("filename.bmp") jpeg("filename.jpg") png("filename.jpg") win.metafile("filename.wmf")
Each of these commands sets up the graphics device and must be followed by a command that creates the actual graph. After the commands that create the graph have been executed, it is important that the device is turned off in order to write the graph contents to the file and reroute future graphical output to the screen.
dev.off()
This rerouting method is useful because the whole process of the command file need not be interrupted in the middle by the method mentioned in the preceding paragraph. The concept of turning the graphics device off after creating the graph is similar to using the 'sink' command, which requires a final sink() to save and close the file. The commands below create a summary graph of the variable 'age' from the Outbreak dataset in Epicalc. The graph is routed to a file called graph1.jpg.
> zap()
> data(Outbreak)
> use(Outbreak)
> jpeg("graph1.jpg")
> summ(age)
> dev.off()
The re-routing process can be done either interactively or inside a command file if there are no mistakes inside the graphics commands.
The datasets given in the Epicalc package and used in this book are relatively small, both in number of records and the number of variables. In real life, a data analyst often faces over 50 variables and several thousand records. The requirements for such analytical processing include a large amount of computing memory, fast CPU, large hard disk space, and efficient data handling strategies. Without these requirements, data analysis may take too long or may not even be possible.
Clearing R memory
R can handle many objects in one session. If the amount of memory is limited, it is a good practice to clear all unnecessary objects from the working environment and detach from all unnecessary data frames. Therefore, it is advisable to start any new project with the zap() command.
> zap()
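For the demonstration in this chapter, a large data frame with 30,000 records and 161 variables is needed. One way to create such a data frame is sketched below (any construction giving an 'id' column plus 160 standard normal variables would do):

> data1 <- data.frame(id = 1:30000, matrix(rnorm(30000 * 160), ncol = 160))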
The first variable is called 'id'. The naming of the remaining 160 variables can be achieved using two nested for loops and the built-in R constant 'letters', which consists of the lower-case letters of the English alphabet. The outer loop generates the first character of the variable names ('a' to 'h'). The inner loop then pastes the numbers 1 to 20 to these letters, separating the letters and numbers with a full stop.
> namesVar <- NULL
> for(i in letters[1:8]) {
    for(j in 1:20) {
      namesVar <- c(namesVar, paste(i, j, sep="."))
    }
  }
> names(data1)[2:161] <- namesVar
Then give a variable description to each variable, using the attr function. This process should only take a few seconds, depending on the speed of your computer.
> attr(data1, "var.labels")[1] <- "ID number"
> for(i in 2:161) {
    attr(data1, "var.labels")[i] <- paste("Variable No.", i)
  }
> use(data1)
Only the first 10 variables, their class and description will be shown. Then we move to see the next twenty.
> des(select=21:40)
... and so forth. Glancing at about 20 variables at a time will allow users to see the variable descriptions more carefully, without having to scroll up and down the screen. If one wants to see only the variables names that start with "a", type:
> des(select="a*")
In this case, there are 20 of them. To look at the variable descriptions of variables starting with "a." followed by only one character, type:
> des(select="a.?")
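Apart from selecting variables, keepData can also reduce the number of records. A random subset of 300 records, as described below, would be kept with a command of the following form (a sketch; the argument value is taken from the text that follows):

> keepData(sample=300)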
The data frame '.data' will be changed from having 30,000 to having only 300 records with the same number and description of variables, as can be seen from
> des(.data)
which suggests that .data is just a subset of the original one. If one wants to use the original data frame, simply type
> use(data1)
An alternative to specifying the number of records to randomly keep is to specify a percentage of the original records. This is done by specifying a number between 0 and 1 for the 'sample' argument.
> keepData(sample=0.01)
The above command would again keep only 300 (1%) of the original number of records. The criteria for keeping records can also be specified using the 'subset' argument:
> keepData(subset=a.1 < 0)
You will see a reduction of the total records but not the variables.
> des()
The reduction is about a half since the variable 'a.1' was generated from a standard normal distribution, which has a mean of 0 and is symmetric about this mean. This method of selecting subsets of records can be applied to a real data frame, such as keeping the records of only one sex or a certain age group.
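For instance, with a hypothetical factor variable 'sex' in a real data frame, the records of one sex could be kept with:

> keepData(subset = sex == "F")   # 'sex' and the level "F" are hypothetical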
Data exclusion
The keepData function can also be used to exclude variables. Return to the original data frame and exclude the variables between 'a.1' and 'g.20'.
> use(data1)
> keepData(exclude = a.1:g.20)
> des()
Variables from 'a.1' to 'g.20' have been excluded but note that the number of records remains the same. To exclude the last 10 items of each section, the wildcard feature of Epicalc can be
exploited.
> use(data1) > keepData(exclude = "????") > des()
All the variables with a name of length four characters have been removed. As mentioned before, if the size of the data frame is large, the analyst can choose one or more of the above strategies to reduce the size. Further analysis can then be carried out more quickly. If all the commands are documented in a file as suggested by the previous chapter, and the commands are well organized, the first few lines of the file can then be edited to use the full original data frame in the final analysis.
Solutions to Exercises
Chapter 1
Problem 1
> p <- 0.3
> delta <- 0.05
> n <- 1.96^2*p*(1-p)/delta^2; n   # 322.6944
Note that in R the function 'c' is used to combine values into a vector. You will discover that this function is very useful and is used throughout this book.
Chapter 2
Problem 1.
> sum(1:100*1:100)
[1] 338350
# or sum((1:100)^2)
Problem 2.
> x <- 1:1000
> x7 <- x[x/7==trunc(x/7)]
> sum(x7)
[1] 71071
# or x7 <- x[x%%7==0]
Problem 3.
> ht <- c(120,172,163,158,153,148,160,170,155,167)
> names(ht) <- c("Niece", "Son", "GrandPa", "Daughter", "Yai", "GrandMa", "Aunty", "Uncle", "Mom", "Dad")
> wt <- c(22,52,71,51,51,60,50,67,53,64)
> bmi <- wt/(ht/100)^2
> sort(bmi)
   Niece      Son    Aunty Daughter      Yai
15.27778 17.57707 19.53125 20.42942 21.78649
     Mom      Dad    Uncle  GrandPa  GrandMa
22.06035 22.94812 23.18339 26.72287 27.39226
> summary(bmi)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  15.28   19.76   21.92   21.69   23.12   27.39
> sd(bmi)
[1] 3.742951
In conclusion, `Niece' has the lowest BMI at 15.27 kg/m2 and `GrandMa' has the highest BMI of 27.39 kg/m2. The average of the BMI is 21.7 kg/m2 and the standard deviation is 3.7 kg/m2.
Chapter 3
Problem 1 There is more than one correct method. First method
> a1 <- rbind(1:10, 11:20) > a1
Second method
> a2 <- matrix(1:20, nr=2, byrow=TRUE) > a2
Third method
> a2 <- t(matrix(1:20, ncol=2))   # one possible third method: transpose a 10 x 2 column-wise matrix
> a2
Problem 2
> a1[,seq(from=1, to=10, by=2)]
Problem 3
> table1 <- cbind(c(15,30), c(20,22)); table1
> rownames(table1) <- c("Exposed","Non-exposed")
> colnames(table1) <- c("Diseased","Non-diseased")
> table1
> help(chisq.test)
> help(fisher.test)
> chisq.test(table1)                  # with Yates' continuity correction
> chisq.test(table1, correct=FALSE)   # without
> fisher.test(table1)                 # default alternative is "two.sided"
> fisher.test(table1, alternative="greater")
> fisher.test(table1, alternative="less")
Chapter 5
Values of individual elements on the scale
  Dotchart: The original values are all kept.
  Dotplot:  Each value is forced to fall into one of the bins.
  Boxplot:  Only the outlying values are displayed. Others are grouped into parts of the box.

Power to discriminate different values
  Dotchart: Discrimination power is high. Even a small difference can be noticed if the sample size is not large.
  Dotplot:  Since adjacent values are often forced into the same bin, the power of discrimination is lost.
  Boxplot:  Poor discrimination power as most of the dots disappear in the box.

Perception for frequency distribution of the values
  Dotchart: Empty space in the graph promptly conveys the information that there is no data in the area. Flat or slow rising indicates low frequency whereas sharp or steep rising indicates high frequency. Viewers must be educated to give proper interpretation.
  Dotplot:  Information on relative frequency is best conveyed by this graph. No need for education for interpretation.
  Boxplot:  The length of the box is counter-intuitive. Since the box is divided into two parts with more or less the same amount of data, a short part means high density and a long part means low density. Many people do not have this knowledge to interpret the result.
Information on sample size in each stratum
  Dotchart: Thickness of strata determined by the sample size.
  Dotplot:  Thickness of strata determined by the height of the most frequent bin, therefore it can be visually distorted.
  Boxplot:  When 'varwidth=TRUE', as indicated in the command, the width of each box is determined by its sample size but not in linear proportion.
Missing values
  Dotchart: Missing values are placed as empty space on the top of each stratum.
  Dotplot:  Missing values are not shown.
  Boxplot:  Missing values are not shown.
Suitability related to sample size and number of strata
  Dotchart: Most suitable when the sample size is not too large, e.g. < 200. Large number of strata can be a problem, especially when the sample sizes among strata are grossly imbalanced.
  Dotplot:  Similar problem with 'summ(var)' on the issue of stratification. However, 'dotplot' is more friendly when the sample size is large.
  Boxplot:  Bearing only 5 values of a vector, this kind of graph is not burdened by a large sample size. In stratification analysis, sample sizes of strata are not proportional to the box width even if 'varwidth=TRUE' is imposed. Thus the graph can accommodate these problems quite well. On the other hand, length of the box may mislead the sample size as mentioned. Overall information on sample size is generally lacking on box plots. A median knot is introduced to indicate the 95% confidence interval of the median. A smaller knot indicates a larger sample size or lower level of dispersion. However, the use of a knot is not popular.
Chapter 6
> zap()
> data(Timing)
> use(Timing)
> bed.day <- ifelse(bedhr > 20, 12, 13)
> bed.time <- ISOdatetime(year=2004, month=12, day=bed.day, hour=bedhr, min=bedmin, sec=0, tz="")
> woke.up.time <- ISOdatetime(year=2004, month=12, day=13, hour=workhr, min=workmin, sec=0, tz="")
> arrival.time <- ISOdatetime(year=2004, month=12, day=13, hour=arrhr, min=arrmin, sec=0, tz="")
> from.woke.to.work <- arrival.time - woke.up.time
> summ(from.woke.to.work)
> sort.by(bed.time)
> par(bg="cornsilk")
> plot(bed.time, 1:length(bed.time), xlim=c(min(bed.time), max(arrival.time)), pch=18, col="blue", ylab=" ", yaxt="n")
> points(woke.up.time, 1:length(woke.up.time), pch=18, col="red")
> points(arrival.time, 1:length(arrival.time), pch=18, col="black")
> abline(h=1:length(arrival.time), lty=3)
> title(main="Distribution of Bed time and woke up time")
> title(ylab="Subject sorted by bed time")
> legend("topleft", legend=c("Bed time", "woke up time", "arrival time"), pch=18, col=c("blue","red","black"), bg="cornsilk")
[Figure: 'Distribution of Bed time and woke up time', showing bed time, woke up time and arrival time for each subject sorted by bed time; time axis from 23:00 to 09:00.]
Chapter 7
No. As seen from
> addmargins(table(.data$onset, .data$case))
Three non-cases had reported onset time. The 'onset' vector that had been changed was the free vector
itself. In this command, both `onset' and `case' were those in the second position of the search path `search()', which was an `attached' copy of `.data'. From the command
> onset[!case] <- NA
there would be three copies of 'onset'. The first and the second, in '.data' and in 'search()[2]', are not changed. These two copies are then different from the free vector which was created/modified by the command. To get a permanent effect, the 'recode' command in Epicalc should be used.
> recode(onset, !case, NA)
By this method, the free vector `onset' will be removed. The vectors in `.data' and in `search()[2]' would also be automatically synchronised to the new value. However, the variable `time.onset', a POSIXt object, does not have this problem. Using this variable in the `.data' in the next chapter would give no problem.
Chapter 8
Both `beefcurry' and `saltegg' have significant attributable risk and risk ratio. One might think that these foods would have been contaminated. In fact, the increase in risk from consumption of these is due to confounding. This is discussed in the next chapter.
Chapter 9
> cc(case, water)   # OR = 1.14 , 95% CI = 0.47, 2.85
> table(case, eclair.eat, water)
Note one zero cell for a case who ate neither eclairs nor water. The following subsequent commands give MH odds ratio but not stratum specific OR and the homogeneity test results.
> mhor(case, eclair.eat, water)   # MH OR = 24.3 , 95% CI = 14.11, 41.7
> mhor(case, water, eclair.eat)   # MH OR = 1.56 , 95% CI = 0.60, 4.06
For stratification with beef curry, there is no problem with any cell with zero counts. The homogeneity test could be done without any serious problems.
> table(case, beefcurry, water) > mhor(case, beefcurry, water)
The graphs cross; homogeneity test P value = 0.016. Note the strong interaction of beef curry with eclair and with water, which needs a biological explanation.
Chapter 11
> des()
> plot(smoke, log(deaths))
> plot(SO2, log(deaths))
> plot(log(smoke), log(deaths))
> plot(log(SO2), log(deaths))
The R-squared of `lm4' is equal to the following model (using log base 2):
> lm5 <- lm(log2(deaths) ~ log2(SO2))
> summary(lm4)$r.squared   # 0.66
The coefficients of log(SO2) from 'lm4' and of log2(SO2) from 'lm5' are the same: 0.45843. For every unit increment of log2(SO2), the log2(deaths) increases by 0.458 units. Similarly, for every unit increment of loge(SO2), the loge(deaths) also increases by 0.458 units. This coefficient is thus independent of the base of the logarithm. This means that the relationship between these two variables is on the power scale. Given x is a positive number, for every increment of SO2 by x times, the number of deaths will increase by x^0.45843 times.
> plot(log2(SO2), log2(deaths)) > abline(lm5)
From the regression coefficient and the graph, when the SO2 concentration in the air is doubled, the number of deaths will increase by 2^0.45843 or 1.374 times. The modelling of an outcome variable that is a discrete count can be more appropriately dealt with using Poisson regression, as in Chapter 19.
Chapter 12
> zap()
> data(BP1)
> use(BP1)
> age.in.days <- as.Date("2001-03-12") - birthdate
> age <- as.numeric(age.in.days)/365.25
> sort.by(sbp)
> plot(sbp, ylim=c(0, max(sbp)), pch=" ", ylab="blood pressure")
> for(i in 1:length(sbp)) {
    lines(x=c(i,i), y=c(sbp[i], dbp[i]), col=unclass(sex)[i])
  }
> title(main="Systolic and diastolic blood pressure of the subjects")
> summary(lm(dbp ~ sex + age))
=======================
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   48.9647     9.4928   5.158 1.32e-06
sexfemale      7.2243     4.0798   1.771   0.0797
age            0.9412     0.1813   5.192 1.14e-06
=======================
After adjusting for age, the difference between sexes is not statistically significant.
Chapter 13
All the conclusions are independent of the base for logarithm and must be the same.
> log2money <- log2(money)
> summary(lm6 <- lm(log2money ~ age + age2))
==========================
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.340996   1.124481   0.303 0.769437
age           0.416419   0.058602   7.106 0.000101
age2         -0.004211   0.000668  -6.304 0.000232
---
> coef(lm6)
 (Intercept)          age         age2
 0.340996352  0.416418830 -0.004211267
> coef(lm4)
 (Intercept)          age         age2
 0.102650130  0.125354559 -0.001267718
> coef(lm4) / coef(lm6)
(Intercept)         age        age2
    0.30103     0.30103     0.30103
Each coefficient of 'lm4' is about 30% of the corresponding coefficient of 'lm6', because 'lm4' models money on a log base 10 scale whereas 'lm6' uses log base 2. The proportion is the logarithm of 2 to base 10.
> log10(2) [1] 0.30103
In computing the expected age where money is carried in the maximum amount:
> a1 <- lm6$coefficients[3]
> b1 <- lm6$coefficients[2]
> c1 <- lm6$coefficients[1]
> x1 <- -b1/(2*a1); x1   # 49.44104
> y1 <- a1 * x1^2 + b1 * x1 + c1
> y1; 2^y1               # 1590.304
Money carried is a maximum at the age of 49.4 and the estimate is 1590.3 baht. These results are the same as those from `lm4', which uses logarithm base 10.
Chapter 14
The following commands are from a previous chapter.
> data(BP1)
> use(BP1)
> des()
> age.in.days <- as.Date("2001-03-12") - birthdate
> age <- as.numeric(age.in.days)/365.25
> saltadd1 <- saltadd
> levels(saltadd1) <- c("no", "yes", "missing")
> saltadd1[is.na(saltadd)] <- "missing"
Of the three models, 'glm2' has the lowest AIC and is therefore the best.
> summary(glm2)
===============
Coefficients:
            Estimate Std. Error t
(Intercept)  63.1291    15.7645
age           1.5526     0.3118
saltaddyes   22.9094     6.9340
---
    Null deviance: 109757  on 79
Residual deviance:  73192  on 77
AIC: 780.53
Chapter 15
Problem 1
> logistic.display(glm(case ~ eclair.eat:beefcurry + sex, family = binomial, data = complete.data))
                              OR  lower  upper P value
sex                        1.611  1.178  2.203   0.003
eclair.eatFALSE:beefcurry  0.146  0.064  0.333   0.000
eclair.eatTRUE:beefcurry   4.696  2.716  8.119   0.000
The model has only two terms related to eclair and beef curry. The last row contains the answer.

Problem 2
> zap()
> data(ANCtable)
> attach(ANCtable)
> death <- factor(death, labels=c("no","yes"))
> anc <- factor(anc, labels=c("old","new"))
> clinic <- factor(clinic, labels=c("A","B"))
> data1 <- data.frame(death, anc, clinic, Freq)
> data1
> xtable <- xtabs(Freq ~ death+anc+clinic)
> mhor(mhtable=xtable)
Problem 3
> zap()
read.table("hakimi.dat", header=TRUE)-> hakimi summ(hakimi) hakimi$treatment <- 2-hakimi$treatment table(hakimi$treatment) attach(hakimi) cc(dead, treatment)
         treatment
dead        0    1  Total
  0       196  204    400
  1        28   37     65
  Total   224  241    465

OR = 1.269
95% CI = 0.725  2.242
Chi-squared = 0.786 , 1 d.f. , P value = 0.375
Fisher's exact test (2-sided) P value = 0.423

> mhor(dead, treatment, malpres, graph=TRUE)
Stratified analysis by malpres
                 OR lower lim. upper lim. P value
malpres 0     0.672      0.335       1.32  0.2655
malpres 1     6.688      0.940      81.48  0.0386
M-H combined  0.911      0.514       1.62  0.7453

M-H Chi2(1) = 0.105 , P value = 0.745
Homogeneity test, chi-squared 1 d.f. = 5.596 , P value = 0.018
The crude and adjusted odds ratios are different; however, the homogeneity test is significant, indicating that the stratum-specific odds ratios cannot be combined. When malpres=1, the effect of treatment on death is significant.
> summary(glm(dead ~ treatment, binomial) -> model1)
> summary(glm(dead ~ treatment + malpres, binomial) -> model2)
> summary(glm(dead ~ treatment*malpres, binomial) -> model3)
> summary(glm(dead ~ treatment*malpres + birthwt*treatment, binomial) -> model4)
> step(model4)
We conclude that a significant interaction is evident between treatment and malpres. Birthweight is significant. The best model is found to be:
> m <- glm(dead ~ treatment*malpres + birthwt, binomial)
> logistic.display(m)
                  Odds ratio lower lim. upper lim. Pr(>|z|)
treatment             0.5990     0.3133     1.1454   0.1212
malpres               1.5702     0.2875     8.5748   0.6024
birthwt               0.9986     0.9980     0.9993   0.0001
treatment:malpres    14.4467     2.0212   103.2609   0.0078

Log-likelihood = -154.53345
No. of observations = 465
AIC value = 319.0669
Problem 4
> m1 <- glm(case ~ hia + as.integer(gravi), binomial)
> logistic.display(m1)
                      OR lower95ci upper95ci P value
hiaever IA         3.694     2.532     5.391   0.000
as.integer(gravi)  1.000     0.780     1.284   0.997

Log-likelihood = -429.38634
No. of observations = 723
AIC value = 864.7727
Since the P value of the numeric form of gravidity is not significant, there is no evidence of a linear trend or dose-response relationship between gravidity and risk of ectopic pregnancy.
Chapter 16
Problem 1
> zap()
> library(survival)
> use("vc1to6.dta")
> match.tab(case, alcohol, strata = matset)
> summary(clogit(case ~ alcohol + strata(matset)))
Problem 2
> clogit3 <- clogit(case ~ smoking + alcohol + rubber + strata(matset))
> clogit2 <- clogit(case ~ alcohol + rubber + strata(matset))
> clogit1 <- clogit(case ~ alcohol + strata(matset))
> clogit3$loglik
> clogit2$loglik
> clogit1$loglik
> clogit3
===============
Likelihood ratio test=12 on 3 df, p=0.00738  n=119
> clogit2
===============
Likelihood ratio test=11.5 on 2 df, p=0.00314  n=119
> clogit1
===============
Likelihood ratio test=11.1 on 1 df, p=0.000843  n=119
The likelihood ratio test statistic of 'clogit1', despite being the smallest among the three, is based on the fewest degrees of freedom. This model contains only 'alcohol', which is highly statistically significant, whereas the other two independent variables are not. All of these facts suggest that 'clogit1' should be
the model of choice. We can confirm this by using the likelihood ratio test:
> lrtest(clogit3, clogit2)
Likelihood ratio test for Cox regression & conditional logistic regression
Chi-squared 1 d.f. = 0.4743344 , P value = 0.491
Having one more degree of freedom with a small increase in likelihood is not worthwhile. Therefore, 'clogit2' should be better than 'clogit3'. The independent variable 'smoking' is now removed. Similarly, we now test whether to keep 'rubber'.
> lrtest(clogit2, clogit1)
Likelihood ratio test for Cox regression & conditional logistic regression
Chi-squared 1 d.f. = 0.383735 , P value = 0.5356
Again, the difference between models 'clogit2' and 'clogit1' is not statistically significant. The current choice should be 'clogit1'. Drinking alcohol is the only significant predictor for oesophageal cancer.
Chapter 17
Set up the data:
> zap()
> outcome <- gl(n=3, k=4)
> levels(outcome) <- c("nochange","immuned","dead")
> vac <- gl(n=2, k=2, length=12)
> levels(vac) <- c("placebo","vaccine")
> agegr <- gl(n=2, k=1, length=12)
> levels(agegr) <- c("young","old")
> total <- c(25,15,4,8,1,0,25,35,3,1,2,1)
> .data <- data.frame(outcome, vac, agegr, total)
> .data
Problem 1
> table1 <- xtabs(total ~ agegr+vac, data=.data)
> table1
> cc(cctable=table1)   # OR = 2.552, P value = .023
Problem 2
> table2 <- xtabs(total ~ agegr+outcome, data=.data)
> table2
Problem 3
> table3 <- xtabs(total ~ outcome + vac, data=.data)
> table3
> fisher.test(table3)   # p-value < 2.2e-16
> multi3 <- multinom(outcome ~ vac + agegr, weights=total, data=.data)
> s3 <- summary(multi3)
> mlogit.display(multi3)   # AIC = 137.13
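The simpler model without the age group term, referred to as 'multi4' below, can be fitted for comparison; this is a sketch, with the model formula assumed from the discussion that follows:

> multi4 <- multinom(outcome ~ vac, weights=total, data=.data)
> mlogit.display(multi4)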
The model `multi4' has a lower AIC than that of `multi3'. Age group is therefore not appropriate to be in the model. From the last command, it is concluded that vaccine increases the chance of getting immune with a highly significant odds ratio of 200. It should also be noted that the vaccine also (non-significantly) increases the chances of death.
Chapter 18
> zap()
> library(nnet)
> library(MASS)
> male <- c(rep(0, times=6), rep(1, times=6))
> drug <- rep(c(0,1), times=6)
> pain <- rep(1:3, times=4)
> total <- c(3,5,15,10,5,7,8,5,10,10,10,2)
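The polytomous and ordinal models discussed below would be fitted along the following lines. This is only a sketch: the object names 'pain.cat', 'pain.ord' and 'model.poly', and the factor labels, are assumptions.

> pain.cat <- factor(pain, labels=c("mild","moderate","severe"))   # assumed coding
> pain.ord <- ordered(pain.cat)
> model.poly <- multinom(pain.cat ~ drug + male, weights=total)
> summary(model.poly)
> mlogit.display(model.poly)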
The polytomous model shows a significant effect of drug in severe pain only; AIC = 191.623. For ordinal logistic regression:
> model.ord <- polr(pain.ord ~ drug + male, weights=total) > summary(model.ord)
The AIC = 189.037, which is better (lower) than the polytomous model.
> ordinal.or.display(model.ord)
In conclusion, both the drug and being male are associated with a significant reduction in pain.
Chapter 19
> .data <- read.table("montana.csv", header=TRUE) > arsenic1 <- arsenic!="<1 year" > model.final <- step(glm(respdeath ~ agegr + period + arsenic1 + start, offset=log(personyrs), poisson, data = .data)) > summary(model.final) > poisgof(model.final) > idr.display(model.final)
Note that using `arsenic1' in the model is better than using `arsenic' suggesting no evidence of a dose-response relationship. Moreover, workers who started to work from 1925 had significantly lower risk than those who had started earlier.
Chapter 20
Problem 1
> model.bang1 <- glmmPQL(user ~ urban + age_mean + living.children, random = ~1 | district, binomial, data=.data)
> summary(model.bang1)
Note that urban women have two times the odds of using contraceptives compared to rural women. A one year increment of age is associated with about a 3 percent reduction in the odds of use.

Problem 2

From the last output, increasing the number of living children does not have a linear dose-response relationship with use. The odds almost double if the woman has two children and almost triple for women with three living children. However, as the number exceeds three, the odds of use do not further increase.

Problem 3
> model.bang2 <- glmmPQL(user ~ urban + age_mean + living.children, random = ~ age_mean | district, family=binomial, data=.data)
> logLik(model.bang1)   # -4244.312 (df=8)
> logLik(model.bang2)   # -4243.606 (df=10)
> lrtest(model.bang1, model.bang2)   # P value = 0.4933
The evidence of age having a different effect in urban and rural areas is not found.
Chapter 21
> zap(); use("compaq.dta") > des(); summ()
Problem 1
> summ(year)
> summ(year, by = status)
> abline(v=c(5,6))
> dotplot(year, col=status+1)
# colour code with the default palette: status 0 (censored) = black, status 1 (dead) = red.
# See help(palette) for details.
Note that deaths are uniformly distributed over the first five years, where there were only two censored observations. On the other hand, there was a lot of censoring between the 5th and the 6th years, where there were very few deaths. The second peak of censoring came after the 10th year. There is one patient who survived 15.8 years and was censored at the time the study ended. The strange alternating clustering of deaths and censoring would not be detected if the exploratory analysis was not done carefully.

Problem 2
> surv.ca <- Surv(year, status)
> plot(survfit(surv.ca ~ hospital), col = c("red", "blue"), legend.text = levels(hospital), main="Breast Cancer Survival")
[Figure: Kaplan-Meier curves of breast cancer survival by hospital.]
Note the very dense censoring immediately after the 5th and the 10th years.

Problem 3
> survdiff(surv.ca ~ hospital)
> survdiff(surv.ca ~ hospital + strata(stage))
> survdiff(surv.ca ~ hospital + strata(agegr))
> survdiff(surv.ca ~ hospital + strata(ses))
The difference of survival between patients from the two types of the hospitals is highly significant despite the adjustments. Note that adjustment can only be done one variable at a time using this approach. Multivariate adjustment using Cox regression is presented in chapter 22.
Chapter 22
Problem 1
> coxph(surv.ca ~ hospital + stage + strata(ses) + agegr) -> model5
> cox.zph(model5)   # Global test p value = 0.00802
> coxph(surv.ca ~ hospital + stage + ses + strata(agegr)) -> model6
> cox.zph(model6)   # Global test p value = 0.00494
Models based on stratification by socio-economic status and by age still violate the proportional hazard assumption.
Problem 2
> plot(cox.zph(model4), var = 1)
[Figure: plot of the time-varying coefficient beta(t) for the first covariate of 'model4' against time, produced by cox.zph.]
The hazard ratio looks relatively stable and slightly on the negative side for most of the time period. A notable feature of the graph is that there are two clusters of residuals. Some extreme positive values are sparsely found at the top of the plot whereas the majority lie in another cluster within 0 to -3 units of beta. This may suggest that the data actually came from more than one group of patients. Unfortunately, we could not further investigate this finding.
Chapter 23
Problem 1

An estimate of the population prevalence is not known. However, we can obtain a range of sample sizes required corresponding to a range of values for p, say from 0.1 to 0.9.
> p <- seq(0.1, 0.9, 0.1)
> d <- 0.05
> n.for.survey(p, delta = d)

Sample size for survey.
Assumptions:
  Proportion       = 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
  Confidence limit = 95 %
  Delta            = 0.05 from the estimate.
  Sample size      = 138 246 323 369 384 369 323 246 138
We see from the output above that the maximum sample size required is found when p is equal to 0.5. This is true for any survey where the estimated prevalence is not known beforehand and the precision is fixed. For these situations, the safest bet is to assume that p = 0.5.

Problem 2
> p2 <- 0.5; or <- 2
> odds2 <- p2/(1-p2)
> odds1 <- or*odds2
> p1 <- odds1/(1+odds1); p1
[1] 0.6666667
> n.for.2p(p1, p2)

Estimation of sample size for testing Ho: p1==p2
Assumptions:

     alpha = 0.05
     power = 0.8
        p1 = 0.6666667
        p2 = 0.5
     n2/n1 = 1

Estimated required sample size:
        n1 = 148
        n2 = 148
   n1 + n2 = 296
Nearly 300 subjects are needed.

Problem 3

The worthwhile benefit is 2.5 kg and since we don't know the actual means in the three groups, we can substitute any values for mu1 and mu2, so long as the difference is 2.5. Also, given that we are performing two comparisons, a reasonable type I error level (alpha) would be 0.02, instead of the conventional 0.05. The required sample sizes can then be obtained as follows:
> n.for.2means(mu1=50, mu2=52.5, sd1=3.5, sd2=3.5, ratio = 2, alpha=0.02)

Estimation of sample size for testing Ho: mu1==mu2
====== assumptions omitted ======
Estimated required sample size:
        n1 = 31
        n2 = 61
   n1 + n2 = 92
Thus 61 controls are required, whereas 31 are each required in the two treatment groups, giving a total sample size required of 123. Note that if the standard deviations in each group are increased to 4.5kg, the required sample size is increased to 200.
Index
A

Arrays ... 32, 33, 35, 36, 40, 41, 109
Attributable risk ... 94, 97
Attributes ... 41, 43, 46, 123, 147, 148, 151, 175, 206, 207, 219, 227

C

Calculator ... 14
Chi-squared test ... 40, 99, 102, 103, 104, 105, 165, 182, 193, 222, 231
Class ... 27, 39, 42, 67, 70, 77, 131, 147, 187, 217, 219, 247
Codebook ... 46, 53, 216, 230, 247
Colour ... iv, 29, 65, 86, 87, 135, 137, 142, 250
Comments ... 19
Concatenating ... 24, 25
Confidence interval ... 22, 99, 100, 148, 149, 150, 156, 157, 160, 161, 166, 168, 171, 172, 173, 175, 180, 188, 197, 199, 200, 213, 220, 222, 226, 234, 243
Conflicts ... 15
Confounding ... 98, 100, 101, 102, 103, 105, 158, 162, 165, 167, 192, 223, 225, 263
Covariance matrix ... 36, 148, 149
CRAN ... 10, 14, 16, 21
Cross-tabulation ... 36, 40, 101, 157
Cumulative hazard rate ... 220

D

Data entry ... 43, 110, 115, 119, 216
Data frames ... 41, 42, 46, 56, 85, 116, 143, 155
Design effect ... 235
Dose-response relationship ... 96, 183, 194
Dotplot ... 63, 65, 66, 82, 84, 92
Duplication ... 106, 107

E

Effect modification ... 103, 137
Extracting ... 33, 46

F

Factor levels ... 28, 29, 41, 46, 55, 116, 132, 142, 178, 180, 182, 183, 191, 192, 195, 201, 202, 216, 221
Factors ... 28, 29, 41, 61, 64, 115, 182, 195
Family, in glm ... 64, 138, 150, 193, 201
Format ... 16, 41, 68, 69, 70, 79, 116, 129, 161, 162, 182, 217, 246, 252
F-test ... 124, 125
Functions ... 17

G

Generalized linear model ... 175, 189, 193
Goodness of fit ... 193, 200

H

Help ... 15, 16, 25, 65, 69, 109, 150, 157, 193, 247, 252

I

Incidence density ... 195, 196
Incubation period ... 67, 84, 86
Index vector ... 25, 107, 109
Interaction ... 98, 103, 104, 105, 134, 135, 137, 143, 151, 158, 160, 161, 165, 209, 211
ISOdatetime ... 72, 86

K

Kaplan-Meier curve ... 219, 224

L

Labelling ... 43, 65, 113, 115
Language ... 10, 11, 12, 16, 21, 68, 69, 248
Life table ... 218, 219
Likelihood ratio test ... 176, 269, 270
Linear model ... 36, 123, 126, 146, 149, 150, 151, 175, 189, 193, 209
Locale ... 68
Logical ... 20
Logit ... 22, 152, 153, 154, 155, 173, 174, 179
Lot quality assurance sampling ... 241, 242

M

Mantel-Haenszel ... 102, 157, 171, 172, 222
Matching ... 94, 169, 171, 172, 174, 247, 248, 251
Matrix ... 36, 37, 145, 148, 156, 182, 195, 197
Memory ... 14, 38, 43, 50, 51, 246
Missing values ... 30, 55, 79, 81, 83, 84, 85, 89, 98, 108, 110, 112, 119, 120, 130, 132, 159, 261
Mixed effects modelling ... 201

N

Negative binomial regression ... 197

O

Offset ... 193, 195, 234
One-way tabulation ... 117
Overdispersion ... 197, 199

P

Packages ... 13
Population ... 22, 67, 93, 95, 96, 152, 182, 195, 196, 202, 218, 233, 234, 235, 236, 238, 239, 243, 245
Power determination ... 243
Prevalence ... 22, 152, 154, 196, 233, 234, 235, 239, 240, 241, 245, 276
Proportional hazard assumption ... 225, 226, 227, 230, 231
Protective efficacy ... 96
Pyramid ... 93, 94

R

R Objects ... 10, 18, 20, 21, 23, 24, 27, 32, 38, 43, 51, 94, 189, 219, 225, 249, 251
Random effects ... 201, 202, 203, 204, 205, 208, 209, 273
Recoding ... 89, 111, 119, 132
Referent level ... 145, 158, 167, 182
Reshaping data ... 119, 170, 172, 173, 174
Residuals ... 124, 125, 126, 127, 128, 133, 139, 146, 147, 149, 151, 205, 206, 209, 232
Risk ratio ... 94, 95, 96, 97, 196
Rprofile.site file ... 12, 15, 16, 23
R-squared ... 124, 125, 132, 133, 135, 139, 140, 142, 264

S

Scatter plots ... 120, 121, 129
Search ... 14, 16, 17, 23, 49, 50, 51, 52, 89, 111, 115, 150
Stratified analysis ... 101, 156, 165, 202, 209, 231
Subscripts ... 25, 26, 33, 47, 109
Survey ... 22, 130, 131, 185, 197, 213, 217, 233, 234, 235, 239, 240, 276
Syntax errors ... 18

T

Transforming ... 110
Transposition ... 34
TRUE and FALSE ... 20

U

Update ... 119

V

Vectors ... 23, 24, 26, 27, 35, 81, 161, 263

W

Warnings ... 15, 35, 228
Epicalc Functions
aggregate.numeric   Compute summary statistics of a numeric variable
cc                  Odds ratio calculation and graphing
codebook            Codebook of a data frame
des                 Description of a data frame or a variable
detachAllData       Detach all data frames
dotplot             Dot plot
followup.plot       Longitudinal followup plot
kap                 Kappa statistic
keepData            Keep a subset of variables or records
label.var           Label a variable, pack variables into a data frame and sort all data
logistic.display    Tables for multivariate odds ratio, incidence density etc
lookup              Recode several values of a variable
lroc                ROC curve
lrtest              Likelihood ratio test
lsNoFunction        List non-function objects
matchTab            Matched tabulation
n.for.2means        Sample size calculation
n.for.2p            Sample size calculation
n.for.survey        Sample size calculation
pack                Pack the data frame adding free vectors with same length
poisgof             Goodness of fit test for modeling of count data
power.for.2means    Power calculation for two sample means and proportions
power.for.2p        Power calculation for two sample means and proportions
pyramid             Population pyramid
recode              Recode variable(s)
setTitle            Setting language of Epicalc graph title
shapiro.qqnorm      Qqnorm plots with Shapiro-Wilk's test
sortBy              Sort the data frame
summ                Summary with graph
tab1                One-way tabulation
tabpct              Two-way tabulation
titleString         Replace commonly used words in Epicalc graph title
use                 Quick command to read in data
zap                 Remove and detach all objects
Epicalc Datasets
ANCdata      Dataset on effect of new antenatal care method on mortality
ANCtable     Dataset on effect of new ANC method on mortality (as a table)
BP           Dataset on blood pressure and determinants
Compaq       Dataset on cancer survival
Decay        Dataset on tooth decay and mutan streptococci
DHF99        Dataset for exercise on predictors for mosquito larva infestation
Ectopic      Dataset of a case-control study looking at history of abortion as a risk factor for ectopic pregnancy
Familydata   Dataset of a hypothetical family
Hakimi       Dataset on effect of training personnel on neonatal mortality
HW93         Dataset from a study on hookworm prevalence and intensity in 1993
Marryage     Dataset on age at marriage
Montana      Dataset on arsenic exposure and respiratory deaths
Oswego       Dataset from an outbreak of food poisoning in US
Outbreak     Dataset from an outbreak of food poisoning on a sportsday, Thailand 1990
Planning     Dataset for practicing cleaning, labelling and recoding
Sleep3       Dataset on sleepiness in a workshop
SO2          Dataset on air pollution and deaths in UK
Suwit        Hookworm infection and blood loss: SEAJTM 1970
Timing       Dataset on time going to bed, waking up and arrival at the workshop
VC1to1       Datasets on a matched case-control study of esophageal cancer