Data Science and Predictive Analytics
Biomedical and Health Applications using R
Ivo D. Dinov
University of Michigan–Ann Arbor
Ann Arbor, Michigan, USA
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
... dedicated to my lovely and
encouraging wife, Magdalena, my witty and
persuasive kids, Anna-Sophia and Radina,
my very insightful brother, Konstantin, and
my nurturing parents, Yordanka and Dimitar
...
Foreword
second-order tensors (matrices); (2) illustrate a variety of matrix operations and their
interpretations; (3) demonstrate linear modeling and solutions of matrix equations;
and (4) discuss the eigen-spectra of matrices.
Chapter 6 (Dimensionality Reduction) starts with a simple example reducing
2D data to a 1D signal. We also discuss (1) matrix rotations, (2) principal
component analysis (PCA), (3) singular value decomposition (SVD), (4)
independent component analysis (ICA), and (5) factor analysis (FA).
The discussion of machine learning model-based and model-free techniques
commences in Chap. 7 (Lazy Learning – Classification Using Nearest
Neighbors). In the scope of the k-nearest neighbor algorithm, we present (1) the
general concept of divide-and-conquer for splitting the data into training and
validation sets, (2) evaluation of model performance, and (3) improving prediction
results.
Chapter 8 (Probabilistic Learning: Classification Using Naive Bayes) presents
the naive Bayes and linear discriminant analysis classification algorithms,
identifies the assumptions of each method, presents the Laplace estimator, and
demonstrates step by step the complete protocol for training, testing, validating,
and improving the classification results.
Chapter 9 (Decision Tree Divide and Conquer Classification) focuses on
decision trees and (1) presents various classification metrics (e.g., entropy,
misclassification error, Gini index), (2) illustrates the use of the C5.0 decision
tree algorithm, and (3) shows strategies for pruning decision trees.
The use of linear prediction models is highlighted in Chap. 10 (Forecasting
Numeric Data Using Regression Models). Here, we present (1) the fundamentals
of multivariate linear modeling, (2) contrast regression trees vs. model trees, and
(3) present several complete end-to-end predictive analytics examples.
Chapter 11 (Black Box Machine-Learning Methods: Neural Networks and
Support Vector Machines) lays out the foundation of Neural Networks as silicon
analogues to biological neurons. We discuss (1) the effects of network layers and
topology on the resulting classification, (2) present support vector machines
(SVM), and (3) demonstrate classification methods for optical character
recognition (OCR), iris flower clustering, Google Trends and stock market
prediction, and quantifying quality of life in chronic disease.
Apriori Association Rules Learning is presented in Chap. 12 where we discuss
(1) the foundation of association rules and the Apriori algorithm, (2) support and
confidence measures, and (3) present several examples based on grocery
shopping and head and neck cancer treatment.
Chapter 13 (k-Means Clustering) presents (1) the basics of machine learning
clustering tasks, (2) silhouette plots, (3) strategies for model tuning and
improvement, (4) hierarchical clustering, and (5) Gaussian mixture modeling.
General protocols for measuring the performance of different types of
classification methods are presented in Chap. 14 (Model Performance
Assessment). We discuss (1) evaluation strategies for binary, categorical, and
continuous outcomes; (2) confusion matrices quantifying classification and
prediction accuracy; (3) visualization of algorithm performance and ROC curves;
and (4) introduce the foundations of internal statistical validation.
neurons and networks, (3) neural nets for computing the exclusive OR (XOR) and
negative AND (NAND) operators, (4) classification of handwritten digits, and
(5) classification of natural images.
We compiled a few dozens of biomedical and healthcare case-studies that are
used to demonstrate the presented DSPA concepts, apply the methods, and
validate the software tools. For example, Chap. 1 includes high-level driving
biomedical challenges including dementia and other neurodegenerative diseases,
substance use, neuroimaging, and forensic genetics. Chapter 3 includes a traumatic
brain injury (TBI) case-study, Chap. 10 describes a heart attack case-study, and
Chap. 11 uses a quality of life in chronic disease data to demonstrate optical
character recognition that can be applied to automatic reading of handwritten
physician notes. Chapter 18 presents a predictive analytics Parkinson’s disease
study using neuroimaging-genetics data. Chapter 20 illustrates the applications of
natural language processing to extract quantitative biomarkers from unstructured
text, which can be used to study hospital admissions, medical claims, or patient
satisfaction. Chapter 23 shows examples of predicting clinical outcomes for
amyotrophic lateral sclerosis and irritable bowel syndrome cohorts, as well as
quantitative and qualitative classification of biological images and volumes.
Indeed, these represent just a few examples, and the readers are encouraged to try
the same methods, protocols and analytics on other research-derived, clinically
acquired, aggregated, secondary-use, or simulated datasets.
The online appendices (http://DSPA.predictive.space) are continuously
expanded to provide more details, additional content, and expand the DSPA
methods and applications scope. Throughout this textbook, there are cross-
references to appropriate chapters, sections, datasets, web services, and live
demonstrations (Live Demos). The sequential arrangement of the chapters
provides a suggested reading order; however, alternative sorting and pathways
covering parts of the materials are also provided. Of course, readers and
instructors may further choose their own coverage paths based on specific
intellectual interests and project needs.
Preface
Genesis
Since the turn of the twenty-first century, the evidence overwhelmingly reveals
that the amount of data we collect doubles roughly every 12–14
months (Kryder’s law). The growth momentum of the volume and complexity of
digital information we gather far outpaces the corresponding increase of
computational power, which doubles each 18 months (Moore’s law). There is a
substantial imbalance between the increase of data inflow and the corresponding
computational infrastructure intended to process that data. This calls into question
our ability to extract valuable information and actionable knowledge from the
mountains of digital information we collect. Nowadays, it is very common for
researchers to work with petabytes (PB) of data, 1 PB = 10^15 bytes, which may
include nonhomologous records that demand unconventional analytics. For
comparison, the Milky Way Galaxy has approximately 2 × 10^11 stars. If each star
represents a byte, then one petabyte of data corresponds to 5,000 Milky Way
Galaxies.
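As a quick back-of-the-envelope check of this scale comparison, here is a minimal R sketch (not part of the original text) using the figures quoted above:

bytes_per_PB <- 10^15            # 1 PB = 10^15 bytes
stars_milky_way <- 2*10^11       # approximate number of stars in the Milky Way
bytes_per_PB / stars_milky_way   # assuming each star stores one byte
## [1] 5000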
This data storage-computing asymmetry leads to an explosion of innovative
data science methods and disruptive computational technologies that show
promise to provide effective (semi-intelligent) decision support systems.
Designing, understanding and validating such new techniques require deep within-
discipline basic science knowledge, transdisciplinary team-based scientific
collaboration, open scientific endeavors, and a blend of exploratory and
confirmatory scientific discovery. There is a pressing demand to bridge the
widening gaps between the needs and skills of practicing data scientists, advanced
techniques introduced by theoreticians, algorithms invented by computational
scientists, models constructed by biosocial investigators, and network products and
Internet of Things (IoT) services engineered by software architects.
Purpose
Limitations/Prerequisites
Prior to diving into DSPA, the readers are strongly encouraged to review the
prerequisites and complete the self-assessment pretest. Sufficient remediation
materials are provided or referenced throughout. The DSPA materials may be used
for a variety of graduate-level courses with durations of 10–30 weeks, with 3–4
instructional credit hours per week. Instructors can refactor and present the
materials in alternative orders. The DSPA chapters in this book are organized
sequentially. However, the content can be tailored to fit the audience’s needs.
Learning data science and predictive analytics is not a linear process – many
alternative pathways are possible; see
http://socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/DSPA_CertPlanning.html.
Acknowledgements
The work presented in this textbook relies on deep basic science, as well as
holistic interdisciplinary connections developed by scholars, teams of scientists,
and transdisciplinary collaborations. Ideas, datasets, software, algorithms, and
methods introduced by the wider scientific community were utilized throughout
the DSPA resources. Specifically, methodological and algorithmic contributions
from the fields of computer vision, statistical learning, mathematical
optimization, scientific inference, biomedical computing, and informatics drove
the concept presentations, datadriven demonstrations, and case-study reports. The
enormous contributions from the entire R statistical computing community were
critical for developing these resources. We encourage community contributions to
expand the techniques, bolster their scope and applications, enhance the collection
of case-studies, optimize the algorithms, and widen the applications to other data-
intense disciplines or complex scientific challenges.
The author is profoundly indebted to all of his direct mentors and advisors for
nurturing his curiosity, inspiring his studies, guiding the course of his career, and
providing constructive and critical feedback throughout. Among these scholars are
Gencho Skordev (Sofia University); Kenneth Kuttler (Michigan Tech
University); De Witt L. Sumners and Fred Huffer (Florida State University); Jan
de Leeuw, Nicolas Christou, and Michael Mega (UCLA); Arthur Toga (USC); and
Brian Athey, Patricia Hurn, Kathleen Potempa, Janet Larson, and Gilbert Omenn
(University of Michigan).
Many other colleagues, students, researchers, and fellows have shared their
expertise, creativity, valuable time, and critical assessment for generating,
validating, and enhancing these open-science resources. Among these are
Christopher Aakre, Simeone Marino, Jiachen Xu, Ming Tang, Nina Zhou, Chao
Gao, Alexandr Kalinin, Syed Husain, Brady Zhu, Farshid Sepehrband, Lu Zhao,
Sam Hobel, Hanbo Sun, Tuo Wang, and many others. Many colleagues from the
Statistics Online Computational Resource (SOCR), the Big Data for Discovery
Science (BDDS) Center, and the Michigan Institute for Data Science (MIDAS)
provided encouragement and valuable suggestions.
The development of the DSPA materials was partially supported by the US
National Science Foundation (grants 1734853, 1636840, 1416953, 0716055, and
1023115), US National Institutes of Health (grants P20 NR015331, U54
EB020406, P50 NS091856, P30 DK089503, P30AG053760), and the Elsie
Andresen Fiske Research Fund.
The Data Science and Predictive Analytics (DSPA) resources are designed to help
scientists, trainees, students, and professionals learn the foundation of data
science, practical applications, and pragmatics of dealing with concrete datasets,
and to experiment in a sandbox of specific case-studies. Neither the author nor
the publisher has control over, or makes any representations or warranties,
express or implied, regarding the use of these resources by researchers, users,
patients, or their healthcare provider(s), or the use or interpretation of any
information stored on, derived, computed, suggested by, or received through any
of the DSPA materials, code, scripts, or applications. All users are solely
responsible for deriving, interpreting, and communicating any information to (and
receiving feedback from) the user’s representatives or healthcare provider(s).
Users, their proxies, or representatives (e.g., clinicians) are solely responsible
for reviewing and evaluating the accuracy, relevance, and meaning of any
information stored on, derived by, generated by, or received through the
application of any of the DSPA software, protocols, or techniques. The author and
the publisher cannot and do not guarantee said accuracy. The DSPA resources,
their applications, and any information stored on, generated by, or received
through them are not intended to be a substitute for professional or expert
advice, diagnosis, or treatment. Always seek the advice of a physician or other
qualified professional with any questions regarding any real case-study (e.g.,
medical diagnosis, conditions, prediction, and prognostication). Never disregard
professional advice or delay seeking it because of something read or learned
through the use of the DSPA material or any information stored on, generated
by, or received through the SOCR resources.
All readers and users acknowledge that the DSPA copyright owners or
licensors, in their sole discretion, may from time to time make modifications to
the DSPA resources. Such modifications may require corresponding changes to
be made in the code, protocols, learning modules, activities, case-studies, and
other DSPA materials. Neither the author, publisher, nor licensors shall have any
obligation to furnish any maintenance or support services with respect to the
DSPA resources.
The DSPA resources are intended for educational purposes only. They are not
intended to offer or replace any professional advice nor provide expert opinion.
Please speak to qualified professional service providers if you have any specific
concerns, case-studies, or questions.
All DSPA information, materials, software, and examples are provided for general
education purposes only. Persons using the DSPA data, models, tools, or services
for any medical, social, healthcare, or environmental purposes should not rely on
the accuracy, precision, or significance of the DSPA reported results. While the
DSPA resources may be updated periodically, users should independently check
against other sources, latest advances, and most accurate peer-reviewed
information.
Please consult appropriate professional providers prior to making any lifestyle
changes or any actions that may impact those around you, your community, or
various real, social, and virtual environments. Qualified and appropriate
professionals represent the single best source of information regarding any
Biomedical, Biosocial, Environmental, and Health decisions. None of these
resources have either explicit or implicit indication of FDA approval!
Any and all liability arising directly or indirectly from the use of the DSPA
resources is hereby disclaimed. The DSPA resources are provided “as is” and
without any warranty expressed or implied. All direct, indirect, special, incidental,
consequential, or punitive damages arising from any use of the DSPA resources or
materials contained herein are disclaimed and excluded.
Notations
http://www.socr.umich.edu/people/dinov/courses/DSPA_Topics.html — Some of these Live Demos require modern Java- and JavaScript-enabled browsers and Internet access.

R code fragments, for example,

require(ggplot2)
# Comments
## Loading required package: ggplot2
Data_R_SAS_SPSS_Pubs <- read.csv('https://umich.edu/data', header=T)
df <- data.frame(Data_R_SAS_SPSS_Pubs)
# convert to long format
df <- melt(df, id.vars = 'Year', variable.name = 'Software')
ggplot(data=df, aes(x=Year, y=value, color=Software, group = Software)) + geom_line()
data_long
## CaseID Gender Feature Measurement
## 1      1      M       Age       5.0
## 2      2      F       Age       6.0

represent R code, reported results in the output shell, or comments. The complete library of all code presented in the textbook is available in electronic format on the DSPA site. Note that:
"#" is used for comments,
"##" indicates R textual output,
the R code is color-coded to identify different types of comments, instructions, commands, and parameters,
output like "## … ##" suggests that some of the R output is deleted or compressed to save space, and
indenting is used to visually determine the scope of a method, command, or an expression.

Limit/asymptotic notation — In an asymptotic or limiting sense: tending to, convergence, or approaching a value or a limit.

Approximation/equivalence notation — Depending on the context and model definition: similar to, approximately equal to, or equivalent (in a probability distribution sense).

Much-smaller/much-larger notation — The left-hand side is substantially smaller or larger than the right-hand side.

package::function — A standard reference notation to functions that are members of specific R packages.

Case-studies — https://umich.instructure.com/courses/38100/files/folder/Case_Studies

Electronic Materials — http://DSPA.predictive.space
Also see the Glossary and the Index, located at the end of the book.
Contents
1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.6 Neurodegeneration . . . . . . . . . . . . . . . . . . . . . . . . . . .4
1.2.9 Neuroimaging-Genetics . . . . . . . . . . . . . . . . . . . . . . . 7
2 Foundations of R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.14 Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.15 Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.20 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.21 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.21.1 Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.23 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.23.2 R Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.24.5 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.24.6 Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3 Managing Data in R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4 Data Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6 Dimensionality Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
13 k-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Adults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Validation) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
16.1.5 Reading and Writing XML with the XML Package . . . 523
bigrf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
(Inverse-CDF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
Chapter 1
Motivation
This textbook is based on the Data Science and Predictive Analytics (DSPA)
course taught by the author at the University of Michigan. These materials
collectively aim to provide learners with a solid foundation of the challenges,
opportunities, and strategies for designing, collecting, managing, processing,
interrogating, analyzing, and interpreting complex health and biomedical datasets.
Readers who finish this textbook and successfully complete the examples and
assignments will gain unique skills and acquire a tool-chest of methods, software
tools, and protocols that can be applied to a broad spectrum of Big Data problems.
The DSPA textbook vision, values, and priorities are summarized below:
• Vision: Enable active learning by integrating driving motivational challenges
with mathematical foundations, computational statistics, and modern
scientific inference.
• Values: Effective, reliable, reproducible, and transformative data-driven
discovery supporting open science.
• Strategic priorities: Trainees will develop scientific intuition, computational
skills, and data-wrangling abilities to tackle big biomedical and health data
problems. Instructors will provide well-documented R-scripts and software
recipes implementing atomic data filters as well as complex end-to-end
predictive big data analytics solutions.
Before diving into the mathematical algorithms, statistical computing methods,
software tools, and health analytics covered in the remaining chapters, we will
discuss several driving motivational problems. These will ground all the
subsequent scientific discussions, data modeling techniques, and computational
approaches.
For each of the studies below, we illustrate several clinically relevant scientific
questions, identify appropriate data sources, describe the types of data elements,
and pinpoint various complexity challenges.
• Predict the clinical diagnosis of patients using all available data (with and
without the unified Parkinson’s disease rating scale (UPDRS) clinical
assessment, which is the basis of the clinical diagnosis by a physician).
• Compute derived neuroimaging and genetics biomarkers that can be used to
model the disease progression and provide automated clinical decisions
support.
• Generate decision trees for numeric and categorical responses (representing
clinically relevant outcome variables) that can be used to suggest an
appropriate course of treatment for specific clinical phenotypes (Fig. 1.2).
Data Source: ADNI Archive
Sample Size/Data Type: Clinical data: demographics, clinical assessments, cognitive assessments; Imaging data: sMRI, fMRI, DTI, PiB/FDG PET; Genetics data: Illumina SNP genotyping; Chemical biomarker data: lab tests, proteomics. Each data modality comes with a different number of cohorts. For instance, see previously conducted ADNI studies [doi: 10.3233/JAD-150335, doi: 10.1111/jon.12252, doi: 10.3389/fninf.2014.00041].
Summary: ADNI provides interesting data modalities and multiple cohorts (e.g., early-onset, mild, and severe dementia, controls) that allow effective model training and validation. See also the NACC Archive.

Data Source: PPMI Archive
Sample Size/Data Type: Demographics: age, medical history, sex; Clinical data: physical, verbal learning and language, neurological and olfactory (University of Pennsylvania Smell Identification Test, UPSIT) tests, vital signs, MDS-UPDRS scores (Movement Disorder Society-Unified Parkinson's Disease Rating Scale), ADL (activities of daily living), Montreal Cognitive Assessment (MoCA), Geriatric Depression Scale (GDS-15); Imaging data: structural MRI; Genetics data: Illumina ImmunoChip (196,524 variants) and NeuroX (covering 240,000 exonic variants) with 100% sample success rate and 98.7% genotype success rate, genotyped for APOE e2/e3/e4. Three cohorts of subjects: Group 1 = {de novo PD subjects with a diagnosis of PD for two years or less who are not taking PD medications}, N1 = 263; Group 2 = {PD subjects with Scans Without Evidence of a Dopaminergic Deficit (SWEDD)}, N2 = 40; Group 3 = {control subjects without PD who are 30 years or older and who do not have a first-degree blood relative with PD}, N3 = 127.
Summary: The longitudinal PPMI dataset, including clinical, biological, and imaging data (screening, baseline, 12, 24, and 48 month follow-ups), may be used to conduct model-based predictions as well as model-free classification and forecasting analyses.

Fig. 1.2 Outline of a Parkinson's disease case-study
• Is the Risk for Alcohol Withdrawal Syndrome (RAWS) screen a valid and
reliable tool for predicting alcohol withdrawal in an adult medical inpatient
population?
• What is the optimal cut-off score from the AUDIT-C to predict alcohol
withdrawal based on RAWS screening?
• Should any items be deleted from, or added to, the RAWS screening tool to
enhance its performance in predicting the emergence of alcohol withdrawal
syndrome in an adult medical inpatient population? (Fig. 1.3)
Data Source: ProAct Archive
Sample Size/Data Type: Over 100 clinical variables are recorded for all subjects, including Demographics: age, race, medical history, sex; Clinical data: Amyotrophic Lateral Sclerosis Functional Rating Scale (ALSFRS), adverse events, onset_delta, onset_site, drug use (riluzole). The PRO-ACT training dataset contains clinical and lab test information for 8,635 patients. Information on 2,424 study subjects with valid gold-standard ALSFRS slopes will be used in our processing, modeling, and analysis.
Summary: The time points for all longitudinally varying data elements will be aggregated into signature vectors. This will facilitate the modeling and prediction of ALSFRS slope changes over the first three months (baseline to month 3).

Fig. 1.4 Outline of an amyotrophic lateral sclerosis (Lou Gehrig's disease) case-study
• Identify the most highly significant variables that have power to jointly
predict the progression of ALS (in terms of clinical outcomes like ALSFRS and
muscle function).
• Provide a decision tree prediction of adverse events based on subject phenotype
and 0–3-month clinical assessment changes (Fig. 1.4).
1.2.6 Neurodegeneration
with mild cognitive impairment (MCI), and 225 asymptomatic normal controls
(NC). Their sMRI data were parcellated using BrainParser, and the 80 most
important neuroimaging biomarkers were extracted using the global shape analysis
pipeline workflow. Using a pipeline implementation of Plink, the authors
obtained 80 SNPs highly associated with the imaging biomarkers. The authors
observed significant
number of repeat units present or by the length of the repeat sequence. STRs are
surrounded by nonvariable segments of DNA known as flanking regions. The
STR allele in Fig. 1.7 could be denoted by “6”, as the repeat unit (GATA) repeats
6 times, or as 70 base pairs (bps), because it is 70 bases long, including the
starting/ending flanking regions. Different alleles of the same STR may
correspond to different numbers of GATA repeats with the same flanking
regions.
Fig. 1.6 Indices of the 56 regions of interest (ROIs): A and B – extracted by the BrainParser
software using the LPBA40 brain atlas
1.2.9 Neuroimaging-Genetics
clinical, and cognitive data. A unique feature of this architecture is the graphical
user interface to the Pipeline environment. Through its client-server architecture,
the Pipeline environment provides a graphical user interface for designing,
executing, monitoring, validating, and disseminating complex protocols that utilize
diverse suites of software tools and web services. These pipeline workflows are
represented as portable Extensible Markup Language (XML) objects, which
transfer the execution instructions and user specifications from the client user
machine to remote pipeline servers for distributed computing. Using Alzheimer’s
and Parkinson’s data, this study provides examples of translational applications
using this infrastructure (Figs. 1.8 and 1.9).
Fig. 1.8 A collage of modules and pipeline workflows from genomic sequence analyses
Table 1.1 The characteristic six dimensions of Big biomedical and healthcare data

BD dimension   Necessary techniques, tools, services, and support infrastructure
Size           Harvesting and management of vast amounts of data
Complexity     Wranglers for dealing with heterogeneous data
Incongruency   Tools for data harmonization and aggregation
Multisource    Transfer and joint modeling of disparate elements
Multiscale     Macro- to meso- to microscale observations
Incomplete     Reliable management of missing data
researchers, practitioners, and policy makers alike. A review of many biomedical,
health informatics, and clinical studies suggests that there are indeed common
characteristics of complex big data challenges. For instance, imagine analyzing the
observational data of thousands of Parkinson’s disease patients, based on tens of
thousands of signature biomarkers derived from multisource imaging, genetics,
and clinical, physiologic, phenomics, and demographic data elements. IBM
defined the qualitative characteristics of Big Data as the 4 Vs: Volume, Variety,
Velocity, and Veracity (additional V-qualifiers can be added).
More recently (PMID: 26998309), we defined a constructive characterization
of Big Data that clearly identifies the methodological gaps and the necessary tools
to handle such archives; see Table 1.1.
The pipeline environment provides a large tool chest of software and services that
can be integrated, merged, and processed. The Pipeline workflow library and the
workflow miner illustrate much of the functionality that is available. Java-based
and HTML5 webapp graphical user interfaces (GUIs) provide access to a powerful
4,000 core grid compute server (Fig. 1.10).
1.7 Examples of Data Repositories, Archives, and Services
There are many sources of data available on the Internet. A number of them
provide open access to the data based on FAIR (Findable, Accessible,
Interoperable, Reusable) principles. Below are examples of open-access data
sources that can be used to test the techniques presented in this textbook. We
demonstrate the tasks of retrieval, manipulation, processing, analytics, and
visualization using example datasets from these archives.
• SOCR Wiki Data, http://wiki.socr.umich.edu/index.php/SOCR_Data
• SOCR Canvas datasets, https://umich.instructure.com/courses/38100/files/folder/data
1.8 DSPA Expectations
http://pipeline.loni.usc.edu/webapp
Fig. 1.10 The pipeline environment provides a client-server platform for designing, executing,
tracking, sharing, and validating complex data analytic protocols

Chapter 2
Foundations of R
There are many different classes of software that can be used for data
interrogation, modeling, inference, and statistical computing. Among these are R,
Python, Java, C/C++, Perl, and many others. The table below compares R to
various other statistical analysis software packages; a more detailed comparison
is available online (Fig. 2.1):
https://en.wikipedia.org/wiki/Comparison_of_statistical_packages.
The reader may also review the following two comparisons of various
statistical computing software packages:
• UCLA Stats Software Comparison
• Wikipedia Stats Software Comparison
Let’s start by looking at an exemplary R script that shows the estimates of the
citations of three statistical computing software packages over two decades (1995–
2015). More details about these command lines will be presented in later chapters.
However, it’s worth looking at the four specific steps, each indicated by a
Statistical Software: R
Advantages: R is actively maintained (developers, packages). Excellent connectivity to various types of data and other systems. Versatile for solving problems in many domains. It's free, open-source code. Anybody can access/review/extend the source code. R is very stable and reliable. If you change or redistribute the R source code, you have to make those changes available for anybody else to use. R runs anywhere (platform agnostic). Extensibility: R supports extensions, e.g., for data manipulation, statistical modeling, and graphics. An active and engaged community supports R. Unparalleled question-and-answer (Q&A) websites. R connects with other languages (Java/C/JavaScript/Python/Fortran) and database systems, as well as other programs like SAS and SPSS. Other packages have add-ons to connect with R. SPSS has incorporated a link to R, and SAS has protocols to move data and graphics between the two packages.
Disadvantages: Mostly a scripting language. Steeper learning curve.

Statistical Software: SAS
Advantages: Large datasets. Commonly used in business and government.
Disadvantages: Expensive. Somewhat dated programming language. Expensive/proprietary.

Statistical Software: Stata
Advantages: Easy statistical analyses.
Disadvantages: Mostly classical stats.

Statistical Software: SPSS
Advantages: Appropriate for beginners. Simple interfaces.
Disadvantages: Weak in more cutting-edge statistical procedures; lacking in robust methods and survey methods.

Fig. 2.1 Comparison of several statistical software platforms (R, SAS, Stata, SPSS)
Fig. 2.2 (plot): Citations (y-axis, ~50,000–150,000) by Year (x-axis, 1995–2015) for Software = R, SAS, SPSS
line of code: (1) we start by loading two of the necessary R packages, for data
transformation (reshape2) and visualization (ggplot2); (2) loading the software
citation data from the Internet; (3) reformatting the data; and (4) displaying the
composite graph of citations over time (Fig. 2.2).
require(ggplot2)
require(reshape2)
Data_R_SAS_SPSS_Pubs <- read.csv('https://umich.instructure.com/files/2361245/download?download_frd=1', header=T)
df <- data.frame(Data_R_SAS_SPSS_Pubs)
# convert to long format (http://www.cookbook-r.com/Manipulating_data/Converting_data_between_wide_and_long_format/)
df <- melt(df, id.vars = 'Year', variable.name = 'Software')
ggplot(data=df, aes(x=Year, y=value, color=Software, group = Software)) +
  geom_line() + geom_line(size=4) + labs(x='Year', y='Citations')
2.2 Getting Started
R is free software that can be installed on any computer. The R website is
http://R-project.org. There you can download the shell-based R environment
following this protocol:
• click download CRAN in the left bar
• choose a download site
• choose your operating system (e.g., Windows, Mac, Linux)
• click base
• download the latest version of R (3.4 or higher) for your specific operating
system (e.g., Windows).
For many readers, it’s best to also install and run R via RStudio GUI (graphical
user interface). To install RStudio, go to: http://www.rstudio.org/ and do the
following:
• click Download RStudio
• click Download RStudio Desktop
• click Recommended For Your System
• download the .exe file and run it (choose default answers for all questions)
2.2.3 RStudio GUI Layout
2.3 Help
R provides documentation for different R functions. The function call to get this
documentation is help(). Just type help(topic) in the R console to get detailed
explanations for each R topic or function. Another way is to call ?topic, which is
even shorter, or, more generally, ??topic.
For example, if we want to check the documentation of the function for linear
models (i.e., the function lm()), we can use either of the following:
help(lm)
?lm
2.4 Simple Wide-to-Long Data format Translation
rawdata_wide
2.5 Data Generation

Popular data generation functions are c(), seq(), rep(), and data.frame().
Sometimes, we may also use list() and array() to generate data.
c()
c() creates a (column) vector. With the option recursive=T, it descends through lists
combining all elements into one vector.
a <- c(1, 2, 3, 5, 6, 7, 10, 1, 4)
a
## [1]  1  2  3  5  6  7 10  1  4
c(list(A = c(Z = 1, Y = 2), B = c(X = 7), C = c(W = 7, V = 3, U = -1.9)), recursive = TRUE)
## A.Z A.Y B.X C.W C.V C.U
## 1.0 2.0 7.0 7.0 3.0 -1.9
When combined with list(), c(..., recursive = TRUE) flattens the list with three
members A, B, and C into a single named vector containing all of the information.
seq(from, to)
seq(from, to) generates a sequence. The option by= specifies the increment; the
option length= specifies the desired length. Also, seq(along=x) generates the
sequence 1, 2, ..., length(x), which is useful in loops for creating an index for each
element of x.
seq(1, 20, by=0.5)
##  [1]  1.0  1.5  2.0  2.5  3.0  3.5  4.0  4.5  5.0  5.5  6.0  6.5  7.0  7.5
## [15]  8.0  8.5  9.0  9.5 10.0 10.5 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5
## [29] 15.0 15.5 16.0 16.5 17.0 17.5 18.0 18.5 19.0 19.5 20.0
## [1] 1 2 3 4
rep(x, times)
rep(x, times) creates a sequence that repeats x a specified number of times. The
option each= repeats each element of x the given number of times before moving
on to the next element.
rep(c(1, 2, 3), 4)
## [1] 1 2 3 1 2 3 1 2 3 1 2 3
rep(c(1, 2, 3), each=4)
## [1] 1 1 1 1 2 2 2 2 3 3 3 3
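The list l indexed below is not defined in this excerpt; a minimal definition consistent with the printed output (an assumption, not the book's original example) could be:

l <- list(a = c(1, 2, 3), b = "hi", c = 1:5)   # hypothetical list with named members a, b, c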
l$a[[2]]
## [1] 2
l$b
## [1] "hi"
Note that R uses 1-based numbering rather than 0-based like some other
languages (C/Java), so the first element of a list has index 1.
array(x, dim=)
array(x, dim=) creates an array with specific dimensions. For example,
dim=c(3, 4, 2) means two 3×4 matrices. We use [] to extract specific elements of
the array: [2, 3, 1] means the element at the second row, third column of the first
page. Leaving one index in the dimensions empty returns a specific row, column,
or page; [2, , 1] means the second row of the first page. See Fig. 2.3.
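To make the array indexing concrete, here is a small illustrative sketch (not from the original text):

# A 3x4x2 array: two "pages", each a 3x4 matrix filled column-by-column
arr <- array(1:24, dim = c(3, 4, 2))
arr[2, 3, 1]   # element at row 2, column 3 of the first page
## [1] 8
arr[2, , 1]    # the entire second row of the first page
## [1]  2  5  8 11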
2.6 Input/Output (I/O)

The first pair of functions we will discuss are save() and load(): load() reloads
datasets previously written to disk with the save() function.
Let's create some data first.
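The save()/load() example itself is not included in this excerpt; a minimal sketch of the intended workflow (file name is illustrative) might look like this:

x <- rnorm(10)                       # some data to persist
y <- letters[1:5]
save(x, y, file = "mydata.RData")    # write both objects to disk
rm(x, y)                             # remove them from the workspace
load("mydata.RData")                 # restore x and y from the file
ls()                                 # confirm the objects are back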
data(x) loads the specified data sets and library(x) loads the necessary
add-on packages.
data("iris")
summary(iris)
##   Sepal.Length    Sepal.Width     Petal.Length    Petal.Width   
##  Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100  
##  1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300  
##  Median :5.800   Median :3.000   Median :4.350   Median :1.300  
##  Mean   :5.843   Mean   :3.057   Mean   :3.758   Mean   :1.199  
##  3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800  
##  Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500  
##        Species  
##  setosa    :50  
##  versicolor:50  
##  virginica :50  
read.table(file) reads a file in table format and creates a data frame from it.
The default separator sep="" is any whitespace. Use header=TRUE to read the
first line as a header of column names. Use as.is=TRUE to prevent character
vectors from being converted to factors. Use comment.char="" to prevent "#"
from being interpreted as a comment. Use skip=n to skip n lines before reading
data.
See the help for options on row naming, NA treatment, and others.
Let’s use read.table() to read a text file in our class file.
data.txt <- read.table("https://umich.instructure.com/files/1628628/download?download_frd=1", header=T, as.is=T)  # 01a_data.txt
summary(data.txt)
##      Name               Team             Position             Height    
##  Length:1034        Length:1034        Length:1034        Min.   :67.0  
##  Class :character   Class :character   Class :character   1st Qu.:72.0  
##  Mode  :character   Mode  :character   Mode  :character   Median :74.0  
##                                                            Mean   :73.7  
##                                                            3rd Qu.:75.0  
##                                                            Max.   :83.0  
##      Weight           Age       
##  Min.   :150.0   Min.   :20.90  
##  1st Qu.:187.0   1st Qu.:25.44  
##  Median :200.0   Median :27.93  
##  Mean   :201.7   Mean   :28.74  
##  3rd Qu.:215.0   3rd Qu.:31.23  
##  Max.   :290.0   Max.   :48.52  
Table 2.3 Matrix indexing in R

Expression     Explanation
x[i, j]        Element at row i, column j
x[i, ]         Row i
x[, j]         Column j
x[, c(1, 3)]   Columns 1 and 3
x["name", ]    Row named "name"
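For instance, a small sketch (not from the original text) illustrating these indexing forms:

x <- matrix(1:12, nrow = 3, dimnames = list(c("r1", "r2", "r3"), NULL))
x[2, 3]        # element at row 2, column 3
## [1] 8
x[2, ]         # the entire second row
x[, c(1, 3)]   # columns 1 and 3
x["r2", ]      # the row selected by its name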
The following functions test whether each data element is of a specific type:
is.na(x), is.null(x), is.array(x), is.data.frame(x), is.numeric(x), is.complex(x),
is.character(x), ...
For a complete list, type methods(is) in the R console. These functions return
logical (TRUE or FALSE) values, one statement for each element in the dataset.
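A brief illustration (not in the original text):

z <- c(4.5, NA, 7)
is.na(z)           # one logical value per element
## [1] FALSE  TRUE FALSE
is.numeric(z)
## [1] TRUE
is.character(z)
## [1] FALSE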
length(x) gives us the number of elements in x.
## [1] 4
class(x) gets or sets the class of x. Note that we can use unclass(x) to remove
the class attribute of x.
sort(x)
## [1]  1  1  2  3  5 10 40
rev(sort(x))
## [1] 40 10  5  3  2  1  1
cut(x, breaks) divides x into intervals of the same length (returning a factor).
breaks is the number of cut intervals or a vector of cut points: cut divides the
range of x into intervals and codes the values in x according to the intervals they
fall into.
x
## [1]  1  5  2  1 10 40  3
cut(x, 3)
x
## [1]  1  5  2  1 10 40  3
which(x==2)
## [1] 3
na.omit(df)
## a b
## 1 1 1
## 2 2 3
## 4 4 9
## 5 5 8
unique(x) If x is a vector or a data frame, it returns a similar object but with the
duplicate elements suppressed.
df1 <- data.frame(a=c(1, 1, 7, 6, 8), b=c(1, 1, NA, 9, 8))
df1
##   a  b
## 1 1  1
## 2 1  1
## 3 7 NA
## 4 6  9
## 5 8  8
unique(df1)
##   a  b
## 1 1  1
## 3 7 NA
## 4 6  9
## 5 8  8
table(x) returns a table with the different values of x and their frequencies
(typically for integers or factors). Also check prop.table().
v <- c(1, 2, 4, 2, 2, 5, 6, 4, 7, 8, 8)
table(v)
## v
## 1 2 4 5 6 7 8 
## 1 3 2 1 1 1 2
subset(x, ...) returns a selection of x with respect to criteria ... (typically ... are
comparisons like x$V1 < 10). If x is a data frame, the option select= gives the
variables to be kept or dropped, using a minus sign to drop.
sub<-subset(df1, df1$a>5); sub
## a b
## 3 7 NA
## 4 6 9
## 5 8 8
sub <- subset(df1, select=-a)
sub
## b
## 1 1
## 2 1
## 3 NA
## 4 9
## 5 8
sample(x, size) randomly samples size elements from the vector x without
replacement; the option replace=TRUE allows resampling with replacement.
v
## [1] 1 2 4 2 2 5 6 4 7 8 8
## [1] 7 8 1 6 1 1 7 8 1 7 8 1 6 7 8 7 1 6 8 8
prop.table(x, margin=) expresses table entries as fractions of the marginal table.
prop.table(table(v))
## v
##          1          2          4          5          6          7          8 
## 0.09090909 0.27272727 0.18181818 0.09090909 0.09090909 0.09090909 0.18181818
2.11 Math Functions
Basic math functions like sin, cos, tan, asin, acos, atan, atan2, log, log10, exp, and
"set" functions union(x, y), intersect(x, y), setdiff(x, y), setequal(x, y),
is.element(el, set) are available in R.
lsf.str("package:base") displays all functions built into a specific R
package (here, base).
Table 2.4 lists additional functions that you might need when using R for
calculations.
Note: many math functions have a logical parameter na.rm = FALSE that
controls missing data (NA) removal.
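A brief illustration of the set functions mentioned above (a small sketch, not from the original text):

A <- c(1, 2, 3, 4)
B <- c(3, 4, 5)
union(A, B)        # 1 2 3 4 5
intersect(A, B)    # 3 4
setdiff(A, B)      # 1 2
setequal(A, B)     # FALSE
is.element(3, A)   # TRUE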
The following table summarizes basic operation functions. We will discuss this
topic in detail in Chap. 5 (Table 2.5).
mat1 <- cbind(c(1, -1/5), c(-1/3, 1))
mat1.inv <- solve(mat1)
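As a quick sanity check (an addition, not in the original text), multiplying the matrix by its computed inverse should return, up to rounding, the 2×2 identity matrix:

mat1 %*% mat1.inv   # matrix product; expect approximately the identity matrix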
df1
##   a  b
## 1 1  1
## 2 1  1
## 3 7 NA
## 4 6  9
## 5 8  8
apply(df1, 2, mean, na.rm=T)
##    a    b 
## 4.60 4.75
Note that we can add options for FUN after the function name. lapply(X, FUN)
applies FUN to each member of the list X. If X is a data frame, it will apply FUN
to each column and return a list.
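For example, a minimal sketch (not in the original) reusing df1 from above:

lapply(df1, mean, na.rm = TRUE)   # returns a list, one element per column
## $a
## [1] 4.6
## 
## $b
## [1] 4.75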
tapply(X, INDEX, FUN=) applies FUN to each cell of the ragged array given by X
with indices equal to INDEX. Note that X is an atomic object, typically a vector.
v
## [1] 1 2 4 2 2 5 6 4 7 8 8
fac <- factor(rep(1:3, length = 11), levels = 1:3)
table(fac)
## fac
## 1 2 3 
## 4 4 3 
tapply(v, fac, sum)
##  1  2  3 
## 17 16 16
by(data, INDEX, FUN) applies FUN to the data frame data, subset by INDEX.
by(df1, df1[, 1], sum)
## df1[, 1]: 1
## [1] 4
## --------------------------------------------------------
## df1[, 1]: 6
## [1] 15
## --------------------------------------------------------
## df1[, 1]: 7
## [1] NA
## --------------------------------------------------------
## df1[, 1]: 8
## [1] 16
This code applies the sum function to df1 using column 1 as an index.
merge(a, b) merges two data frames by common columns or row names. We can
use the option by= to specify the index column.
df2 <- data.frame(a=c(1, 1, 7, 6, 8), c=1:5)
df2
##   a c
## 1 1 1
## 2 1 2
## 3 7 3
## 4 6 4
## 5 8 5
df3 <- merge(df1, df2, by="a")
df3
## a b c
## 1 1 1 1
## 2 1 1 2
## 3 1 1 1
## 4 1 1 2
## 5 6 9 4
## 6 7 NA 3
## 7 8 8 5
2.13 Advanced Data Processing

xtabs(a ~ b, data = x) creates a contingency table from cross-classifying factors.
DF <- as.data.frame(UCBAdmissions)
## 'DF' is a data frame with a grid of the factors and the counts
## in variable 'Freq'.
DF
##       Admit Gender Dept Freq
## 1  Admitted   Male    A  512
## 2  Rejected   Male    A  313
## 3  Admitted Female    A   89
…
## 23 Admitted Female    F   24
## 24 Rejected Female    F  317
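The xtabs() call itself appears to be elided here; one plausible usage (an assumption, not necessarily the book's exact code) cross-classifies admissions by gender:

xtabs(Freq ~ Gender + Admit, data = DF)
##         Admit
## Gender   Admitted Rejected
##   Male       1198     1493
##   Female      557     1278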
aggregate(df3, by=list(rep(1:3, length=7)), sum)
##   Group.1  a  b c
## 1       1 10 10 8
## 2       2  7 10 6
## 3       3  8 NA 4
The code above applies the function sum to the data frame df3 according to the
index created by list(rep(1:3, length=7)).
stack(x, ...) transforms data stored as separate columns in a data frame or a list
into a single column, and unstack(x, ...) is the inverse of stack().
stack(df3)
## values ind
## 1 1 a
## 2 1 a
## 3 1 a
…
## 20 3 c
## 21 5 c
unstack(stack(df3))
## a b c
## 1 1 1 1
## 2 1 1 2
## 3 1 1 1
## 4 1 1 2
## 5 6 9 4
## 6 7 NA 3
## 7 8 8 5
reshape(x, ...) reshapes a data frame between "wide" format, with repeated
measurements in separate columns of the same record, and "long" format, with the
repeated measurements in separate records. Use direction="wide" or
direction="long".
df4 <- data.frame(school = rep(1:3, each = 4), class = rep(9:10, 6),
                  time = rep(c(1, 1, 2, 2), 3), score = rnorm(12))
wide <- reshape(df4, idvar = c("school", "class"), direction =
"wide") wide
## school class score.1
score.2 ## 1 1 9 -0.1575202
-1.415503816 ## 2 1 10
0.5804452 1.754559537 ## 5 2
9 0.1553872 1.693809827 ## 6 2
10 -0.7540783 0.478035367
## 9 3 9 -0.6490757
-0.002922609 ## 10 3 10
-0.2122064 0.276259031
long <- reshape(wide, idvar = c("school", "class"), direction =
"long") long
## school class time
score.1 ## 1.9.1 1 9 1
-0.157520208 ## 1.10.1 1 10
1 0.580445243 ## 2.9.1 2
9 1 0.155387189 ## 2.10.1
2 10 1 -0.754078345 ## 3.9.1
3 9 1 -0.649075721 ## 3.10.1
3 10 1 -0.212206430 ## 1.9.2
1 9 2 -1.415503816 ## 1.10.2
1 10 2 1.754559537 ## 2.9.2
2 9 2 1.693809827 ## 2.10.2
2 10 2 0.478035367 ## 3.9.2
3 9 2 -0.002922609 ## 3.10.2
3 10 2 0.276259031
Notes
• The x in this function has to be longitudinal data.
• The call to rnorm used to generate df4 produces different results each time it is
run, unless a seed (e.g., set.seed(1234)) is set to ensure reproducibility of the
random-number generation.
2.14 Strings
Note that characters at start and stop indexes are inclusive in the output.
strsplit(x, split) splits x according to the substring split. Use fixed=TRUE for
non-regular expressions.
## [[1]]
## [1] "a" "b" "c"
grep("[a-z]", letters)
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
21 22 23 ## [24] 24 25 26
gsub(pattern, replacement, x) replacement of matches determined by regular
expression matching. sub() is the same but only replaces the first occurrence.
a<-c("e", 0, "kj", 10, ";")
gsub("[a-z]", "letters", a)
## [1] "letters" "0" "lettersletters" "10"
## [5] ";" sub("[a-z]",
"letters", a)
## [1] 2
The first one returns NA and, depending on the R version, possibly a warning,
because all elements have the pattern "m".
nchar(x) returns the number of characters in x.
Dates and Times
The class Date has dates without times. POSIXct() has dates and times, including
time zones. Comparisons (e.g., >), seq(), and difftime() are useful.
?DateTimeClasses gives more information; see also the package chron.
as.Date(s) and as.POSIXct(s) convert to the respective class; format(dt)
converts to a string representation. The default string format is 2001-02-21.
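A short illustration of these classes (a sketch, not from the original text):

d1 <- as.Date("2001-02-21")
d2 <- as.Date("2001-03-01")
d2 - d1                            # Time difference of 8 days
difftime(d2, d1, units = "days")   # the same comparison with explicit units
format(d1, "%Y/%m/%d")             # "2001/02/21" -- alternative string representation
as.POSIXct("2001-02-21 10:30:00", tz = "UTC")   # date-time with a time zone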
2.15 Plotting
Fig. 2.5 Comparing a Cauchy sample distribution to a Normal sample distribution via Q-Q plot (Y vs. X)

[Plot panels: "Normal Q-Q Plot of the data"; axes: Sample Quantiles of the X (Normal) Data and Sample Quantiles of the Y (Cauchy) Data]

Fig. 2.7 Using isotropic scales to compare Cauchy sample to Normal quantiles (y vs. x)
# Subsampling
x <- matrix(rnorm(100), ncol = 5)
y <- c(1, seq(19))
z <- cbind(x, y)
z.df <- data.frame(z)
z.df
## V1 V2 V3 V4 V5 y
## 1 -0.5202336 0.5695642 -0.8104910 -0.775492348 1.8310536 1
## 2 -1.4370163 -3.0437691 -0.4895970 -0.018963095 2.2980451 1
## 3 1.1510882 -1.5345341 -0.5443915 1.176473324 -0.9079013 2
## 4 0.2937683 -1.1738992 1.1329062 0.050817201 -0.1975722 3
## 5 0.1011329 1.1382172 -0.3353099 1.980538873 -1.4902878 4
## 6 -0.3842767 1.7629568 -0.1734520 0.009448173 0.4166688 5
## 7 -0.1897151 -0.2928122 0.9917801 0.147767309 -0.3447306 6
## 8 -1.5184068 -0.6339424 -1.4102368 0.471592965 1.0748895 7
## 9 -0.6475764 0.3884220 1.5151532 -1.977356193 -0.9561620 8
## 10 0.1476949 -0.2219758 0.6255156 -0.755406330 -0.3411347 9
## 11 1.1927071 -0.2031697 0.6926743 1.263878207 -0.2628487 10
## 12 0.6117842 -0.3206093 -1.0544746 0.074048308 -0.3483535 11
## 13 1.7865743 -0.9457715 -0.2907310 1.520606318 2.3182403 12
## 14 -0.2075467 0.6440087 0.6277978 -1.670570757 0.1356807 13
## 15 0.2087459 1.2049360 1.2614003 1.102632278 0.4413631 14
## 16 -0.8663415 -0.4149625 1.3974565 0.432508163 -0.7408295 15
## 17 -0.4808447 0.6163081 -0.8693709 -0.830734957 -0.2094428 16
## 18 -0.3456697 2.5622196 -0.9398627 0.363765941 -1.4032376 17
## 19 1.1240451 -0.1887518 -0.6514363 -0.988661412 -1.2906608 18
## 20 -0.9783920 1.0246003 -0.6001832 -0.568181332 0.2374808 19
names(z.df)
## [1] "V1" "V2" "V3" "V4" "V5" "y"
# subsetting rows
z.sub <- subset(z.df, y > 2 & (y < 10 | V1 > 0))
z.sub
##            V1         V2         V3           V4         V5  y
##            V1         V2
## 1  -0.5202336  0.5695642
## 2  -1.4370163 -3.0437691
## 3   1.1510882 -1.5345341
## 4   0.2937683 -1.1738992
## 5   0.1011329  1.1382172
## 6  -0.3842767  1.7629568
## 7  -0.1897151 -0.2928122
## 8  -1.5184068 -0.6339424
## 9  -0.6475764  0.3884220
## 10  0.1476949 -0.2219758
## 11  1.1927071 -0.2031697
## 12  0.6117842 -0.3206093
## 13  1.7865743 -0.9457715
## 14 -0.2075467  0.6440087
## 15  0.2087459  1.2049360
## 16 -0.8663415 -0.4149625
## 17 -0.4808447  0.6163081
## 18 -0.3456697  2.5622196
## 19  1.1240451 -0.1887518
## 20 -0.9783920  1.0246003
2.18 Graphics Parameters
These can be set globally with par(...). Many can be passed as parameters to
plotting commands (Table 2.8). adj controls text justification (adj = 0 left-justified,
adj = 0.5 centered, adj = 1 right-justified).
Table 2.8 Common plotting functions for displaying variable relationships subject to conditioning (trellis plots), available in the R lattice package

Expression                 Explanation
xyplot(y~x)                Bivariate plots (with many functionalities)
barchart(y~x)              Histogram of the values of y with respect to those of x
dotplot(y~x)               Cleveland dot plot (stacked plots line-by-line and column-by-column)
densityplot(~x)            Density functions plot
histogram(~x)              Histogram of the frequencies of x
bwplot(y~x)                "Box-and-whiskers" plot
qqmath(~x)                 Quantiles of x with respect to the values expected under a theoretical distribution
stripplot(y~x)             Single-dimension plot; x must be numeric, y may be a factor
qq(y~x)                    Quantiles to compare two distributions; x must be numeric, y may be numeric, character, or factor but must have two "levels"
splom(~x)                  Matrix of bivariate plots
parallel(~x)               Parallel coordinates plot
levelplot(z~x*y | g1*g2)   Colored plot of the values of z at the coordinates given by x and y (x, y, and z are all of the same length)
wireframe(z~x*y | g1*g2)   3D surface plot
cloud(z~x*y | g1*g2)       3D scatter plot
bg specifies the color of the background (e.g., bg="red", bg="blue"; the list of the
657 available colors is displayed with colors()).
bty controls the type of box drawn around the plot. Allowed values are "o", "l",
"7", "c", "u" or "]" (the box looks like the corresponding character). If bty="n" the
box is not drawn.
cex a value controlling the size of texts and symbols with respect to the default.
The following parameters provide the same control for the numbers on the axes
(cex.axis), the axis labels (cex.lab), the title (cex.main), and the subtitle (cex.sub).
col controls the color of symbols and lines. Use color names ("red", "blue"; see
colors()) or "#RRGGBB" strings; see rgb(), hsv(), gray(), and rainbow(); as for cex
there are col.axis, col.lab, col.main, col.sub.
font an integer which controls the style of text (1: normal, 2: italics, 3: bold, 4:
bold italics); as for cex there are font.axis, font.lab, font.main, font.sub.
las an integer which controls the orientation of the axis labels (0: parallel to the
axes, 1: horizontal, 2: perpendicular to the axes, 3: vertical).
lty controls the type of lines; it can be an integer or string (1: "solid", 2: "dashed",
3: "dotted", 4: "dotdash", 5: "longdash", 6: "twodash") or a string of up to eight
characters (between "0" and "9") which specifies alternately the length, in points
or pixels, of the drawn elements and the blanks; for example, lty="44" will have
the same effect as lty=2.
lwd a numeric which controls the width of lines, default = 1.
mar is a vector of 4 numeric values which control the space between the axes and the border of the graph, of the form c(bottom, left, top, right); the default values are c(5.1, 4.1, 4.1, 2.1).
mfcol is a vector of the form c(nr, nc) which partitions the graphic window as a matrix of nr rows and nc columns; the plots are then drawn in columns.
mfrow is the same, but the plots are drawn by row.
pch controls the type of symbol, either an integer between 1 and 25 or any single character within "".
ps is an integer which controls the size, in points, of text and symbols.
pty is a character which specifies the type of the plotting region, "s": square, "m": maximal.
tck is a value which specifies the length of tick-marks on the axes as a fraction of the smallest of the width or height of the plot; if tck=1, a grid is drawn.
tcl is a value which specifies the length of tick-marks on the axes as a fraction of the height of a line of text (by default tcl=-0.5).
xaxt: if xaxt="n", the x-axis is set but not drawn (useful in conjunction with axis(side=1, ...)).
yaxt: if yaxt="n", the y-axis is set but not drawn (useful in conjunction with axis(side=2, ...)).
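For instance, here is a minimal sketch of setting several of these parameters globally and locally (the specific values chosen below are illustrative assumptions):
# set global graphics parameters: a 2x1 plot layout, tighter margins, larger axis labels
par(mfrow=c(2, 1), mar=c(4.1, 4.1, 2.1, 1.1), cex.lab=1.2)
x <- rnorm(100)
# pass parameters directly to individual plotting calls: color, symbol type, line type/width, box type
plot(x, col="blue", pch=17, main="Scatter of x")
plot(density(x), lty=2, lwd=2, bty="n", main="Density of x")
par(mfrow=c(1, 1))   # reset the layout to a single panel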
Lattice (Trellis) graphics.
In the typical Lattice formula, y~x|g1*g2, combinations of the optional conditioning variables g1 and g2 are plotted on separate panels. Lattice functions take many of the same arguments as base graphics, plus data= (the data frame for the formula variables) and subset= (for subsetting). Use panel= to define a custom panel function (see apropos("panel") and ?lines). Lattice functions return an object of class trellis, which has to be printed to produce the graph; use print(xyplot(...)) inside functions where automatic printing doesn't work. Use lattice.theme and lset to change Lattice defaults.
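As a brief hedged illustration of the Lattice formula interface (using the built-in iris data, which is an assumption of this sketch, not part of the original example):
library(lattice)
# bivariate panels of petal length vs. width, conditioned on the Species grouping variable
print(xyplot(Petal.Length ~ Petal.Width | Species, data=iris, layout=c(3, 1)))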
2.20 Statistics
There are many R packages and functions for computing a wide spectrum of statistics. Below are some commonly used examples, and we will see many more throughout the text:
aov(formula) fits an analysis of variance model.
anova(fit, ...) computes analysis of variance (or deviance) tables for one or more fitted model objects.
density(x) computes kernel density estimates of x.
Other functions include binom.test(), pairwise.t.test(), power.t.test(), prop.test(), t.test(), ...; use help.search("test") to see details.
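A small hedged sketch of a few of these calls, again using the built-in iris data (an illustrative assumption):
t.test(iris$Sepal.Length[iris$Species == "setosa"], iris$Sepal.Length[iris$Species == "virginica"])
fit <- aov(Sepal.Length ~ Species, data=iris)   # one-way analysis of variance model
anova(fit)                                      # the corresponding ANOVA table
plot(density(iris$Sepal.Length))                # kernel density estimate of sepal length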
2.21 Distributions
2.21.1 Programming
Table 2.10 A fragment of the SOCR Health Evaluation and Linkage to Primary (HELP) Care dataset

ID    i2   age  treat  homeless  pcs  mcs  cesd  ...  female  Substance  racegrp
1     0    25   0      0         49   7    46    ...  0       Cocaine    Black
3     39   36   0      0         76   9    33    ...  0       Heroin     Black
...
100   81   22   0      0         37   17   19    ...  0       Alcohol    Other
Fig. 2.8 Histogram of a sample of 200 random Normal(m = 10, sd = 20) observations
data_1 <- read.csv("https://umich.instructure.com/files/1628625/download?download_frd=1", as.is=T, header=T)
# data_1 = read.csv(file.choose())
attach(data_1)   # to ensure all variables are accessible within R, e.g., using "age" instead of data_1$age
# i2: maximum number of drinks (standard units) consumed per day (in the past 30 days, range 0-184); see also i1
# treat: randomization group (0=usual care, 1=HELP clinic)
# pcs: SF-36 Physical Component Score (range 14-75)
# mcs: SF-36 Mental Component Score (range 7-62)
# cesd: Center for Epidemiologic Studies Depression scale (range 0-60)
# indtot: Inventory of Drug Use Consequences (InDUC) total score (range 4-45)
2.22 Data Simulation Primer
summary(data_1)
##        ID               i2              age            treat       
##  Min.   :  1.00   Min.   :  0.00   Min.   : 3.00   Min.   :0.0000  
##  1st Qu.: 24.25   1st Qu.:  1.00   1st Qu.:27.00   1st Qu.:0.0000  
##  Median : 50.50   Median : 15.50   Median :34.00   Median :0.0000  
##  Mean   : 50.29   Mean   : 27.08   Mean   :34.31   Mean   :0.1222  
##  3rd Qu.: 74.75   3rd Qu.: 39.00   3rd Qu.:43.00   3rd Qu.:0.0000  
##  Max.   :100.00   Max.   :137.00   Max.   :65.00   Max.   :2.0000  
##     homeless           pcs             mcs             cesd      
##  Min.   :0.0000   Min.   : 6.00   Min.   : 0.00   Min.   : 0.00  
##  1st Qu.:0.0000   1st Qu.:41.25   1st Qu.:20.25   1st Qu.:17.25  
##  Median :0.0000   Median :48.50   Median :29.00   Median :30.00  
##  Mean   :0.1444   Mean   :47.61   Mean   :30.49   Mean   :30.21  
##  3rd Qu.:0.0000   3rd Qu.:57.00   3rd Qu.:39.75   3rd Qu.:43.00  
##  Max.   :1.0000   Max.   :76.00   Max.   :93.00   Max.   :68.00  
##      indtot          pss_fr          drugrisk         sexrisk      
##  Min.   : 0.00   Min.   : 0.000   Min.   : 0.000   Min.   : 0.000  
##  1st Qu.:31.25   1st Qu.: 2.000   1st Qu.: 0.000   1st Qu.: 1.250  
##  Median :36.00   Median : 6.000   Median : 0.000   Median : 5.000  
##  Mean   :37.03   Mean   : 6.533   Mean   : 2.578   Mean   : 4.922  
##  3rd Qu.:45.00   3rd Qu.:10.000   3rd Qu.: 3.000   3rd Qu.: 7.750  
##  Max.   :60.00   Max.   :20.000   Max.   :23.000   Max.   :13.000  
##     satreat            female          substance           racegrp         
##  Min.   :0.00000   Min.   :0.00000   Length:90          Length:90         
##  1st Qu.:0.00000   1st Qu.:0.00000   Class :character   Class :character  
##  Median :0.00000   Median :0.00000   Mode  :character   Mode  :character  
##  Mean   :0.07778   Mean   :0.05556                                        
##  3rd Qu.:0.00000   3rd Qu.:0.00000                                        
##  Max.   :1.00000   Max.   :1.00000                                        
mean(data_1$age)
## [1] 34.31111
sd(data_1$age)
## [1] 11.68947
# i2       [0: 184]
# age      m=34, sd=12
# treat    {0, 1}
# homeless {0, 1}
# pcs      14-75
# mcs      7-62
# cesd     0-60
# indtot   4-45
# pss_fr   0-14
# drugrisk 0-21
# sexrisk
# satreat  (0=no, 1=yes)
# female   (0=no, 1=yes)
# racegrp  (black, white, other)
# Demographics variables
# Define number of subjects
NumSubj <- 282
NumTime <- 4
# Define data elements
# Cases
Cases <- c(2, 3, 6, 7, 8, 10, 11, 12, 13, 14, 17, 18, 20, 21, 22,
23, 24,
25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 37, 41, 42, 43, 44, 45, 53,
55, 58, 60, 62, 67, 69, 71, 72, 74, 79, 80, 85, 87, 90, 95, 97, 99,
100, 101, 106,
107, 109, 112, 120, 123, 125, 128, 129, 132, 134, 136, 139, 142,
147, 149,
153, 158, 160, 162, 163, 167, 172, 174, 178, 179, 180, 182, 192,
195, 201,
208, 211, 215, 217, 223, 227, 228, 233, 235, 236, 240, 245, 248,
250, 251,
254, 257, 259, 261, 264, 268, 269, 272, 273, 275, 279, 288, 289,
291, 296,
298, 303, 305, 309, 314, 318, 324, 325, 326, 328, 331, 332, 333,
334, 336,
338, 339, 341, 344, 346, 347, 350, 353, 354, 359, 361, 363, 364,
366, 367,
368, 369, 370, 371, 372, 374, 375, 376, 377, 378, 381, 382, 384,
385, 386,
387, 389, 390, 393, 395, 398, 400, 410, 421, 423, 428, 433, 435,
443, 447,
449, 450, 451, 453, 454, 455, 456, 457, 458, 459, 460, 461, 465,
466, 467,
470, 471, 472, 476, 477, 478, 479, 480, 481, 483, 484, 485, 486,
487, 488,
489, 492, 493, 494, 496, 498, 501, 504, 507, 510, 513, 515, 528,
530, 533,
537, 538, 542, 545, 546, 549, 555, 557, 559, 560, 566, 572, 573,
576, 582,
586, 590, 592, 597, 603, 604, 611, 619, 621, 623, 624, 625, 631,
633, 634, 635, 637, 640, 641, 643, 644, 645, 646, 647, 648, 649,
650, 652, 654, 656,
658, 660, 664, 665, 670, 673, 677, 678, 679, 680, 682, 683, 686,
687, 688,
689, 690, 692)
# Imaging Biomarkers
colnames(sim_PD_Data) <- c("Cases",
  "L_caudate_ComputeArea",
  "Sex", "Weight", "Age",
  "Dx", "chr12_rs34637584_GT", "chr17_rs11868035_GT",
  "UPDRS_part_I", "UPDRS_part_II", "UPDRS_part_III", "Time")
# some QC
summary(sim_PD_Data)
##      Cases       L_caudate_ComputeArea L_caudate_Volume
##  Min.   :  2.0   Min.   :525.0         Min.   :719.0
##  1st Qu.:158.0   1st Qu.:582.0         1st Qu.:784.0
##  Median :363.5   Median :600.0         Median :800.0
##  Mean   :346.1   Mean   :600.4         Mean   :800.3
##  3rd Qu.:504.0   3rd Qu.:619.0         3rd Qu.:819.0
##  Max.   :692.0   Max.   :667.0         Max.   :890.0

dim(sim_PD_Data)
## [1] 1128   12

head(sim_PD_Data)
…
Also see: http://wiki.socr.umich.edu/index.php/SMHS_DataSimulation.
2.23 Appendix
2.23.2 R Debugging
Most programs that give incorrect results are impacted by logical errors. When errors (bugs, exceptions) occur, we need to explore deeper; this procedure of identifying and fixing bugs is called "debugging".
Common R tools for debugging include traceback(), debug(), browser(), trace() and recover().
traceback(): Failing R functions report the error to the screen as soon as the error occurs. Calling traceback() afterwards shows in which function the error occurred; it prints the list of functions that were called before the error occurred, in reverse order.
For example (a minimal sketch, where the helper h1() is assumed to be undefined so that the nested call fails):
f1 <- function(x) { r <- x - g1(x); r }
g1 <- function(y) { r <- y * h1(y); r }   # h1() is not defined, so f1(10) errors inside g1(); traceback() then lists the call chain
debug()
traceback() does not tell you exactly where in the function the error occurred. To find out which line causes the error, we can step through the function using debug().
debug(foo) flags the function foo() for debugging, and undebug(foo) unflags it. When a function is flagged for debugging, each statement in the function is executed one at a time. After a statement is executed, the function suspends and the user can interact with the R console. This allows us to inspect the function line-by-line.
An example computing the sum of squared errors, SS.
## compute sum of squares
SS <- function(mu, x) { d <- x - mu; d2 <- d^2; ss <- sum(d2); ss }
set.seed(100)
x <- rnorm(100)
SS(1, x)
## to debug
debug(SS); SS(1, x)
## debugging in: SS(1, x)
## debug at <text>#2: {
## d <- x - mu
## d2 <- d^2
## ss <- sum(d2)
## ss
## }
## debug at <text>#3: d <- x - mu
## debug at <text>#4: d2 <- d^2
## debug at <text>#5: ss <- sum(d2)
## debug at <text>#6: ss
## exiting from: SS(1, x)
## [1] 202.5614519
In the debugging shell ("Browse[1]> "), users can:
• Enter n (next) to execute the current line and print the next one;
• Enter c (continue) to execute the rest of the function without stopping;
• Enter Q to quit the debugging;
• Enter ls() to list all objects in the local environment;
• Enter an object name, or print(object), to show the current value of that object.
Example:
Example:
debug(SS)
SS(1, x)
## debugging in: SS(1, x)
## debug at <text>#2: {
## d <- x - mu
## d2 <- d^2
## ss <- sum(d2)
## ss
## }
## debug at <text>#3: d <- x - mu
## debug at <text>#4: d2 <- d^2
## debug at <text>#5: ss <- sum(d2)
## debug at <text>#6: ss
## exiting from: SS(1, x)
## [1] 202.5614519
Browse[1]> n
debug: d <- x - mu              ## the next command
Browse[1]> ls()                 ## current environment
[1] "mu" "x"                    ## there is no d yet
Browse[1]> n                    ## go one step
debug: d2 <- d^2                ## the next command
Browse[1]> ls()                 ## current environment
[1] "d"  "mu" "x"               ## d has been created
Browse[1]> d[1:3]               ## first three elements of d
[1] -1.5021924 -0.8684688 -1.0789171
Browse[1]> hist(d)              ## histogram of d
Browse[1]> where                ## current position in the call stack
where 1: SS(1, x)
Browse[1]> n
debug: ss <- sum(d2)
Browse[1]> Q                    ## quit
undebug(SS)                     ## remove the debug flag, stop the debugging process
SS(1, x)                        ## calling SS again now runs without debugging
You can label a function for debugging while debugging another function.
f <- function(x) { r <- x - g(x); r }
g <- function(y) { r <- y * h(y); r }
h <- function(z) { r <- log(z); if (r < 10) r^2 else r^3 }

debug(f)    # flag f first (an assumed step), so that calling f(-1) opens the browser
f(-1)
Browse[1]> n
Browse[1]> debug(g)
Browse[1]> debug(h)
Browse[1]> n
Inserting a call to browser() in a function will pause the execution of a function
at the point where browser() is called. This is similar to using debug(), except you
can control where execution gets paused.
Example
h<-function(z) {
browser() ## a break point inserted
here r<-log(z); if(r<10) r^2 else r^3
} f(-1)
## Error in if (r < 10) r^2 else r^3: missing value where
TRUE/FALSE needed
Calling trace() on a function allows inserting new code into a function. The
syntax for trace() may be challenging.
as.list(body(h))
trace("h", quote(if (is.nan(r)) {browser()}), at=3, print=FALSE)
f(1)
f(-1)
trace("h", quote(if (z < 0) {z <- 1}), at=2, print=FALSE)
f(-1)
untrace("h")
During the debugging process, recover() allows checking the status of variables in upper-level functions. recover() can also be set as the error handler via options() (e.g., options(error = recover)). When a function throws an exception, execution stops at the point of failure; browsing the function calls and examining the environment may indicate the source of the problem.
You should be able to download and load the Foundations of R code in RStudio
and then run all the examples.
2.24 Assignments: 2. R Foundations
Create a Data Frame of the SOCR Parkinson’s Disease data and compute a
summary of three features you select.
Generate 1,000 standard normal variables and 1,200 Cauchy distributed random variables, and generate a quantile-quantile (Q-Q) probability plot of the two samples. Repeat this with 1,500 Student's t distributed random variables with df=20 and generate a quantile-quantile (Q-Q) probability plot.
2.24.6 Programming
Generate a function that computes the arithmetic average and compare it against
the mean() function using the simulation data you generated in the last question.
References
Some R fundamentals: http://wiki.socr.umich.edu/index.php/SMHS_Usage_Rfundamentals
The Software Carpentry Foundation.
Programming with R: http://swcarpentry.github.io/r-novice-inflammation
R for Reproducible Scientific Analysis: http://swcarpentry.github.io/r-novice-gapminder
A very gentle stats intro using R Book (Verzani): http://cran.r-project.org/doc/contrib/VerzaniSimpleR.pdf
Quick-R web examples: http://www.statmethods.net/index.html
R-tutor Introduction: http://www.r-tutor.com/r-introduction
R project Introduction: http://cran.r-project.org/doc/manuals/r-release/R-intro.html
UCLA ITS/IDRE R Resources: https://stats.idre.ucla.edu/r/
Chapter 3
Managing Data in R
In this Chapter, we will discuss strategies to import data and export results. Also,
we are going to learn the basic tricks we need to know about processing different
types of data. Specifically, we will illustrate common R data structures and
strategies for loading (ingesting) and saving (regurgitating) data. In addition, we
will (1) present some basic statistics, e.g., for measuring central tendency (mean,
median, mode) or dispersion (variance, quartiles, range); (2) explore simple plots;
(3) demonstrate the uniform and normal distributions; (4) contrast numerical and
categorical types of variables; (5) present strategies for handling incomplete
(missing) data; and (6) show the need for cohort-rebalancing when comparing
imbalanced groups of subjects, cases or units.
Let's start by extracting the Edgar Anderson's Iris Data from the package datasets. The iris dataset quantifies morphologic shape variation of Iris flowers from three related species (Iris setosa, Iris virginica and Iris versicolor), with 50 flowers measured from each species. Four shape features were measured from each sample: the length and the width of the sepals and petals (in centimeters). These data were used by Ronald Fisher in his 1936 linear discriminant analysis paper (Fig. 3.1).
data()
data(iris)
class(iris)
## [1] "data.frame"
Fig. 3.1 Definitions of petal width and length for the three iris flower genera used in the
example below
As an I/O (input/output) demonstration, after we load the iris data and examine
its class type, we can save it into a file named "myData.RData" and then reload it
back into R.
save(iris, file="myData.RData")
load("myData.RData")
water <- read.csv('https://umich.instructure.com/files/399172/download?download_frd=1', header=T)
water[1:3, ]
##   Year..string. WHO.region..string. Country..string.
## 1          1990              Africa          Algeria
## 2          1990              Africa           Angola
## 3          1990              Africa            Benin
##   Residence.Area.Type..string.
## 1                        Rural
## 2                        Rural
## 3                        Rural
##   Population.using.improved.drinking.water.sources......numeric.
## 1                                                              88
## 2                                                              42
## 3                                                              49
##   Population.using.improved.sanitation.facilities......numeric.
## 1                                                             77
## 2                                                              7
## 3                                                              0

colnames(water) <- c("year", "region", "country", "residence_area", "improved_water", "sanitation_facilities")
water[1:3, ]
##   year region country residence_area improved_water sanitation_facilities
## 1 1990 Africa Algeria          Rural             88                    77
## 2 1990 Africa  Angola          Rural             42                     7
## 3 1990 Africa   Benin          Rural             49                     0

which.max(water$year)
## [1] 913
# rowMeans(water[,5:6])
mean(water[,6], trim=0.08, na.rm=T)
## [1] 71.63629
This code loads CSV files that already include a header line listing the names of the variables. If we don't have a header in the dataset, we can use the header = FALSE option (https://umich.instructure.com/courses/38100/files/folder/Case_Studies). R will assign default names to the column variables of the dataset.
Simulation <- read.csv("https://umich.instructure.com/files/354289/download?download_frd=1", header = FALSE)
Simulation[1:3, ]
##   V1 V2  V3    V4       V5  V6  V7   V8     V9    V10      V11     V12
## 1 ID i2 age treat homeless pcs mcs cesd indtot pss_fr drugrisk sexrisk
## 2  1  0  25     0        0  49   7   46     37      0        1       6
## 3  2 18  31     0        0  48  34   17     48      0        0      11
##       V13    V14       V15     V16
## 1 satreat female substance racegrp
## 2       0      0   cocaine   black
## 3       0      0   alcohol   white
To save a data frame to a CSV file, we can use the write.csv() function. The option file = "a/local/file/path" allows us to specify the path of the saved file.
write.csv(iris, file = "C:/Users/iris.csv") # Iris data
Drinking Water
We can use the command str() to explore the structure of a dataset. For instance,
using the World Drinking Water dataset:
str(water)
## 'data.frame': 3331 obs. of  6 variables:
##  $ year                 : int  1990 1990 1990 1990 1990 1990 1990 1990 1990 1990 ...
##  $ region               : Factor w/ 6 levels "Africa","Americas",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ country              : Factor w/ 192 levels "Afghanistan",..: 3 5 19 23 26 27 30 32 33 37 ...
##  $ residence_area       : Factor w/ 3 levels "Rural","Total",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ improved_water       : num  88 42 49 86 39 67 34 46 37 83 ...
##  $ sanitation_facilities: num  77 7 0 22 2 42 27 12 4 11 ...
We can see that this (WorldDrinkingWater) dataset has 3331 observations and 6 variables. The output also gives us the class of each variable and its first few elements.
Summary statistics for numeric variables in the dataset could be accessed by using
the command summary() (Fig. 3.2).
summary(water$year)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1990 1995 2005 2002 2010 2012
summary(water[c("improved_water", "sanitation_facilities")])
## improved_water
sanitation_facilities ## Min. :
3.0 Min. : 0.00
71
## 1st Qu.: 77.0 1st Qu.: 42.00
## Median : 93.0 Median : 81.00
## Mean : 84.9 Mean : 68.87
## 3rd Qu.: 99.0 3rd Qu.: 97.00
## Max. :100.0 Max. :100.00
## NA's :32 NA's :135
plot(density(water$improved_water,na.rm = T))
Mean and median are two frequent measurements of the central tendency. Mean is
“the sum of all values divided by the number of values”. Median is the number in
the middle of an ordered list of values. In R, mean() and median() functions
provide us with these two measurements.
vec1 <- c(40, 56, 99)
mean(vec1)
## [1] 65
mean(c(40, 56, 99))
## [1] 65
median(vec1)
## [1] 56
median(c(40, 56, 99))
## [1] 56
# install.packages("psych");
library("psych")
geometric.mean(vec1, na.rm=TRUE)
## [1] 60.52866
The mode is the value that occurs most often in the dataset. It is often used for categorical data, where the mean and median are inappropriate measurements.
We can have one or more modes. In the water dataset, "Europe" and "Urban" are the modes for the region and residence area variables, respectively. These two variables are unimodal, i.e., each has a single mode. The year variable has two modes, 2000 and 2005, each with 570 counts, so it is an example of a bimodal variable. Data with two or more modes is called multimodal.
The mode is just one of the measures of central tendency. The best way to use it is to compare the count of the modal category to the counts of the other values. This helps us judge whether one or several categories dominate all others in the data. After that, we are able to interpret these common centrality measures.
In numeric datasets, the mode(s) corresponds to the highest bin(s) in the histogram. In this way, we can also examine whether numeric data is multimodal.
More information about measures of centrality is available here (http://wiki.socr.
umich.edu/index.php/AP_Statistics_Curriculum_2007_EDA_Center).
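For example, one simple way to identify the mode(s) of a categorical variable in base R is to tabulate it and keep the most frequent level(s); a minimal sketch using the renamed year column of the water data (the reported counts of 570 for 2000 and 2005 imply the output below):
year_counts <- table(water$year)
names(year_counts)[year_counts == max(year_counts)]   # the most frequent year categories
## [1] "2000" "2005"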
3.6 Measuring Spread: Quartiles and the Five-Number Summary
Q1 and Q3 are the 25th and 75th percentiles of the data, and the median (Q2) lies between them. The difference between Q3 and Q1 is called the interquartile range (IQR); the middle half of the data, free of extreme values, lies within the IQR.
In R, we use IQR() to calculate the interquartile range. If the data contain NA's, they are ignored by the function when we use the option na.rm = TRUE.
IQR(water$year)
## [1] 15
summary(water$improved_water)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
##     3.0    77.0    93.0    84.9    99.0   100.0      32 
IQR(water$improved_water, na.rm = T)
## [1] 22
Similar to the command summary() that we discussed earlier in this Chapter, the function quantile() can be used to obtain the five-number summary.
quantile(water$improved_water, na.rm = T)
We can also calculate specific percentiles in the data. For example, if we want the 20th and 60th percentiles, we can do the following.
quantile(water$improved_water, probs = c(0.2, 0.6), na.rm = T)
## 20% 60% 
##  71  97 
Using the seq() function, we can generate percentiles that are evenly-spaced.
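For instance, a hedged sketch (output omitted) requesting evenly spaced percentiles in steps of 20%:
quantile(water$improved_water, probs = seq(0, 1, by = 0.2), na.rm = T)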
A histogram is another way to show the spread of a numeric variable (see Chap. 4 for additional information). It uses a predetermined number of bins as containers for values to divide the original data. The height of the bins indicates frequency (Figs. 3.4 and 3.5).
We can see that the shapes of the two graphs are somewhat similar. Both show a left-skewed pattern (mean < median). Other common skew patterns are shown in Fig. 3.6.
Fig. 3.4 Histogram plot of the water improvement data
Fig. 3.7 Live visualization demonstrations using SOCR and Distributome resources
You can see the density plots of over 80 different probability distributions using the SOCR Java Distribution Calculators (http://socr.umich.edu/html/dist/) or the Distributome HTML5 Distribution Calculators (http://www.distributome.org/tools.html), Fig. 3.7.
If the data follows a uniform distribution, then all values are equally likely to
occur in any interval of a fixed width. The histogram for a uniformly distributed
dataset would have equal heights for each bin, see Fig. 3.9.
x <- rnorm(N, 0, 1)   # N is the sample size (defined earlier, e.g., N <- 1000)
hist(x, probability=T, col='lightblue', xlab=' ', ylab=' ', axes=F, main='Normal Distribution')
lines(density(x, bw=0.4), col='red', lwd=3)
Many, but not all, real-world processes behave as approximately normally distributed. A normal distribution has a higher frequency for middle values and lower frequency for more extreme values; it has a symmetric, bell-curved shape just like in Fig. 3.8. Many parametric statistical approaches assume normality of the data. In cases where this parametric assumption is violated, variable transformations or distribution-free tests may be more appropriate.
3.10 Measuring Spread: Variance and Standard Deviation
A distribution is a great way to characterize data using only a few parameters. For example, the normal distribution can be defined by only two parameters, center and spread, or statistically by the mean and standard deviation.
One way to estimate the mean is to divide the sum of the data values by the total number of values:
$$\bar{X} = \mu = \frac{1}{n}\sum_{i=1}^{n} x_i .$$
The variance is the average of the squared deviations from the mean, and the standard deviation is the square root of the variance:
$$Var(X) = \sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \mu)^2 .$$
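As a quick sanity check of these formulas against the built-in functions, here is a minimal sketch on a toy vector:
x <- c(40, 56, 99)
n <- length(x)
sum(x)/n; mean(x)                      # both return the sample mean, 65
sum((x - mean(x))^2)/(n - 1); var(x)   # both return the sample variance
sqrt(var(x)); sd(x)                    # standard deviation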
Since the water dataset is not normally distributed, we will use a new dataset, containing the demographics of baseball players, to illustrate properties of the normal distribution. The "01_data.txt" dataset has the following variables:
Fig. 3.10 Histogram plot of the players' weights, Major League Baseball (MLB) dataset
baseball <- read.table("https://umich.instructure.com/files/330381/download?download_frd=1", header=T)
hist(baseball$Weight, main = "Histogram for Baseball Player's Weight", xlab = "weight")
These plots allow us to visually inspect the normality of the players’ heights and
weights. We could also obtain mean and standard deviation of the weight and
height variables.
sd(baseball$Height)
## [1] 2.305818
A larger standard deviation, or variance, suggests the data are more spread out around the mean. Therefore, the weight variable is more spread out than the height variable.
Given the first two moments (mean and standard deviation), we can easily estimate how extreme a specific value is. Assuming a normal distribution, the values follow the 68-95-99.7 rule: 68% of the data lie within the interval [μ − σ, μ + σ], 95% of the data lie within [μ − 2σ, μ + 2σ], and 99.7% of the data lie within [μ − 3σ, μ + 3σ]. The following graph plotted in R illustrates the 68-95-99.7 rule (Fig. 3.12).
Applying the 68-95-99.7 rule to our baseball weight variable, we know that 68% of our players weighed between 180.7 pounds and 222.7 pounds, 95% of the players weighed between 159.7 pounds and 243.7 pounds, and 99.7% of the players weighed between 138.7 pounds and 264.7 pounds.
68-95-99.7 Rule
68%
95%
99.7 %
µ − 3σ µ − 2σ µ−σ µ µ+σ µ + 2σ µ + 3σ
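We can also check this rule empirically on the players' weights; a hedged sketch (the na.omit() guard is an assumption in case of missing weights, and the resulting proportions will be close to, but not exactly, 0.68 and 0.95):
w <- na.omit(baseball$Weight)
mean(abs(w - mean(w)) <= sd(w))     # proportion of players within one SD of the mean weight
mean(abs(w - mean(w)) <= 2*sd(w))   # proportion within two SDs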
Back to our water dataset, we can treat the year variable as a categorical rather
than a numeric variable. Since the year variable only has six distinctive values, it
is reasonable to treat it as a categorical feature, where each value is a category that
could apply to multiple WHO regions. Moreover, region and residence area
variables are also categorical.
Unlike numeric variables, categorical variables are better examined by tables than by summary statistics. A one-way table represents a single categorical variable and gives us the counts of the different categories. The table() function can create one-way tables for our water dataset:
water <- read.csv('https://umich.instructure.com/files/399172/download?download_frd=1', header=T)
table(water$Year)
## 
## 1990 1995 2000 2005 2010 2012 
##  520  561  570  570  556  554 
table(water$WHO.region)
## 
##                Africa              Americas Eastern Mediterranean 
##                   797                   613                   373 
##                Europe       South-East Asia       Western Pacific 
##                   910                   191                   447 
table(water$Residence.Area)
## 
## Rural Total Urban 
##  1095  1109  1127 
Given that we have a total of 3331 observations, the WHO region table tells us
that about 27% (910/3331) of the areas examined in the study are in Europe.
R can directly give us table proportions using the prop.table() function, and the proportion values can be converted into percentages.
year_table<-table(water$Year..string.)
prop.table(year_table)
##
## 1990 1995 2000 2005 2010 2012
## 0.1561093 0.1684179 0.1711198 0.1711198 0.1669168 0.1663164
year_pct<-prop.table(year_table)*100
round(year_pct, digits=1)
##
## 1990 1995 2000 2005 2010 2012
## 15.6 16.8 17.1 17.1 16.7 16.6
3.12 Exploring Relationships Between Variables
So far, the methods and statistics that we have seen are univariate. Sometimes, we
want to examine the relationship between two or multiple variables. For example,
did the percentage of the population that uses improved drinking-water sources
increase over time? To address such problems, we need to look at bivariate or
multivariate relationships.
Visualizing Relationships: scatterplots
Let’s look at a bivariate case first. A scatterplot is a good way to visualize
bivariate relationships. We have the x-axis and y-axis each representing one of the
variables. Each observation is illustrated on the graph by a dot. If the graph shows
a clear pattern, rather than a cluster of random dots, the two variables may be
correlated with each other.
In R, we can use the plot() function to create scatterplots. We have to define
the variables for the x and y-axes. The labels in the graph are editable (Fig. 3.13).
plot.window(c(400, 1000), c(500, 1000))
plot(x=water$year, y=water$improved_water,
     main="Scatterplot of Year vs. Improved_water",
     xlab="Year", ylab="Percent of Population Using Improved Water")
We can see from the scatterplot that there appears to be a pattern.
Examining Relationships: two-way cross-tabulations
Scatterplots are a useful tool for examining the relationship between two variables where at least one of them is numeric. When both variables are nominal, two-way cross-tabulations are a better choice for summarizing the relationship.
water$africa<-water$WHO.region=="Africa"
Let’s revisit the table() function to see how many WHO regions are in Africa.
table(water$africa)
## FALSE TRUE
## 2534 797
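The two-way cross-tabulation described next can be produced, for example, with the CrossTable() function from the gmodels package; this is a hedged sketch, and the choice of package is an assumption here:
# install.packages("gmodels")
library(gmodels)
# counts, chi-square contributions, and row/column/table proportions for each cell
CrossTable(water$africa, water$Residence.Area.Type..string.)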
Each cell in the resulting table contains five numbers. The first one, N, gives the count that falls into the corresponding category. The chi-square contribution quantifies how much that cell contributes to the Pearson's chi-squared statistic for testing independence between the two variables, which in turn is used to assess whether the differences in cell counts are likely due to chance alone.
The numbers of interest include ColTotal and RowTotal. In this case, these numbers represent the marginal distributions of residence area type among African regions and among the regions in the rest of the world. We can see that, for each type of residence area, the proportions are very close between African and non-African regions. Therefore, we can conclude that African WHO regions do not differ appreciably from the rest of the world in terms of residence area types.
…
##   Country..string.    Residence.Area.Type..string.
##   Argentina :  18     Rural:1095
##   (Other)   :3223     Total:1109
##                       Urban:1127
##   Population.using.improved.drinking.water.sources......numeric.
##   Min.   :  3.0
##   1st Qu.: 77.0
##   Median : 93.0
##   Mean   : 84.9
##   3rd Qu.: 99.0
##   Max.   :100.0
##   NA's   :32
##   Population.using.improved.sanitation.facilities......numeric.
##   Min.   :  0.00
##   1st Qu.: 42.00
##   Median : 81.00
##   Mean   : 68.87
##   3rd Qu.: 97.00
##   Max.   :100.00
##   NA's   :135
##    africa          Population.using.improved.sanitation
##   Mode :logical    Min.   :68.87
##   FALSE:2534       1st Qu.:68.87
##   TRUE :797        Median :68.87
##   NA's :0          Mean   :68.87
##                    3rd Qu.:68.87
##                    Max.   :68.87
##   Population.using.improved.drinking
##   Min.   :  3.0
##   1st Qu.: 77.0
##   Median : 93.0
##   Mean   : 84.9
##   3rd Qu.: 99.0
##   Max.   :100.0
A more sophisticated way of resolving missing data is to use a model (e.g., linear regression) to predict the missing feature and impute its missing values. This is called the predictive mean matching approach. This method works well for data with approximate multivariate normality. However, a disadvantage is that it can only predict one value at a time, which can be very time consuming. Also, the multivariate normality assumption might not be satisfied, and there may be important multivariate relations that are not accounted for. We will use the mi package to demonstrate predictive mean matching.
Let’s install the mi package first.
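A minimal sketch of the installation and loading step (assuming the package is not yet installed on the system):
# install.packages("mi")
library(mi)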
Then, we need to get the missing information matrix. We are using the
imputation method pmm (predictive mean matching approach) for both missing
variables.
mdf<-missing_data.frame(water)
head(mdf)
##   Year..string. WHO.region..string. Country..string.
## 1          1990              Africa          Algeria
## 2          1990              Africa           Angola
## 3          1990              Africa            Benin
## 4          1990              Africa         Botswana
## 5          1990              Africa     Burkina Faso
## 6          1990              Africa          Burundi
##   Residence.Area.Type..string.
## 1                        Rural
## 2                        Rural
## 3                        Rural
## 4                        Rural
## 5                        Rural
## 6                        Rural
##   Population.using.improved.drinking.water.sources......numeric.
## 1                                                              88
## 2                                                              42
## 3                                                              49
## 4                                                              86
## 5                                                              39
## 6                                                              67
##   Population.using.improved.sanitation.facilities......numeric. africa
## 1                                                             77   TRUE
## 2                                                              7   TRUE
## 3                                                              0   TRUE
## 4                                                             22   TRUE
## 5                                                              2   TRUE
## 6                                                             42   TRUE
##   missing_Population.using.improved.drinking.water.sources......numeric.
## 1                                                                   FALSE
## 2                                                                   FALSE
## 3                                                                   FALSE
## 4                                                                   FALSE
## 5                                                                   FALSE
## 6                                                                   FALSE
##   missing_Population.using.improved.sanitation.facilities......numeric.
## 1                                                                  FALSE
## 2                                                                  FALSE
## 3                                                                  FALSE
## 4                                                                  FALSE
## 5                                                                  FALSE
## 6                                                                  FALSE
show(mdf)
## Object of class missing_data.frame with 3331 observations on 7 variables
## 
## There are 3 missing data patterns
## 
## Append '@patterns' to this missing_data.frame to access the corresponding pattern for every observation or perhaps use table()
## 
##                                                                                  type
## Year..string.                                                              continuous
## WHO.region..string.                                             unordered-categorical
## Country..string.                                                unordered-categorical
## Residence.Area.Type..string.                                    unordered-categorical
## Population.using.improved.drinking.water.sources......numeric.             continuous
## Population.using.improved.sanitation.facilities......numeric.              continuous
## africa                                                                         binary
## 
##                                                                  missing
## Year..string.                                                          0
## WHO.region..string.                                                    0
## Country..string.                                                       0
## Residence.Area.Type..string.                                           0
## Population.using.improved.drinking.water.sources......numeric.        32
## Population.using.improved.sanitation.facilities......numeric.        135
## africa                                                                 0
## 
##                                                                   method
## Year..string.                                                       <NA>
…
## africa                                                              <NA>
mdf<-change(mdf, y="Population.using.improved.drinking", what =
"imputation_ method", to="pmm")
mdf<-change(mdf, y="Population.using.improved.sanitation", what =
"imputatio n_method", to="pmm")
Notes
• Converting the input data.frame to a missing_data.frame allows us to include in
the DF enhanced metadata about each variable, which is essential for the
subsequent modeling, interpretation, and imputation of the initial missing data.
• show() displays all missing variables and their class-labels (e.g., continuous),
along with meta-data. The missing_data.frame constructor suggests the most
appropriate classes for each missing variable; however, the user often needs to
correct, modify, or change these meta-data, using change().
• Use the change() function to change/correct the meta-data in the constructed missing_data.frame object when the defaults reported by show(mdf) are incorrect.
• To get a sense of the raw data, look at the summary, image, or hist of the
missing_data.frame.
• The mi vignettes provide many useful examples of handling missing data.
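For instance, a quick hedged sketch of these inspection steps on the constructed object:
summary(mdf)   # per-variable summaries, including missingness
image(mdf)     # image of the missing-data pattern
hist(mdf)      # histograms of the observed variables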
Next, we can perform the initial imputation. Here we imputed three times,
which will create three different datasets with slightly different imputed values.
imputations<-mi(mdf, n.iter=10, n.chains=3, verbose=T)
(Figure: image of the missing_data.frame, showing the standardized variables by observation number, clustered by missingness.)
set.seed(123)
# create MCAR missing-data generator
create.missing <- function (data, pct.mis = 10) {
  n <- nrow(data)
  J <- ncol(data)
  if (length(pct.mis) == 1) {
    if (pct.mis >= 0 & pct.mis <= 100) {
      n.mis <- rep((n * (pct.mis/100)), J)
    } else {
      warning("Percent missing values should be an integer between 0 and 100! Exiting")
      break
    }
  } else {
    if (length(pct.mis) < J) stop("The length of the missing-vector is not equal to the number of columns in the data! Exiting!")
    n.mis <- n * (pct.mis/100)
  }
##           y       x1       x2 x3   x4   x5 x6         x7   x8        x9 x10
## 1        NA       NA 0.000000  0    1    h  8         NA    3        NA   1
## 2 11.449223       NA 5.236938  0    1    i NA         NA   10 0.2639489   2
## 3 -1.188296 0.000000 0.000000  0    5    a  3 -1.1469495 <NA> 0.4753195  NA
## 4        NA       NA       NA  0 <NA>    e  6  1.4810186   10 0.6696932   3
## 5  4.267916 3.490833 0.000000  0 <NA> <NA> NA  0.9161912 <NA> 0.9578455   8
## 6        NA 0.000000 4.384732  1 <NA>    a NA         NA   10 0.6095176   6
##        y                x1              x2              x3        
##  Min.   :-3.846   Min.   :0.000   Min.   :0.000   Min.   :0.0000  
##  1st Qu.: 2.410   1st Qu.:0.000   1st Qu.:0.000   1st Qu.:0.0000  
##  Median : 5.646   Median :0.000   Median :3.068   Median :0.0000  
##  Mean   : 5.560   Mean   :2.473   Mean   :2.545   Mean   :0.4443  
##  3rd Qu.: 8.503   3rd Qu.:4.958   3rd Qu.:4.969   3rd Qu.:1.0000  
##  Max.   :16.487   Max.   :8.390   Max.   :8.421   Max.   :1.0000  
##  NA's   :300      NA's   :300     NA's   :300     NA's   :300     
##    x4            x5            x6             x7               x8     
##  1   :138   c      : 80   Min.   :1.00   Min.   :-2.5689   3      : 78  
##  2   :129   h      : 76   1st Qu.:3.00   1st Qu.:-0.6099   7      : 77  
##  3   :147   b      : 74   Median :5.00   Median : 0.0202   5      : 75  
##  4   :144   a      : 73   Mean   :4.93   Mean   : 0.0435   4      : 73  
##  5   :142   j      : 72   3rd Qu.:7.00   3rd Qu.: 0.7519   1      : 70  
##  NA's:300   (Other):325   Max.   :9.00   Max.   : 3.7157   (Other):327  
##             NA's   :300   NA's   :300    NA's   :300       NA's   :300  
##        x9              x10        
##  Min.   :0.1001   Min.   : 0.000  
##  1st Qu.:0.3206   1st Qu.: 2.000  
##  Median :0.5312   Median : 4.000  
##  Mean   :0.5416   Mean   : 3.929  
##  3rd Qu.:0.7772   3rd Qu.: 5.000  
##  Max.   :0.9895   Max.   :11.000  
##  NA's   :300      NA's   :300     
# install.packages("mi")
# install.packages("betareg")
library("betareg"); library("mi")
Fig. 3.16 Imputation chain 1: Histogram plots comparing the initially observed (blue), imputed (red), and imputed complete (gray) data
Fig. 3.18 Imputation chain 3: Histogram plots comparing the initially observed (blue), imputed (red), and imputed complete (gray) data
# Extracts several multiply imputed data.frames from the "imputations" object
data.frames <- complete(imputations, 3)
##        y                x1              x2              x3        
##  Min.   :-3.846   Min.   :0.000   Min.   :0.000   Min.   :0.0000  
##  1st Qu.: 2.410   1st Qu.:0.000   1st Qu.:0.000   1st Qu.:0.0000  
##  Median : 5.646   Median :0.000   Median :3.068   Median :0.0000  
##  Mean   : 5.560   Mean   :2.473   Mean   :2.545   Mean   :0.4443  
##  3rd Qu.: 8.503   3rd Qu.:4.958   3rd Qu.:4.969   3rd Qu.:1.0000  
##  Max.   :16.487   Max.   :8.390   Max.   :8.421   Max.   :1.0000  
##  NA's   :300      NA's   :300     NA's   :300     
…
##  missing_x10    
##  Mode :logical  
##  FALSE:700      
##  TRUE :300      
##  NA's :0        
lapply(data.frames, summary)
## $`chain:1`
## y x1 x2 x3 x4
## Min. :-6.852 Min. :-3.697 Min. :-4.920 0:545 1:203
## 1st Qu.: 2.475 1st Qu.: 0.000 1st Qu.: 0.000 1:455 2:189
## Median : 5.470 Median : 2.510 Median : 1.801 3:201
## Mean : 5.458 Mean : 2.556 Mean : 2.314 4:202
## 3rd Qu.: 8.355 3rd Qu.: 4.892 3rd Qu.: 4.777 5:205
## Max. :16.487 Max. :10.543 Max. : 8.864
##
…
## missing_x10
## Mode :logical
## FALSE:700
## TRUE :300
## NA's :0
##
## $`chain:2`
## y x1 x2 x3 x4
## Min. :-4.724 Min. :-4.744 Min. :-5.740 0:558 1:211
## 1st Qu.: 2.587 1st Qu.: 0.000 1st Qu.: 0.000 1:442 2:193
## Median : 5.669 Median : 2.282 Median : 2.135 3:211
## Mean : 5.528 Mean : 2.486 Mean : 2.452 4:187
## 3rd Qu.: 8.367 3rd Qu.: 4.884 3rd Qu.: 4.782 5:198
## Max. :17.054 Max. :10.445 Max. :10.932
…
## $`chain:3`
## y x1 x2 x3 x4
## Min. :-5.132 Min. :-8.769 Min. :-3.643 0:538 1:200
## 1st Qu.: 2.414 1st Qu.: 0.000 1st Qu.: 0.000 1:462 2:182
## Median : 5.632 Median : 2.034 Median : 2.610 3:215
## Mean : 5.537 Mean : 2.417 Mean : 2.530 4:211
## 3rd Qu.: 8.434 3rd Qu.: 4.836 3rd Qu.: 4.812 5:192
## Max. :16.945 Max. :10.335 Max. :11.683
…
## missing_x10
## Mode :logical
## FALSE:700
## TRUE :300
## NA's :0
Fig. 3.20 Comparison of the missingness patterns in the raw (top) and imputed (bottom) datasets
round(mipply(imputations, mean, to.matrix = TRUE), 3)
##             chain:1 chain:2 chain:3
## y            -0.013  -0.004  -0.003
## x1            0.016   0.003  -0.011
## x2           -0.045  -0.018  -0.003
## x3            1.455   1.442   1.462
## x4            3.017   2.968   3.013
## x5            5.321   5.406   5.480
## x6            0.023   0.004   0.005
## x7           -0.015  -0.005  -0.006
## x8            5.431   5.409   5.202
## x9            0.548   0.536   0.541
## x10          -0.015  -0.020  -0.009
## missing_y     0.300   0.300   0.300
## missing_x1    0.300   0.300   0.300
## missing_x2    0.300   0.300   0.300
## missing_x3    0.300   0.300   0.300
## missing_x4    0.300   0.300   0.300
## missing_x5    0.300   0.300   0.300
## missing_x6    0.300   0.300   0.300
## missing_x7    0.300   0.300   0.300
## missing_x8    0.300   0.300   0.300
## missing_x9    0.300   0.300   0.300
## missing_x10   0.300   0.300   0.300
Rhats(imputations, statistic = "moments")   # assess the convergence of the MI algorithm
##    mean_y   mean_x1   mean_x2   mean_x3   mean_x4   mean_x5   mean_x6 
## 1.0235026 1.1125720 1.1565542 0.9460979 1.0543446 1.3207898 0.9855947 
##   mean_x7   mean_x8   mean_x9  mean_x10      sd_y     sd_x1     sd_x2 
## 1.0023935 0.9438358 1.0192697 0.9927675 0.9658852 1.6248062 1.0025950 
##     sd_x3     sd_x4     sd_x5     sd_x6     sd_x7     sd_x8     sd_x9 
## 0.9463044 1.0706666 1.4470270 1.2510790 0.9008732 1.2865944 1.0195947 
##    sd_x10 
## 1.1760195 
plot(imputations); hist(imputations); image(imputations); summary(imputations)
## $y
## $y$is_missing
## missing
## FALSE  TRUE 
##   700   300 
## 
## $y$imputed
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -1.55100 -0.36930 -0.01107 -0.02191  0.30080  1.43600 
## 
## $y$observed
##     Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
## -1.17500 -0.39350  0.01069  0.00000  0.36770  1.36500 
## 
## $x1
## $x1$is_missing
## missing
## FALSE  TRUE 
##   700   300 
## 
## $x1$imputed
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -2.168000 -0.353600 -0.023620  0.008851  0.379800  1.556000 
## 
## $x1$observed
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## -0.4768 -0.4768 -0.4768  0.0000  0.4793  1.1410 
…
## $x10$observed
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -1.01800 -0.49980 0.01851 0.00000 0.27760 1.83200
Finally, we pool over the m = 3 completed datasets when we fit the "model". In order to estimate a linear regression model, we pool the estimates across the three chains. Figure 3.21 shows density plots associated with a simple bivariate linear model (y ~ x1 + x2).
Fig. 3.21 Density plots comparing the observed and imputed outcome variable y
model_results <- pool(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10, data=imputations, m=3)
display(model_results); summary(model_results)
## bayesglm(formula = y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 +
x8 + ## x9 + x10, data = imputations, m = 3)
## coef.est coef.se
## (Intercept) 0.77 0.84
## x1 0.94 0.05
## x2 0.97 0.04
## x31 -0.27 0.37
## x4.L 0.21 0.21
## x4.Q -0.09 0.16
## x4.C 0.03 0.24
## x4^4 0.25 0.20
## x5b 0.03 0.42
## x5c -0.41 0.26
## x5d -0.22 0.86
## x5e 0.11 0.56
## x5f -0.13 0.55
## x5g -0.27 0.67
## x5h -0.17 0.66
## x5i -0.69 0.81
## x5j 0.21 0.28
## x6 -0.04 0.07
## x7 0.98 0.09
## x82 0.44 0.39
## x83 0.40 0.20
## x84 -0.14 0.62
## x85 0.20 0.30
## x86 0.19 0.25
## x87 0.19 0.38
## x88 0.51 0.34
## x89 0.25 0.26
## x810 0.17 0.48
## x9 0.88 0.71
## x10 -0.06 0.05
## n = 970, k = 30
## residual deviance = 2056.5, null deviance = 15851.5 (difference = 13795.0)
## overdispersion parameter = 2.1
## residual sd is sqrt(overdispersion) = 1.46
## Call:
## pool(formula = y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 +
## x10, data = imputations, m = 3)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max  
## -2.8821  -0.6925  -0.0005   0.6859   3.7035  
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.76906    0.83558   0.920 0.440149    
## x1           0.94250    0.04535  20.781 0.000388 ***
## x2           0.97495    0.03517  27.721 2.01e-05 ***
## x31         -0.27349    0.37377  -0.732 0.533696    
## x4.L         0.21116    0.21051   1.003 0.378488    
## x4.Q        -0.08567    0.15627  -0.548 0.602349    
## x4.C         0.02957    0.24490   0.121 0.911557    
…
##  missing_x10    
##  Mode :logical  
##  FALSE:700      
##  TRUE :300      
##  NA's :0        
## 
## $`chain:3`
##        y                x1               x2          x3      x4    
##  Min.   :-5.132   Min.   :-8.769   Min.   :-3.643   0:538   1:200  
##  1st Qu.: 2.414   1st Qu.: 0.000   1st Qu.: 0.000   1:462   2:182  
##  Median : 5.632   Median : 2.034   Median : 2.610           3:215  
##  Mean   : 5.537   Mean   : 2.417   Mean   : 2.530           4:211  
##  3rd Qu.: 8.434   3rd Qu.: 4.836   3rd Qu.: 4.812           5:192  
##  Max.   :16.945   Max.   :10.335   Max.   :11.683                  
## 
##        x5            x6               x7                x8     
##  b      :123   Min.   :-2.223   Min.   :-2.76469   2      :139  
##  j      :115   1st Qu.: 3.000   1st Qu.:-0.64886   5      :111  
##  c      :111   Median : 5.000   Median : 0.03266   1      :110  
##  h      :103   Mean   : 4.957   Mean   : 0.03220   3      :109  
##  i      :103   3rd Qu.: 7.000   3rd Qu.: 0.71341   7      :106  
##  a      :100   Max.   :11.785   Max.   : 3.71572   9      :100  
##  (Other):345                                       (Other):325  
## 
##        x9                 x10          missing_y       missing_x1     
##  Min.   :0.007236   Min.   :-1.522   Mode :logical   Mode :logical    
##  1st Qu.:0.320579   1st Qu.: 2.224   FALSE:700       FALSE:700        
##  Median :0.531962   Median : 4.000   TRUE :300       TRUE :300        
##  Mean   :0.541147   Mean   : 3.894   NA's :0         NA's :0          
##  3rd Qu.:0.772802   3rd Qu.: 5.000                                    
##  Max.   :0.992118   Max.   :11.000                                    
## 
##  missing_x2      missing_x3      missing_x4      missing_x5     
##  Mode :logical   Mode :logical   Mode :logical   Mode :logical  
##  FALSE:700       FALSE:700       FALSE:700       FALSE:700      
##  TRUE :300       TRUE :300       TRUE :300       TRUE :300      
##  NA's :0         NA's :0         NA's :0         NA's :0        
## 
##  missing_x6      missing_x7      missing_x8      missing_x9     
##  Mode :logical   Mode :logical   Mode :logical   Mode :logical  
##  FALSE:700       FALSE:700       FALSE:700       FALSE:700      
##  TRUE :300       TRUE :300       TRUE :300       TRUE :300      
##  NA's :0         NA's :0         NA's :0         NA's :0        
## 
##  missing_x10    
##  Mode :logical  
##  FALSE:700      
##  TRUE :300      
##  NA's :0        
library("lattice")
densityplot(y ~ x1 + x2, data=imputations)
This plot, Fig. 3.21, allows us to compare the density of the observed data and the imputed data; these should be similar (though not identical) under MAR assumptions.
3.13.2 TBI Data Example
Next, we will see an example using the traumatic brain injury (TBI) dataset.
# Load the (raw) data from the table into a plain text file "08_EpiBioSData_Incomplete.csv"
TBI_Data <- read.csv("https://umich.instructure.com/files/720782/download?download_frd=1", na.strings=c("", ".", "NA"))
summary(TBI_Data)
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
##    field.gcs        er.gcs          icu.gcs         worst.gcs   
##  Min.   : 3     Min.   : 3.000   Min.   : 0.000   Min.   : 0.0  
##  1st Qu.: 3     1st Qu.: 4.000   1st Qu.: 3.000   1st Qu.: 3.0  
##  Median : 7     Median : 7.500   Median : 6.000   Median : 3.0  
##  Mean   : 8     Mean   : 8.182   Mean   : 6.378   Mean   : 5.4  
##  3rd Qu.:12     3rd Qu.:12.250   3rd Qu.: 8.000   3rd Qu.: 7.0  
##  Max.   :15     Max.   :15.000   Max.   :14.000   Max.   :14.0  
##  NA's   :2      NA's   :2        NA's   :1        NA's   :1     
##     X6m.gose       X2013.gose       skull.fx       temp.injury   
##  Min.   :2.000   Min.   :2.000   Min.   :0.0000   Min.   :0.000  
##  1st Qu.:3.000   1st Qu.:5.000   1st Qu.:0.0000   1st Qu.:0.000  
##  Median :5.000   Median :7.000   Median :1.0000   Median :1.000  
##  Mean   :4.805   Mean   :5.804   Mean   :0.6087   Mean   :0.587  
##  3rd Qu.:6.000   3rd Qu.:7.000   3rd Qu.:1.0000   3rd Qu.:1.000  
##  Max.   :8.000   Max.   :8.000   Max.   :1.0000   Max.   :1.000  
##  NA's   :5                                                       
##     surgery         spikes.hr           min.hr           max.hr       
##  Min.   :0.0000   Min.   :  1.280   Min.   : 0.000   Min.   :  12.00  
##  1st Qu.:0.0000   1st Qu.:  5.357   1st Qu.: 0.000   1st Qu.:  35.25  
##  Median :1.0000   Median : 18.170   Median : 0.000   Median :  97.50  
##  Mean   :0.6304   Mean   : 52.872   Mean   : 3.571   Mean   : 241.89  
##  3rd Qu.:1.0000   3rd Qu.: 57.227   3rd Qu.: 0.000   3rd Qu.: 312.75  
##  Max.   :1.0000   Max.   :294.000   Max.   :42.000   Max.   :1199.00  
##                   NA's   :18        NA's   :18       NA's   :18       
##     acute.sz         late.sz          ever.sz     
##  Min.   :0.0000   Min.   :0.0000   Min.   :0.000  
##  1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:0.000  
##  Median :0.0000   Median :1.0000   Median :1.000  
##  Mean   :0.1739   Mean   :0.5652   Mean   :0.587  
##  3rd Qu.:0.0000   3rd Qu.:1.0000   3rd Qu.:1.000  
##  Max.   :1.0000   Max.   :1.0000   Max.   :1.000  
1. Convert to a missing_data.frame (Fig. 3.22).

Fig. 3.22 Missing data pattern for the TBI case-study
Fig. 3.23 Validation plots for the original, imputed and complete TBI datasets
6. Report a list of "summaries" for each element (imputation instance).
lapply(data.frames, summary)
## $`chain:1`
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
##    field.gcs          er.gcs          icu.gcs         worst.gcs     
##  Min.   :-3.424   Min.   : 3.000   Min.   : 0.000   Min.   : 0.000  
##  1st Qu.: 3.000   1st Qu.: 4.250   1st Qu.: 3.000   1st Qu.: 3.000  
##  Median : 6.500   Median : 8.000   Median : 6.000   Median : 3.000  
##  Mean   : 7.593   Mean   : 8.442   Mean   : 6.285   Mean   : 5.494  
##  3rd Qu.:12.000   3rd Qu.:13.000   3rd Qu.: 7.750   3rd Qu.: 7.750  
##  Max.   :15.000   Max.   :15.000   Max.   :14.000   Max.   :14.000  
## 
##     X6m.gose       X2013.gose    skull.fx temp.injury surgery
##  Min.   :2.000   Min.   :2.000   0:18     0:19        0:17   
##  1st Qu.:3.000   1st Qu.:5.000   1:28     1:27        1:29   
##  Median :5.000   Median :7.000                               
##  Mean   :5.031   Mean   :5.804                               
##  3rd Qu.:6.815   3rd Qu.:7.000                               
##  Max.   :8.169   Max.   :8.000                               
8. Save the results out.
write.csv(data.frames[[5]], "C:\\Users\\User\\Desktop\\TBI_MIData.csv")
9. Complete-data analytics functions.
# library("mi")
# lm.mi(); glm.mi(); polr.mi(); bayesglm.mi(); bayespolr.mi(); lmer.mi(); glmer.mi()
10. Fit a linear model for one multiply imputed chain.
# Also see Step (9)
# linear regression for each imputed data set - 5 regression models are fit
fit_lm1 <- glm(ever.sz ~ surgery + worst.gcs + factor(sex) + age, data.frames$`chain:1`, family = "binomial")
summary(fit_lm1); display(fit_lm1)
## 
## Call:
## glm(formula = ever.sz ~ surgery + worst.gcs + factor(sex) + age,
##     family = "binomial", data = data.frames$`chain:1`)
## 
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max  
## -1.7000  -1.2166   0.8222   1.0007   1.3871  
## 
## Coefficients:
##                  Estimate Std. Error z value Pr(>|z|)
## (Intercept)      0.249780   1.356397   0.184    0.854
## surgery1         0.947392   0.685196   1.383    0.167
## worst.gcs       -0.068734   0.097962  -0.702    0.483
## factor(sex)Male -0.329313   0.842761  -0.391    0.696
## age              0.004453   0.019431   0.229    0.819
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 62.371  on 45  degrees of freedom
## Residual deviance: 60.046  on 41  degrees of freedom
## AIC: 70.046
## 
## Number of Fisher Scoring iterations: 4
## glm(formula = ever.sz ~ surgery + worst.gcs + factor(sex) + age,
##     family = "binomial", data = data.frames$`chain:1`)
##                 coef.est coef.se
## (Intercept)      0.25     1.36  
## surgery1         0.95     0.69  
## worst.gcs       -0.07     0.10  
## factor(sex)Male -0.33     0.84  
## age              0.00     0.02  
## ---
## n = 46, k = 5
## residual deviance = 60.0, null deviance = 62.4 (difference = 2.3)
11. Fit the appropriate model and pool the results.
# (estimates over MI chains)
model_results <- pool(ever.sz ~ surgery + worst.gcs + factor(sex) + age, family = "binomial", data=imputations, m=5)
display(model_results); summary(model_results)
## $`chain:2`
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
…
##  missing_max.hr 
##  Mode :logical  
##  FALSE:28       
##  TRUE :18       
##  NA's :0        
## 
## $`chain:3`
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
…
##  missing_max.hr 
##  Mode :logical  
##  FALSE:28       
##  TRUE :18       
##  NA's :0        
13. Validation:
Next, we can verify whether enough iterations were conducted. One validation criterion requires that the mean of each completed (imputed) variable be similar to the corresponding mean of the complete data (Fig. 3.24).
Fig. 3.25 Comparison of the missing data patterns in the original (top) and the completed
(bottom) TBI sets
## Coefficients:
##                  Estimate Std. Error z value Pr(>|z|)
## (Intercept)      0.578917   1.348831   0.429    0.668
## surgery1         0.990656   0.662991   1.494    0.135
## worst.gcs       -0.105240   0.095335  -1.104    0.270
## factor(sex)Male -0.357285   0.772307  -0.463    0.644
## age              0.000198   0.019702   0.010    0.992
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 62.371  on 45  degrees of freedom
## Residual deviance: 58.995  on 41  degrees of freedom
## AIC: 68.995
## 
## Number of Fisher Scoring iterations: 7
## [[2]]
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
…
##  missing_max.hr 
##  Mode :logical  
##  FALSE:28       
##  TRUE :18       
##  NA's :0        
## 
## [[3]]
##        id             age            sex           mechanism 
##  Min.   : 1.00   Min.   :16.00   Female: 9   Bike_vs_Auto: 4  
##  1st Qu.:12.25   1st Qu.:23.00   Male  :37   Blunt       : 4  
##  Median :23.50   Median :33.00               Fall        :13  
##  Mean   :23.50   Mean   :36.89               GSW         : 2  
##  3rd Qu.:34.75   3rd Qu.:47.25               MCA         : 7  
##  Max.   :46.00   Max.   :83.00               MVA         :10  
##                                              Peds_vs_Auto: 6  
…
##  missing_max.hr 
##  Mode :logical  
##  FALSE:28       
##  TRUE :18       
##  NA's :0        
3.13.3 Imputation via Expectation-Maximization
Below we present the theory and practice of one specific statistical computing
strategy for imputing incomplete datasets.
Recall that we have the following three distinct types of incomplete data.
• MCAR: Data which is Missing Completely At Random has nothing systematic
about which observations are missing. There is no relationship between
missingness and either observed or unobserved covariates.
• MAR: Missing At Random is weaker than MCAR. The missingness is still
random, but solely due to the observed variables. For example, those from a
lower socioeconomic status (SES) may be less willing to provide salary
information (but we know their SES). The key is that the missingness is not due
to the values which are not observed. MCAR implies MAR, but not vice-versa.
• MNAR: If the data are Missing Not At Random, then the missingness depends on the values of the missing data themselves. Examples include censored data, self-reported weight data (heavier individuals may be less likely to report their weight), and a response-measuring device that can only record values above 0.5, so that anything below that threshold is missing.
Most of the time, this equation cannot be solved directly, e.g., when Y is missing.
• Expectation step (E-step): compute the expected value of the log-likelihood function with respect to the conditional distribution of Z given X, using the parameter estimates from the previous iteration (or the initial values, for the first iteration), θ(t):
$$Q\left(\theta \mid \theta^{(t)}\right) = E_{Z|X,\theta^{(t)}}\left[\log L(\theta; X, Z)\right].$$
EM-Based Imputation
• E-step (Expectation): get the expectations of Y and YY^T based on the observed data.
• M-step (Maximization): maximize the conditional expectation from the E-step to estimate the parameters.
Details: If o = obs and m = mis stand for the observed and missing parts, the mean vector and the variance-covariance matrix are represented by:
$$\mu^{(t)} = \left(\mu_{obs}, \mu_{mis}\right)^T, \qquad \Sigma^{(t)} = \begin{pmatrix} \Sigma_{oo} & \Sigma_{om} \\ \Sigma_{mo} & \Sigma_{mm} \end{pmatrix}.$$
E-step:
$$E(Z|X) = \begin{pmatrix} X \\ E(Y|X) \end{pmatrix}, \qquad E\left(ZZ^T|X\right) = \begin{pmatrix} XX^T & X\,E(Y|X)^T \\ E(Y|X)\,X^T & E\left(YY^T|X\right) \end{pmatrix}.$$
M-step: maximize this conditional expectation to update the mean vector and covariance matrix, as in the code fragment below (part of a larger EM imputation function).
      # Expectation Step
      # $$E(Y|X) = \mu_{mis} + \Sigma_{mo}\Sigma_{oo}^{-1}(X - \mu_{obs})$$
      new.impute[i, pick.miss] <- mean.vec[pick.miss] +
        sigma[pick.miss, !pick.miss] %*% inv.S %*%
        (t(new.impute[i, !pick.miss]) - t(t(mean.vec[!pick.miss])))
    }
  }
  # Maximization Step
  # Compute the complete Sigma and the complete vector of feature (column) means
  # $$\Sigma^{(t+1)} = \frac{1}{n}\sum_{i=1}^n E(ZZ^T|X) - \mu^{(t+1)}{\mu^{(t+1)}}^T$$
  sigma <- var((new.impute))
  # $$\mu^{(t+1)} = \frac{1}{n}\sum_{i=1}^n E(Z|X)$$
  mean.vec <- as.matrix(apply(new.impute, 2, mean))
Fig. 3.26 Four scatterplots for pairs of features illustrating the complete data (small black points), the imputed data points (larger pink points), and 2D Gaussian kernels
plot.me <- function(index1, index2) {
  plot.imputed <- sim_data.imputed[row.names(
    subset(sim_data.df, is.na(sim_data.df[, index1]) | is.na(sim_data.df[, index2]))), ]
  p <- ggplot(sim_data.imputed, aes_string(paste0("X", index1), paste0("X", index2))) +
    geom_point(alpha = 0.5, size = 0.7) + theme_bw() +
    stat_ellipse(type = "norm", color = "#000099", alpha = 0.5) +
    geom_point(data = plot.imputed, aes_string(paste0("X", index1), paste0("X", (index2))),
               size = 1.5, color = "Magenta", alpha = 0.8)
  p
}
Comparison
Let's use the amelia() function to impute the original data sim_data.df and compare the results to the simpler manual EM_algorithm imputation defined above.
# install.packages("Amelia")
library(Amelia)
dim(sim_data.df)
## [1] 200  20
amelia.out <- amelia(sim_data.df, m = 5)
## -- Imputation 1 --
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## -- Imputation 2 --
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
## -- Imputation 3 --
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## -- Imputation 4 --
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## 21 22 23 24
## -- Imputation 5 --
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
amelia.out
##
## Amelia output with 5 imputed datasets.
## Return code: 1
## Message: Normal EM convergence.
##
## Chain
Lengths: ##
--------------
## Imputation 1: 20
## Imputation 2: 15
## Imputation 3: 16
## Imputation 4: 24 ## Imputation 5: 17
amelia.imputed.5 <- amelia.out$imputations[[5]]
Fig. 3.27 Scatter plot of the second and fourth features. Magenta-circles and Orange-squares
represent the manual imputation via EM_algorithm and the automated Amelia-based imputation
Density Plots
In this section, we will utilize the Earthquakes dataset on the SOCR website. It
stores information about earthquakes of magnitudes larger than 5 on the Richter
scale that were recorded between 1969 and 2007. Here is how we download and
parse the data on the source webpage and ingest the information into R:
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body-primary" role="main">\n\t<a
id="top ...
Fig. 3.29 Density plots of the original, manually-imputed and Amelia-imputed datasets, 10
features only
We can see that the plotting script consists of two parts. The first part, ggplot(earthquake, aes(Longitude, Latitude, group = Magt, color = Magt)), specifies the setting of the plot: dataset, grouping, and color. The second part specifies that we are going to draw points for all data points. In later chapters, we will frequently use ggplot2, whose generic structure always involves concatenating function calls, e.g., function1 + function2.
We can visualize the distribution of different variables using density plots. The following chunk of code plots the distribution of Latitude for the different magnitude types, using the ggplot() function combined with geom_density() (Fig. 3.31); see the sketch below.
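The plotting code is not reproduced here; a minimal sketch consistent with the description above and with Fig. 3.31 (the exact aesthetics are assumptions):
library(ggplot2)
# scatter plot of quake locations, colored by magnitude type
ggplot(earthquake, aes(Longitude, Latitude, group = Magt, color = Magt)) + geom_point()
# density of Latitude for each magnitude type (Fig. 3.31)
ggplot(earthquake, aes(Latitude, group = Magt, color = Magt)) + geom_density()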
We can also compute and display 2D kernel density estimates and 3D surface plots. Plotting 2D kernel density and 3D surface plots is very important and useful in multivariate exploratory data analytics. We will use the plot_ly() function from the plotly package, which takes data frame inputs. To create a surface plot, we use two vectors, x and y, of lengths m and n, respectively, together with an m × n matrix z that stores the estimated density heights over the x-y grid.
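A hedged sketch of one way to build such a surface for the earthquake locations (the use of MASS::kde2d() and the grid size are assumptions of this sketch):
library(plotly)
kde_quakes <- MASS::kde2d(earthquake$Longitude, earthquake$Latitude, n = 100)
# transpose z so that its rows index y, matching plotly's surface convention
plot_ly(x = kde_quakes$x, y = kde_quakes$y, z = t(kde_quakes$z), type = "surface")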
Fig. 3.31 Modified Earthquake density plot (y) of magnitude type against latitude coordinates
library(plotly)
Note that we used the option type = "surface"; you can experiment with other values of the type option.
Alternatively, one can plot 1D, 2D, or 3D plots (Fig. 3.32):
plot_ly(x = ~earthquake$Longitude)
## No trace type specified:
##   Based on info supplied, a 'histogram' trace seems appropriate.
##   Read more about this trace type -> https://plot.ly/r/reference/#histogram
plot_ly(x = ~earthquake$Longitude, y = ~earthquake$Latitude)
plot_ly(x = ~earthquake$Longitude, y = ~earthquake$Latitude, z = ~earthquake$Mag)
View(as.matrix(matrix_EarthQuakes))
Fig. 3.33 Live demo of 3D kernel density surface plots using the Earthquake and 2D brain
imaging data (http://www.socr.umich.edu/people/dinov/courses/DSPA_notes/02_ManagingData.
html)
You can see the interactive surface plot generated by plotly in the live demo shown in Fig. 3.33.
Fig. 3.34 Validation that cohort rebalancing does not substantially alter the distributions of
features. This QQ plot of one variable shows the linearity of the quantiles of the initial (x) and
rebalanced (y) data
Fig. 3.35 Scatter plot of the raw (x) and corrected/adjusted (y) p-values corresponding to the
paired two-sample Wilcoxon non-parametric test comparing the raw and rebalanced features
# update.packages()
# load the data: 06_PPMI_ClassificationValidationData_Short.csv
ppmi_data <- read.csv("https://umich.instructure.com/files/330400/download?download_frd=1",
                      header=TRUE)
table(ppmi_data$ResearchGroup)
# binarize the Dx classes
ppmi_data$ResearchGroup <- ifelse(ppmi_data$ResearchGroup == "Control", "Control", "Patient")
attach(ppmi_data)
head(ppmi_data)
3.15 Cohort-Rebalancing (for Imbalanced Groups)
# balance cases
# SMOTE: Synthetic Minority Oversampling Technique to handle class imbalance in binary classification
set.seed(1000)
# install.packages("unbalanced")   # to deal with unbalanced group data
require(unbalanced)
ppmi_data$PD <- ifelse(ppmi_data$ResearchGroup=="Control", 1, 0)
uniqueID <- unique(ppmi_data$FID_IID)
ppmi_data <- ppmi_data[ppmi_data$VisitID==1, ]
ppmi_data$PD <- factor(ppmi_data$PD)
colnames(ppmi_data)
# ppmi_data.1 <- ppmi_data[, c(3:281, 284, 287, 336:340, 341)]
n <- ncol(ppmi_data)
output.1 <- ppmi_data$PD
nrow(data.1$X); ncol(data.1$X)
nrow(balancedData); ncol(balancedData)
nrow(input); ncol(input)
colnames(balancedData) <- c(colnames(input), "PD")
3.16 Appendix
We can also import SQL databases into R. First, we need to install and load the RODBC (R Open Database Connectivity) package. Then, we can open a connection to the SQL server database with a Data Source Name (DSN), via Microsoft Access. More details are provided online.
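A minimal sketch of this workflow (the DSN name, credentials, table, and query below are placeholders, not values from the text):
# install.packages("RODBC")
library(RODBC)
con <- odbcConnect("my_DSN", uid = "user", pwd = "password")   # hypothetical DSN and credentials
sqlTables(con)                                                 # list the available tables
results <- sqlQuery(con, "SELECT * FROM some_table")           # hypothetical query
odbcClose(con)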
3.16.2 R Code Fragments
Below are some code snippets used to generate some of the graphs shown in this
Chapter.
# Right Skewed
N <- 10000
x <- rnbinom(N, 10, .5)
hist(x, xlim=c(min(x), max(x)), probability=T, nclass=max(x)-min(x)+1,
     col='lightblue', xlab=' ', ylab=' ', axes=F, main='Right Skewed')
lines(density(x, bw=1), col='red', lwd=3)

# No Skew
N <- 10000
x <- rnorm(N, 0, 1)
hist(x, probability=T, col='lightblue', xlab=' ', ylab=' ', axes=F, main='No Skew')
lines(density(x, bw=0.4), col='red', lwd=3)

# Uniform density
x <- runif(1000, 1, 50)
hist(x, col='lightblue', main="Uniform Distribution", probability = T,
     xlab="", ylab="Density", axes=F)
abline(h=0.02, col='red', lwd=3)

# 68-95-99.7 rule
x <- rnorm(N, 0, 1)
hist(x, probability=T, col='lightblue', xlab=' ', ylab=' ', axes=F, main='68-95-99.7 Rule')
lines(density(x, bw=0.4), col='red', lwd=3)
axis(1, at=c(-3, -2, -1, 0, 1, 2, 3),
     labels = expression(mu-3*sigma, mu-2*sigma, mu-sigma, mu, mu+sigma, mu+2*sigma, mu+3*sigma))
abline(v=-1, lwd=3, lty=2); abline(v=1, lwd=3, lty=2)
abline(v=-2, lwd=3, lty=2); abline(v=2, lwd=3, lty=2)
abline(v=-3, lwd=3, lty=2); abline(v=3, lwd=3, lty=2)
text(0, 0.2, "68%")
segments(-1, 0.2, -0.3, 0.2, col = 'red', lwd=2)
segments(1, 0.2, 0.3, 0.2, col = 'red', lwd=2)
text(0, 0.15, "95%")
segments(-2, 0.15, -0.3, 0.15, col = 'red', lwd=2)
segments(2, 0.15, 0.3, 0.15, col = 'red', lwd=2)
text(0, 0.1, "99.7%")
segments(-3, 0.1, -0.3, 0.1, col = 'red', lwd=2)
segments(3, 0.1, 0.3, 0.1, col = 'red', lwd=2)
3.17 Assignments: 3. Managing Data in R
Load the following two datasets, generate summary statistics for all variables, plot
some of the features (e.g., histograms, box plots, density plots, etc.), and save the
data locally as CSV files:
Use the ALS case-study data or the SOCR Knee Pain Data to explore some bivariate relations (e.g., bivariate plots, correlations, tables, cross-tables, etc.).
Use 07_UMich_AnnArbor_MI_TempPrecipitation_HistData_1900_2015 data
to show the relations between temperature and time. [Hint: use geom_line or
geom_bar].
Some sample code for dealing with the table of temperatures data is included
below.
Temp_Data <- as.data.frame(read.csv("https://umich.instructure.com/files/706163/download?download_frd=1",
                                    header=T, na.strings=c("", ".", "NA", "NR")))
summary(Temp_Data)
# View(Temp_Data); colnames(Temp_Data)
# View(longTempData)
bar2 <- ggplot(longTempData, aes(x = Months, y = Temps, fill = Months)) +
  geom_bar(stat = "identity")
print(bar2)
bar3 <- ggplot(longTempData, aes(x = Year, y = Temps, fill = Months)) +
  geom_bar(stat = "identity")
print(bar3)
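Note that longTempData, used in the ggplot calls above, is not constructed in the snippet; one hedged way to derive it from Temp_Data (assuming the file has a Year column and one column per month) is:
library(reshape2)
longTempData <- melt(Temp_Data, id.vars = "Year",
                     variable.name = "Months", value.name = "Temps")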
Introduce (artificially) some missing data, impute the missing values and
examine the differences between the original, incomplete, and imputed data.
Generate a surface plot for the (RF) Knee Pain data illustrating the 2D distribution
of locations of the patient reported knee pain (use plot_ly and kernel density
estimation).
Rebalance the groups of ALS (training data) patients according to Age > 50 and Age ≤ 50 using synthetic minority oversampling (SMOTE) to ensure approximately equal cohort sizes. (Hint: you may need to set 1 as the minority class.)
Use the California Ozone Data to generate a summary report. Make sure to include a summary of every variable and the structure of the data, convert variables to appropriate data types, discuss the trend of the average ozone concentration, explore the differences in ozone concentration for a specific area (you may select year 2006), and explore the seasonal changes in ozone concentration.
References
https://plot.ly/r/
http://www.statmethods.net/management
Chapter 4
Data Visualization
4.3 Composition
In this section, we will see composition plots for different types of variables and
data structures.
One of the first graphs we learn is the histogram. In R, the command hist() is applied to a vector of values and used for plotting histograms. The famous nineteenth-century statistician Karl Pearson introduced histograms as graphical representations of the distribution of numerical data, where the bins form a partition (disjoint and covering sets) of the data range. Finally, we compute the relative frequency representing the number of observations that fall within each bin interval. The histogram just plots a piece-wise step function defined over the union of the bin intervals whose height equals the observed relative frequencies (Fig. 4.2).
Fig. 4.2 Overlay of Normal
distribution histogram and
density curve plot
set.seed(1)
x <- rnorm(1000)
hist(x, freq=T, breaks = 10)
lines(density(x), lwd=2, col="blue")
t <- seq(-3, 3, by=0.01)
lines(t, 550*dnorm(t,0,1), col="magenta")   # add the theoretical density line
Here, freq=T shows the frequency (count) for each bin and breaks controls the number of bars in the histogram. The shape of the last histogram we drew is very close to a Normal distribution (because we sampled from this distribution using rnorm). We can add a density line to the histogram (Fig. 4.3).
We are all very familiar with pie charts that show us the components of a big “cake”. Although pie charts provide effective and simple visualizations in certain situations, it may be difficult to compare segments within a pie chart or across different pie charts. Other plots, like bar charts, box plots, or dot plots, may be attractive alternatives.
We will use the Letter Frequency Data on SOCR website to illustrate the use of
pie charts.
library(rvest)
wiki_url <- read_html("http://wiki.socr.umich.edu/index.php/SOCR_LetterFrequencyData")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body-primary" role="main">\n\t<a id="top ...
par(mfrow=c(1, 2))
pie(letter$English[1:10], labels=letter$Letter[1:10],
    col=rainbow(10, start=0.1, end=0.8), clockwise=TRUE, main="First 10 Letters Pie Chart")
pie(letter$English[1:10], labels=letter$Letter[1:10],
    col=rainbow(10, start=0.1, end=0.8), clockwise=TRUE, main="First 10 Letters Pie Chart")
legend("topleft", legend=letter$Letter[1:10], cex=1.3, bty="n", pch=15, pt.cex=1.8,
       col=rainbow(10, start=0.1, end=0.8), ncol=1)
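The step that builds the letter data frame from the parsed page is not shown; a likely form, which would precede the pie() calls and mirrors the html_table() pattern used elsewhere in this chapter (the table index is an assumption), is:
letter <- html_table(html_nodes(wiki_url, "table")[[1]])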
Another common data visualization method is the heatmap. Heat maps help us intuitively visualize the individual values in a matrix and are widely used in genetics research and financial applications. We will illustrate the use of heat maps based on a neuroimaging genetics case-study about the association (p-values) of different brain regions of
interest (ROIs) and genetic traits (SNPs) for Alzheimer’s disease (AD) patients,
subjects with mild cognitive impairment (MCI), and normal controls (NC). First,
let’s import the data into R. The data are 2D arrays where the rows represent
different genetic SNPs, columns represent brain ROIs, and the cell values
represent the strength of the SNP-ROI association, a probability value (smaller p-
values indicate stronger neuroimaging-genetic associations).
AD_Data <- read.table("https://umich.instructure.com/files/330387/download?download_frd=1",
                      header=TRUE, row.names=1, sep=",", dec=".")
MCI_Data <- read.table("https://umich.instructure.com/files/330390/download?download_frd=1",
                       header=TRUE, row.names=1, sep=",", dec=".")
NC_Data <- read.table("https://umich.instructure.com/files/330391/download?download_frd=1",
                      header=TRUE, row.names=1, sep=",", dec=".")
require(graphics)
require(grDevices)
library(gplots)
We may also want to set up the row (rc) and column (cc) colors for each cohort.
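The color-definition code is not included here; a hedged sketch of one way to define the side-bar colors (the specific palettes are illustrative assumptions, not the book's exact choices):
rc <- rainbow(nrow(AD_Data), start = 0, end = 0.3)   # row-side colors, one per SNP
cc <- rainbow(ncol(AD_Data), start = 0, end = 0.3)   # column-side colors, one per ROI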
Fig. 4.6 Hierarchically clustered heatmap for the Alzheimer’s disease (AD) cohort of the
dementia study. The rows indicate the unique SNP reference sequence (rs) IDs and the columns
index specific brain regions of interest (ROIs) that are associated with the genomic biomarkers
(rows)
Finally, we can plot the heat maps by specifying the input type of heatmap() to
be a numeric matrix (Figs. 4.6, 4.7, and 4.8).
Fig. 4.7 Hierarchically clustered heatmap for the Mild Cognitive Impairment (MCI) cohort
In the heatmap() function, the first argument provides the input matrix we want to use. col is the color scheme; scale is a character indicating whether the values should be centered and scaled in the row direction, the column direction, or not at all ("row", "column", or "none"); RowSideColors and ColSideColors specify the colors used for the horizontal and vertical side bars. A minimal call is sketched below.
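A minimal sketch of the heatmap call for the AD cohort, under the rc/cc assumptions above (the exact color scheme and scaling are assumptions of the sketch):
heatmap(as.matrix(AD_Data), col = rev(heat.colors(256)), scale = "column",
        RowSideColors = rc, ColSideColors = cc)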
The differences between the AD, MCI, and NC heat maps are suggestive of
variations of genetic traits or alternative brain regions that may be affected in the
three clinically different cohorts.
Fig. 4.8 Hierarchically clustered heatmap for the healthy normal controls (NC) cohort
4.4 Comparison
Scatter plots use the 2D Cartesian plane to display a pair of variables. 2D points represent the values of the two variables corresponding to the two coordinate axes. The position of each 2D point is determined by the values of the first and second variables, which are mapped to the horizontal and vertical axes. If there is no clear dependent variable, either feature may be plotted on either axis.
library(ggplot2)
cat <- rep(c("A", "B", "C", "D", "E"), 10)
plot.1 <- qplot(x, y, geom="point", size=5*x, color=cat,
                main="GGplot with Relative Dot Size and Color")
print(plot.1)
Now, let’s draw a paired scatter plot with three variables. The input for the pairs() function is a matrix or data frame (Fig. 4.11).
z<-runif(50)
pairs(data.frame(x, y, z))
We can see that variable names are on the diagonal of this scatter plot matrix.
Each plot uses the column variable as its X-axis and row variable as its Y-axis.
Let’s see a real world data example. First, we can import the Mental Health Services Survey Data into R, which is available on the case-studies website.
Fig. 4.10 Simulated bubble plot depicting four variable features represented as x and y axes, size
and color
Fig. 4.11 A pairs plot depicts the bivariate relations in multivariate datasets
data1 <- read.table('https://umich.instructure.com/files/399128/download?download_frd=1', header=T)
head(data1)
##          STFIPS majorfundtype FacilityType Ownership Focus PostTraum GLBT
## 1     southeast             1            5         2     1         0    0
## 2     southeast             3            5         3     1         0    0
## 3     southeast             1            6         2     1         1    1
## 4    greatlakes            NA            2         2     1         0    0
## 5 rockymountain             1            5         2     3         0    0
## 6       mideast            NA            2         2     1         0    0
##   num qual supp
## 1   5   NA   NA
## 2   4   15    4
## 3   9   15   NA
## 4   7   14    6
## 5   9   18   NA
## 6   8   14   NA
attach(data1)
From the head() output, we observe that there are a lot of NA’s in the dataset. The pairs() function automatically deals with this problem (Figs. 4.12 and 4.13).
pairs(data1[, 5:10])
Figure 4.12 represents just one of the plots shown in the collage on Fig. 4.13. We can see that Focus and PostTraum have no relationship - Focus can equal 3 or 1 for either of the PostTraum values (0 or 1). On the other hand, larger supp values tend to correspond to larger qual values. To see this trend, we can also make a plot using the qplot function, which allows us to add a smooth model curve forecasting a possible trend (Fig. 4.14).
Fig. 4.13 A more elaborate 6D pairs plot showing the type and scale of each variable and their bivariate relations
Fig. 4.14 Plotting the bivariate trend along with its confidence limits
plot.2 <- qplot(qual, supp, data = data1, geom = c("point", "smooth"))
print(plot.2)
You can also use the human height and weight dataset or the knee pain dataset
to illustrate some interesting scatter plots.
Jitter plots can help us deal with overplotting when we have many points in the data. The function we will be using, position_jitter(), is part of the ggplot2 package. Let’s use the earthquake data for this example and compare plots with and without position_jitter(), as sketched below (Figs. 4.15 and 4.16).
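A hedged sketch of the two comparison plots (the exact aesthetics are assumptions based on Figs. 4.15 and 4.16):
# without jitter
ggplot(earthquake, aes(Depth, Latitude, group = Magt, color = Magt)) + geom_point()
# with jitter and lower opacity
ggplot(earthquake, aes(Depth, Latitude, group = Magt, color = Magt)) +
  geom_point(position = position_jitter(), alpha = 0.5)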
Fig. 4.15 Jitter plot of magnitude type against depth and latitude (Earthquake dataset)
Fig. 4.16 A lower opacity jitter plot of magnitude type against depth and latitude
# library("xml2"); library("rvest") wiki_url <-
read_html("http://wiki.socr.umich.edu/index.php/SOCR_Data_Dinov_
021708_Earthquakes")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
Note that with the option alpha=0.5, the “crowded” places are darker than the places with only one data point. Sometimes, we need to add text to these points, i.e., add a label in aes() or add geom_text(). The result may look messy (Fig. 4.17).
Fig. 4.17 Another version of the jitter plot of magnitude type explicitly listing the Earthquake ID
label
Let’s try to fix the overlap of points and labels. We need to add
check_overlap in geom_text and adjust the positions of the text labels with respect
to the points (Figs. 4.18 and 4.19).
# Or you can simply use the text to denote the positions of points.
ggplot(earthquake, aes(Depth, Latitude, group=Magt, color=Magt,
                       label=rownames(earthquake))) +
  geom_text(check_overlap = T, vjust = 0, nudge_y = 0, size = 3, angle = 45)
Bar plots, or bar charts, represent grouped data with rectangular bars. There are many variants of bar charts for comparisons among categories. Typically, either horizontal or vertical bars are used where one of the axes shows the compared categories and the other axis represents a discrete value. It’s possible, and sometimes desirable, to plot bar graphs including bars clustered by groups.
Fig. 4.18 Yet another version of the previous jitter plot illustrating label specifications
Fig. 4.19 This jitter plot suppresses the scatter point bubbles in favor of ID labels
Fig. 4.20 Example of a labeled boxplot using simulated data with grouping categorical labels
In R, we have the barplot() function to generate bar plots. The input for
barplot() is either a vector or a matrix (Fig. 4.20).
x <- matrix(runif(50), ncol=5, dimnames=list(letters[1:10], LETTERS[1:5]))
x
##            A          B         C           D         E
## a 0.64397479 0.75069788 0.4859278 0.068299279 0.5069665
## b 0.21981304 0.84028392 0.7489431 0.130542241 0.2694441
## c 0.08903728 0.87540556 0.2656034 0.146773063 0.6346498
## d 0.13075121 0.01106876 0.7586781 0.860316695 0.9976566
## e 0.87938851 0.04156918 0.1960069 0.949276015 0.5050743
## f 0.65204025 0.21135891 0.3774320 0.896443296 0.9332330
## g 0.02814806 0.72618285 0.5603189 0.113651731 0.1912089
## h 0.13106307 0.79411904 0.4526415 0.793385952 0.4847625
## i 0.15759514 0.63369297 0.8861631 0.004317772 0.6341256
## j 0.47347613 0.14976052 0.5887866 0.698139910 0.2023031
In this example there are 20 bars. The x-axis location of the middle of the first bar is 1.5 (there is one empty space before the first bar), and the middle of the last bar is at 24.5. The bar centers are given by x = seq(1.5, 21, by=1) + rep(c(0, 1, 2, 3, 4), each=4): seq(1.5, 21, by=1) starts at 1.5 and generates the 20 base positions, and rep(c(0, 1, 2, 3, 4), each=4) adds 0 to the first group, 1 to the second group, and so forth. Thus, we have the desired positions on the x-axis. The y-axis positions are obtained just by adding 0.1 to each bar height.
Fig. 4.21 Statistical barplot showing point-estimates and their error limits (simulated data)
We can also add standard deviations to the means on the bars. To do this, we need to use the arrows() function with the option angle=90; the result is shown in Fig. 4.21.
We are now ready to separate the groups and compute the group means.
data2.matrix <- as.data.frame(data2)
Blacks <- data2[which(data2$race=="black"), ]
Other <- data2[which(data2$race=="other"), ]
Hispanic <- data2[which(data2$race=="hispanic"), ]
White <- data2[which(data2$race=="white"), ]
x <- cbind(B, O, H, W)
x
##          B     O    H        W
## [1,] 9.165  9.12 8.67 8.950000
## [2,] 9.930 10.32 9.61 9.911667
Now that we have a numerical matrix of the means, we can compute a second-order statistic, the standard deviation, and plot it along with the means to illustrate the amount of dispersion for each variable (Fig. 4.22).
bar <- barplot(x, ylim=c(0, max(x)+2.0), beside=TRUE,
               legend.text = c("age", "service"), args.legend = list(x = "right"))
text(labels=round(as.vector(as.matrix(x)), 2), x=seq(1.4, 21, by=1.5),
     y=11.5)   # y=as.vector(as.matrix(x[1:2, ])) + 0.3
In general, a graph is an ordered pair G = (V, E) of vertices (V), i.e., nodes or points, and edges (E), arcs or lines connecting pairs of nodes in V. A tree is a special type of acyclic graph that does not include looping paths. Visualization of graphs is critical in many biosocial and health studies, and we will see many such examples throughout this textbook.
In Chaps. 10 and 13, we will learn more about how to build tree models and
other clustering methods, and in Chap. 23, we will discuss deep learning and
neural networks, which have a direct graphical representation.
This section will be focused on displaying tree graphs. We will use a self-
efficacy study, 02_Nof1_Data.csv, for this demonstration.
data3 <- read.table("https://umich.instructure.com/files/330385/download?download_frd=1",
                    sep=",", header = TRUE)
head(data3)
##   ID Day Tx SelfEff SelfEff25  WPSS SocSuppt PMss PMss3 PhyAct
## 1  1   1  1      33         8  0.97     5.00 4.03  1.03     53
## 2  1   2  1      33         8 -0.17     3.87 4.03  1.03     73
## 3  1   3  0      33         8  0.81     4.84 4.03  1.03     23
## 4  1   4  0      33         8 -0.41     3.62 4.03  1.03     36
## 5  1   5  1      33         8  0.59     4.62 4.03  1.03     21
## 6  1   6  1      33         8 -1.16     2.87 4.03  1.03      0
Fig. 4.24 Hierarchical clustering dendrogram of the 900 self-efficacy records of 30 participants including the nine features tracked over a month
We will use hclust to build the hierarchical cluster model. hclust only takes inputs with a dissimilarity structure, as produced by dist(). We also use the "ave" method for agglomeration; see the tree graph in Fig. 4.24.
hc <- hclust(dist(data3), method='ave')
par(mfrow=c(1, 1))
plot(hc)
When we specify no limit on the number of cluster groups, we get the graph in Fig. 4.24, which is not easy to interpret. Luckily, cutree() can help us limit the number of clusters. cutree() takes an hclust object and returns a vector of group indicators for all observations.
require(graphics)
mem <- cutree(hc, k = 10)
Fig. 4.23 Barplot of counts for different types of child trauma by race (color label)
4.4.5 Correlation Plots
There are several visualization methods (parameter method) in the corrplot package, named "circle", "square", "ellipse", "number", "shade", "color", and "pie".
Let’s use 03_NC_SNP_ROI_Assoc_P_values.csv again to investigate the
associations among SNPs using correlation plots.
The corrplot() function we will be using only accepts correlation matrices. So, we first need to obtain the correlation matrix of our data using the cor() function.
# install.packages("corrplot")
library(corrplot)
NC_Associations_Data <-
read.table("https://umich.instructure.com/files/3303
91/download?download_frd=1", header=TRUE, row.names=1, sep=",",
dec=".") M <- cor(NC_Associations_Data)
M[1:10, 1:10]
##              P2          P5          P9         P12         P13
## P2   1.00000000 -0.05976123  0.99999944 -0.05976123  0.21245299
## P5  -0.05976123  1.00000000 -0.05976131 -0.02857143  0.56024640
## P9   0.99999944 -0.05976131  1.00000000 -0.05976131  0.21248635
## P12 -0.05976123 -0.02857143 -0.05976131  1.00000000 -0.05096471
## P13  0.21245299  0.56024640  0.21248635 -0.05096471  1.00000000
##             P14         P15         P16         P17         P18
## P12 -0.02857143 -0.04099594 -0.04099594 -0.02857143 -0.02857143
## P13  0.56024640  0.36613665  0.36613665 -0.05096471 -0.05096471
## P14  1.00000000  0.69821536  0.69821536 -0.02857143 -0.02857143
## P15  0.69821536  1.00000000  1.00000000 -0.04099594 -0.04099594
## P16  0.69821536  1.00000000  1.00000000 -0.04099594 -0.04099594
## P17 -0.02857143 -0.04099594 -0.04099594  1.00000000 -0.02857143
## P18 -0.02857143 -0.04099594 -0.04099594 -0.02857143  1.00000000
We will illustrate alternative correlation plots using the corrplot function in
Figs. 4.26, 4.27, 4.28, 4.29, 4.30, and 4.31.
corrplot(M, method = "circle", title = "circle", tl.cex = 0.5, tl.col = 'black',
         mar=c(1, 1, 1, 1))
# par specs c(bottom, left, top, right), which gives the margin size specified in inches
corrplot(M, method = "square", title = "square", tl.cex = 0.5, tl.col = 'black',
         mar=c(1, 1, 1, 1))
Fig. 4.26 Correlation plot of regional brain volumes of the healthy normal controls using circles
Fig. 4.27 The same correlation plot of regional NC brain volumes using squares
Fig. 4.28 The same correlation plot of regional NC brain volumes using ellipses
Fig. 4.29 The same correlation plot of regional NC brain volumes using pie segments
Fig. 4.30 Upper diagonal correlation plot of regional NC brain volumes using circles
Fig. 4.31 Mixed correlation plot of regional NC brain volumes using circles and numbers
4.5 Relationships
Line charts display a series of data points (e.g., observed intensities (Y) over time
(X)) by connecting them with straight-line segments. These can be used to either
track temporal changes of a process or compare the trajectories of multiple cases,
time series, or subjects over time, space, or state.
In this section, we will utilize the Earthquakes dataset on SOCR website. It
records information about earthquakes that occurred between 1969 and 2007 with
magnitudes larger than 5 on the Richter scale.
plot4 <- ggplot(earthquake, aes(Depth, Latitude, group=Magt, color=Magt)) +
  geom_line()
print(plot4)
There are two important components in the script. The first part, ggplot(earthquake, aes(Depth, Latitude, group=Magt, color=Magt)), specifies the setting of the plot: dataset, grouping, and color. The second part specifies that we are going to draw lines between data points. In later chapters, we will frequently use the ggplot2 package, whose generic structure always involves concatenating function calls like function1 + function2 + ....
Fig. 4.32 Line plot of Earthquake magnitude type by its ground depth and latitude
4.5.3 Distributions
To create a 2D kernel density estimation plot, we will use the eruption data from the “Old Faithful” geyser in Yellowstone National Park, Wyoming, available in R as MASS::geyser. The kde2d() function is used for the 2D kernel density estimation.
kd <- with(MASS::geyser, MASS::kde2d(duration, waiting, n = 50))
kd$x[1:5]
## [1] 0.8333333 0.9275510 1.0217687 1.1159864 1.2102041
kd$y[1:5]
## [1] 43.00000 44.32653 45.65306 46.97959 48.30612
kd$z[1:5, 1:5]
We can then render the estimated density as an interactive 3D surface (Fig. 4.35).
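A hedged sketch of the call behind Fig. 4.35 (the transpose is an assumption to match plotly's surface convention, where rows of z index y):
library(plotly)
plot_ly(x = kd$x, y = kd$y, z = t(kd$z), type = "surface")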
Fig. 4.35 Interactive surface plot of kernel density for the Old Faithful geyser eruptions
Fig. 4.36 Interactive surface plot of kernel density for the R volcano dataset
volcano[1:10, 1:10]
##       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
##  [1,]  100  100  101  101  101  101  101  100  100   100
##  [2,]  101  101  102  102  102  102  102  101  101   101
##  [3,]  102  102  103  103  103  103  103  102  102   102
##  [4,]  103  103  104  104  104  104  104  103  103   103
##  [5,]  104  104  105  105  105  105  105  104  104   103
##  [6,]  105  105  105  106  106  106  106  105  105   104
##  [7,]  105  106  106  107  107  107  107  106  106   105
##  [8,]  106  107  107  108  108  108  108  107  107   106
##  [9,]  107  108  108  109  109  109  109  108  108   107
## [10,]  108  109  109  110  110  110  110  109  109   108
plot_ly(z = volcano, type = "surface")
Fig. 4.37 Interactive surface plot of kernel density for the 2D brain imaging data
# install.packages("jpeg")   ## if necessary
library(jpeg)
# install.packages("spatstat")
# package spatstat has a function blur() that applies a Gaussian blur
library(spatstat)
library(oro.nifti)   # provides readNIfTI()
brainURL <- "http://socr.umich.edu/HTML5/BrainViewer/data/TestBrain.nii.gz"
brainFile <- file.path(tempdir(), "TestBrain.nii.gz")
download.file(brainURL, dest=brainFile, quiet=TRUE)
brainVolume <- readNIfTI(brainFile, reorient=FALSE)
brainVolDims <- dim(brainVolume); brainVolDims
Fig. 4.43 Intensities of the fifth timepoint epoch of the 4D fMRI time series
Fig. 4.44 The complete time course of the raw (blue) and two smoothed versions of the fMRI
timeseries at one specific voxel location (30, 30, 15)
# See examples here: https://cran.r-project.org/web/packages/oro.nifti/vignettes/nifti.pdf
# and here: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0089470
fMRIURL <- "http://socr.umich.edu/HTML5/BrainViewer/data/fMRI_FilteredData_4D.nii.gz"
fMRIFile <- file.path(tempdir(), "fMRI_FilteredData_4D.nii.gz")
download.file(fMRIURL, dest=fMRIFile, quiet=TRUE)
(fMRIVolume <- readNIfTI(fMRIFile, reorient=FALSE))
## NIfTI-1 format
## Type : nifti
## Data Type : 4 (INT16)
## Bits per Pixel : 16
## Slice Code : 0 (Unknown)
## Intent Code : 0 (None)
## Qform Code : 1 (Scanner_Anat)
## Sform Code : 0 (Unknown)
## Dimension : 64 x 64 x 21 x 180
## Pixel Dimension : 4 x 4 x 6 x 3
## Voxel Units      : mm
## Time Units       : sec
fMRIVolDims <- dim(fMRIVolume); fMRIVolDims   # dimensions: 64 x 64 x 21 x 180
## [1]  64  64  21 180
time_dim <- fMRIVolDims[4]; time_dim
## [1] 180
# Plot the 4D array of imaging data in a 5x5 grid of images.
# The first three dimensions are spatial locations of the voxel (volume element)
# and the fourth dimension is time for this functional MRI (fMRI) acquisition.
image(fMRIVolume, zlim=range(fMRIVolume)*0.95)
hist(fMRIVolume)
# zlim=range(fMRIVolume)*0.9)
overlay(fMRIVolume, fMRIVolume[,,,5], zlim.x=range(fMRIVolume)*0.95)
# overlay(fMRIVolume, stat_fmri_test[,,,5], zlim.x=range(fMRIVolume)*0.95)
x1 <- c(1:180)
y1 <- loess(fMRIVolume[30, 30, 10,] ~ x1, family = "gaussian")
lines(x1, smooth(fMRIVolume[30, 30, 10,]), col = "red", lwd = 2)
lines(ksmooth(x1, fMRIVolume[30, 30, 10,], kernel = "normal", bandwidth = 5),
      col = "green", lwd = 3)
4.6 Appendix
hc = hclust(dist(data.raw), 'ave')
# the agglomeration method can be specified as "ward.D", "ward.D2", "single", "complete",
# "average" (= UPGMA), "mcquitty" (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC)
Fig. 4.45 Clustering dendrogram using the Health Behavior Risks case-study
summary(data_2$TOTINDA); summary(data_2$RFDRHV4)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    1.00    1.00    1.56    2.00    9.00
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##     1.0     1.0     1.0     1.3     1.0     9.0
cutree(hc, k = 2)
## [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 …
## [885] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## [919] 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## [953] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## [987] 2 2 2 2 2 2 2 2 2 2 2 2 2 2
# alternatively, specify the height, which is the value of the criterion associated
# with the clustering method for the particular agglomeration -- cutree(hc,
##
##   1   2
## 930  70
Let’s try to identify the number of cases for varying number of clusters.
# To identify the number of cases for varying number of clusters, we can
# combine calls to cutree and table in a call to sapply --
# to see the sizes of the clusters for $2 \le k \le 10$ cluster-solutions:
# numbClusters = 4
myClusters = sapply(2:5, function(numbClusters) table(cutree(hc, numbClusters)))
names(myClusters) <- paste("Number of Clusters=", 2:5, sep = "")
myClusters
## $`Number of Clusters=2`
##
##   1   2
## 930  70
##
## $`Number of Clusters=3`
##
##   1   2   3
## 930  50  20
##
## $`Number of Clusters=4`
##
##   1   2   3   4
## 500 430  50  20
##
## $`Number of Clusters=5`
##
##   1   2   3   4   5
## 500 430  10  40  20
# Perhaps there are intrinsically 3 groups here, e.g., 1, 2 and 9
groups.k.3 <- cutree(hc, k = 3)
sapply(unique(groups.k.3), function(g) data_2$TOTINDA[groups.k.3 == g])
## [[1]]
##   [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## …
## [911] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
##
## [[2]]
##  [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 9 9 9 9 9
## [36] 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
##
## [[3]]
##  [1] 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9

sapply(unique(groups.k.3), function(g) data_2$RFDRHV4[groups.k.3 == g])
## [[1]]
##   [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## …
## [911] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
##
## [[2]]
##  [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## [36] 2 2 2 2 2 9 9 9 9 9 9 9 9 9 9
##
## [[3]]
##  [1] 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
# Note that there is quite a dependence between the outcome variables.
plot(data_2$RFDRHV4, data_2$TOTINDA)
# drill down deeper
table(groups.k.3, data_2$RFDRHV4)
##
## groups.k.3   1   2   9
##          1 910  20   0
##          2   0  40  10
##          3   0   0  20
To characterize the clusters, we can look at cluster summary statistics, like the
median, of the variables that were used to perform the cluster analysis. These can
be broken down by the groups identified by the cluster analysis. The aggregate
function will compute statistics (e.g., median) on many variables simultaneously.
Let’s examine the median values for each variable we used in the cluster analysis,
broken up by cluster groups:
aggregate(data_2, list(groups.k.3), median)
##   Group.1    ID AGE_G SEX RACEGR3 IMPEDUC IMPMRTL EMPLOY1 INCOMG CVDINFR4
## 1       1 465.5     5   2       1       5       1       2      4        2
## 2       2 955.5     6   2       4       6       5       8      6        2
## 3       3 990.5     6   2       9       6       6       8      6        2
##   CVDCRHD4 CVDSTRK3 DIABETE3 RFSMOK3 RFDRHV4 FRTLT1 VEGLT1 TOTINDA
## 1      2.0        2        3       1       1      1      1       1
## 2      2.0        2        3       2       2      9      9       2
## 3      4.5        2        4       9       9      9      9       9
4.6.2 Additional ggplot Examples
This example uses the SOCR Home Price Index data of 19 major US cities from
1991 to 2009 (Fig. 4.47).
Fig. 4.48 Predicting the San Francisco home prices using data from the Los Angeles home sales
We can also use ggplot to draw pairs plots (Fig. 4.49).
# install.packages("GGally")
require(GGally)
pairs <- hm_price_index[, 10:15]
head(pairs)
##   GA-Atlanta IL-Chicago MA-Boston MI-Detroit MN-Minneapolis NC-Charlotte
## 1      69.61      70.04     64.97      58.24          64.21        73.32
## 2      69.17      70.50     64.17      57.76          64.20        73.26
## 3      69.05      70.63     63.57      57.63          64.19        72.75
## 4      69.40      71.09     63.35      57.85          64.30        72.88
## 5      69.69      71.36     63.84      58.36          64.75        73.26
## 6      70.14      71.66     64.25      58.90          64.95        73.49
Fig. 4.49 A more elaborate pairs plot of the home price index dataset illustrating the distributions
of home prices within a metropolitan area, as well as the paired relations between regions
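The call producing Fig. 4.49 is not included above; with GGally loaded, a minimal sketch is:
ggpairs(pairs)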
Fig. 4.50 Bubble plot of Los Angeles neighborhood location (longitude vs latitude), population
size, and income
library(rvest)
require(ggplot2)
# draw data
wiki_url <- read_html("http://wiki.socr.umich.edu/index.php/SOCR_Data_LA_Neighborhoods_Data")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body-primary" role="main">\n\t<a id="top ...
LA_Nbhd_data <- html_table(html_nodes(wiki_url, "table")[[2]])
# display several lines of data
head(LA_Nbhd_data)
##           LA_Nbhd Income Schools Diversity Age Homes Vets Asian
## 1 Adams_Normandie  29606     691       0.6  26  0.26 0.05  0.05
## 2          Arleta  65649     719       0.4  29
theme_set(theme_grey())
# treat ggplot as a variable
# When we supply "data", we can access its columns directly, e.g., "x = Longitude"
plot1 = ggplot(data=LA_Nbhd_data, aes(x=LA_Nbhd_data$Longitude, y=LA_Nbhd_data$Latitude))
# you can easily add attributes, points, labels (e.g., text)
plot1 + geom_point(aes(size=Population, fill=LA_Nbhd_data$Income), pch=21,
                   stroke=0.2, alpha=0.7, color=2) +
  geom_text(aes(label=LA_Nbhd_data$LA_Nbhd), size=1.5, hjust=0.5, vjust=2,
            check_overlap = T) +
  scale_size_area() +
  scale_fill_distiller(limits=c(range(LA_Nbhd_data$Income)), palette='RdBu',
                       na.value='white', name='Income') +
  scale_y_continuous(limits=c(min(LA_Nbhd_data$Latitude), max(LA_Nbhd_data$Latitude))) +
  coord_fixed(ratio=1) +
  ggtitle('LA Neighborhoods Scatter Plot (Location, Population, Income)')
Observe that some areas (e.g., Beverly Hills) have disproportionately higher incomes. It is also worth pointing out that the resulting plot resembles the actual map of LA County (Fig. 4.51).
This example uses ggplot to interrogate the SOCR Latin letter frequency data,
which includes the frequencies of the 26 common Latin characters in several
derivative languages. There is quite a variation between the frequencies of Latin
letters in different languages (Figs. 4.52 and 4.53).
Fig. 4.51 The Los Angeles county map resembles the plot on Fig. 4.50
Fig. 4.53 Pie chart similar to the stacked bar chart, Fig. 4.52
##     Turkish           Swedish           Polish           Toki_Pona
##  Min.   :0.00000   Min.   :0.00000   Min.   :0.00000   Min.   :0.00000
##  1st Qu.:0.01000   1st Qu.:0.01000   1st Qu.:0.01500   1st Qu.:0.00000
##  Median :0.03000   Median :0.03000   Median :0.03000   Median :0.03000
##  Mean   :0.03667   Mean   :0.03704   Mean   :0.03704   Mean   :0.03704
##  3rd Qu.:0.05500   3rd Qu.:0.05500   3rd Qu.:0.04500   3rd Qu.:0.05000
##  Max.   :0.12000   Max.   :0.10000   Max.   :0.20000   Max.   :0.17000
##      Dutch             Avgerage
##  Min.   :0.00000   Min.   :0.00000
##  1st Qu.:0.01000   1st Qu.:0.01000
##  Median :0.02000   Median :0.03000
##  Mean   :0.03704   Mean   :0.03741
##  3rd Qu.:0.06000   3rd Qu.:0.06000
##  Max.   :0.19000   Max.   :0.12000
head(letter)
##   Letter English French German Spanish Portuguese Esperanto Italian
## 1      a    0.08   0.08   0.07    0.13       0.15      0.12    0.12
## 2      b    0.01   0.01   0.02    0.01       0.01      0.01    0.01
## 3      c    0.03   0.03   0.03    0.05       0.04      0.01    0.05
## 4      d    0.04   0.04   0.05    0.06       0.05      0.03    0.04
## 5      e    0.13   0.15   0.17    0.14       0.13      0.09    0.12
## 6      f    0.02   0.01   0.02    0.01       0.01      0.01    0.01
##   Turkish Swedish Polish Toki_Pona Dutch Avgerage
## 1    0.12    0.09   0.08      0.17  0.07     0.11
## 2    0.03    0.01   0.01      0.00  0.02     0.01
## 3    0.01    0.01   0.04      0.00  0.01     0.03
## 4    0.05    0.05   0.03      0.00  0.06     0.04
## 5    0.09    0.10   0.07      0.07  0.19     0.12
## 6    0.00    0.02   0.00      0.00  0.01     0.01
sum(letter[, -1])   # reasonable
## [1] 13.08
require(reshape)
library(scales)
You can experiment with the SOCR interactive motion chart (http://socr.umich.edu/HTML5/MotionChart/); see Fig. 4.54.
Use the Divorce data (Case Study 01) or the TBI dataset (CaseStudy11_TBI) to
generate appropriate visualization of histograms, density plots, pie charts,
heatmaps, barplots, and paired correlation plots.
Use the SOCR Resource Hierarchical data (JSON) or the DSPA Dynamic
Certificate Map (JSON) to generate some tree/graph displays of the structural
information.
The code fragment below shows an example of processing a JSON hierarchy.
suppressMessages(library(networkD3))
library(data.tree)   # ToDataFrameNetwork() converts a data.tree hierarchy ("tree") built from the JSON into an edge list
treenetwork <- ToDataFrameNetwork(tree, "name")
simpleNetwork(treenetwork, fontSize = 10)
• Use SOCR Oil Gas Data to generate plots: (i) read the data table; you may need to fill the inconsistent values with NAs; (ii) data preprocessing: select variables, convert types, etc.; (iii) generate two figures: the first includes two subplots, consumption plots and production plots; the second includes three subplots, for fossil, nuclear, and renewable energy, respectively. To draw the subplots, you can use facet_grid(); (iv) all figures should have year as the x axis; (v) the first figure should include three curves (fossil, nuclear and renewable) for each subplot; the second figure should include two curves (consumption and production) for each subplot.
• Use SOCR Ozone Data to generate a correlation plot with the variables
MTH_1, MTH_2, ..., MTH_12. (Hint: you need to obtain the correlation
matrix first, then apply the corrplot package. Try some alternative methods as
well, circle, pie, mixed etc.)
• Use SOCR CA Ozone Data to generate a 3D surface plot (Using variables
Longitude, Latitude and O3).
• Generate a sequence of random numbers from student t distribution. Draw the
sample histogram and compare it with normal distribution. Try different
degrees of freedom. What do you find? Does varying the seed and
regenerating the student t sample change that conclusion?
• Use the SOCR Parkinson’s Big Meta data (only rows with time = 0) to generate a heatmap plot. Set RowSideColors, ColSideColors and rainbow. (Hint: you may need to select columns, properly convert the data, and normalize it.)
• Use the SOCR 2011 US Jobs Ranking data to draw a scatter plot of Overall_Score vs. Average_Income(USD); include a title and label the axes. Then try qplot for Overall_Score vs. Average_Income(USD): (1) fill with the Stress_Level; (2) size the points according to Hiring_Potential; and (3) label using Job_Title.
• Use SOCR Turkiye Student Evaluation Data to generate trees and graphs, using
cutree() and select any k you prefer. (Use variables Q1–Q28).
References
http://www.statmethods.net/graphs/
http://www.springer.com/us/book/9783319497501
www.r-graph-gallery.com
Chapter 5
Linear Algebra & Matrix Computing
The easiest way to create a matrix is by using the matrix() function, which
organizes the elements of a vector into specified positions into a matrix.
seq1 <- seq(1:6)
m1 <- matrix(seq1, nrow=2, ncol=3)
m1
##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    2    4    6
m2 <- diag(seq1)
m2
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]    1    0    0    0    0    0
## [2,]    0    2    0    0    0    0
## [3,]    0    0    3    0    0    0
## [4,]    0    0    0    4    0    0
## [5,]    0    0    0    0    5    0
## [6,]    0    0    0    0    0    6
m3 <- matrix(rnorm(20), nrow=5)
m3
##            [,1]        [,2]       [,3]       [,4]
## [1,]  0.4877535  0.22081284 -0.6067573 -0.8982306
## [2,] -0.1672924 -1.49020015  0.3038424 -0.1875045
## [3,] -0.4771204 -0.39004837  1.1160825 -0.6948070
## [4,] -0.9274687  0.08378863  0.3846627  0.2386284
## [5,]  0.8672767 -0.86752831  1.5536853  0.3222158
The function diag() is very useful. When the object is a vector, it creates a
diagonal matrix with the vector in the principal diagonal.
diag(c(1, 2, 3))
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    2    0
## [3,]    0    0    3
# when the argument is a matrix, diag() extracts its main diagonal
diag(m1)
## [1] 1 4
5.1 Matrices (Second Order Tensors)
diag(4)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0    0    0
## [2,]    0    1    0    0
## [3,]    0    0    1    0
## [4,]    0    0    0    1
We can also append a vector to a matrix as a new column using cbind() (or as a new row using rbind()):
c1 <- 1:5
cbind(m3, c1)
##                                                    c1
## [1,]  0.4877535  0.22081284 -0.6067573 -0.8982306   1
## [2,] -0.1672924 -1.49020015  0.3038424 -0.1875045   2
## [3,] -0.4771204 -0.39004837  1.1160825 -0.6948070   3
## [4,] -0.9274687  0.08378863  0.3846627  0.2386284   4
## [5,]  0.8672767 -0.86752831  1.5536853  0.3222158   5
r1 <- 1:4
m5 <- rbind(m3, r1)
m5
##          [,1]        [,2]       [,3]       [,4]
##     0.4877535  0.22081284 -0.6067573 -0.8982306
##    -0.1672924 -1.49020015  0.3038424 -0.1875045
##    -0.4771204 -0.39004837  1.1160825 -0.6948070
##    -0.9274687  0.08378863  0.3846627  0.2386284
##     0.8672767 -0.86752831  1.5536853  0.3222158
## r1  1.0000000  2.00000000  3.0000000  4.0000000
Note that m5 has a row name, r1, for its last row. We can remove row/column names by setting them to NULL.
dimnames(m5) <- list(NULL, NULL)
m5
##            [,1]        [,2]       [,3]       [,4]
## [1,]  0.4877535  0.22081284 -0.6067573 -0.8982306
## [2,] -0.1672924 -1.49020015  0.3038424 -0.1875045
## [3,] -0.4771204 -0.39004837  1.1160825 -0.6948070
## [4,] -0.9274687  0.08378863  0.3846627  0.2386284
## [5,]  0.8672767 -0.86752831  1.5536853  0.3222158
## [6,]  1.0000000  2.00000000  3.0000000  4.0000000
5.2 Matrix Subscripts
5.3.1 Addition
Matrix addition is elementwise: elements in the same position are added to produce the result at that location.
m7 <- matrix(1:6, nrow=2)
m7
m8 <- matrix(2:7, nrow=2)
m8
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
m7 + m8
##      [,1] [,2] [,3]
## [1,]    3    7   11
## [2,]    5    9   13
5.3.2 Subtraction
m8 - m7
##      [,1] [,2] [,3]
## [1,]    1    1    1
## [2,]    1    1    1
m8 - 1
##      [,1] [,2] [,3]
## [1,]    1    3    5
## [2,]    2    4    6
5.3.3 Multiplication
Elementwise Multiplication
m8 * m7
##      [,1] [,2] [,3]
## [1,]    2   12   30
## [2,]    6   20   42
Matrix Multiplication
The resulting matrix will have the same number of rows as the first matrix and
the same number of columns as the second matrix.
dim(m8)
## [1] 2 3
m9 <- matrix(3:8, nrow=3)
m9
##      [,1] [,2]
## [1,]    3    6
## [2,]    4    7
## [3,]    5    8
dim(m9)
## [1] 3 2
m8 %*% m9
##      [,1] [,2]
## [1,]   52   88
## [2,]   64  109
We obtained a 2 × 2 matrix as the product of a 2 × 3 matrix and a 3 × 2 matrix. The product of two vectors taken this way is called the outer product. For two vectors u and v, the outer product is represented mathematically as uv^T. In R, the operator for the outer product is %o%.
u <- c(1, 2, 3, 4, 5)
v <- c(4, 5, 6, 7, 8)
u %o% v
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    4    5    6    7    8
## [2,]    8   10   12   14   16
## [3,]   12   15   18   21   24
## [4,]   16   20   24   28   32
## [5,]   20   25   30   35   40
u %*% t(v)
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    4    5    6    7    8
## [2,]    8   10   12   14   16
## [3,]   12   15   18   21   24
## [4,]   16   20   24   28   32
## [5,]   20   25   30   35   40
What are the differences between u %o% v, u %*% t(v), u * t(v), and u * v?
m8 / m7
##      [,1]     [,2]     [,3]
## [1,]  2.0 1.333333 1.200000
## [2,]  1.5 1.250000 1.166667
m8 / 2
##      [,1] [,2] [,3]
## [1,]  1.0  2.0  3.0
## [2,]  1.5  2.5  3.5
5.3.5 Transpose
The transpose of a matrix is a new matrix created by swapping the rows and columns of the original matrix. This is done with the simple function t().
m8
##      [,1] [,2] [,3]
## [1,]    2    4    6
## [2,]    3    5    7
t(m8)
##      [,1] [,2]
## [1,]    2    3
## [2,]    4    5
## [3,]    6    7
Notice that the [1, 2] element in m8 is the [2, 1] element in the transpose matrix
t(m8).
The inverse of a matrix, A^{-1}, is its multiplicative inverse. That is, multiplying the original matrix A by its inverse A^{-1} yields the identity matrix, which has 1's on the diagonal and 0's off the diagonal:
$$A A^{-1} = I.$$
For a 2 × 2 matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
its matrix inverse is
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
For higher dimensions, the formula for computing the inverse matrix is more
complex. In R, we can use the solve() function to calculate the matrix inverse, if it
exists.
m10 <- matrix(1:4, nrow=2)
m10
##      [,1] [,2]
## [1,]    1    3
## [2,]    2    4
solve(m10)
##      [,1] [,2]
## [1,]   -2  1.5
## [2,]    1 -0.5
m10 %*% solve(m10)
##      [,1] [,2]
## [1,]    1    0
## [2,]    0    1
Note that only some matrices have inverses. These are square matrices, i.e.,
they have the same number of rows and columns, and are non-singular.
Another function that can help us compute the inverse of a matrix is the ginv()
function under the MASS package, which reports the Moore-Penrose Generalized
Inverse of a matrix.
require(MASS)
## Loading required package: MASS
ginv(m10)
##      [,1] [,2]
## [1,]   -2  1.5
## [2,]    1 -0.5
Also, the same function solve() can be used to solve matrix equations. solve(A, b) returns the vector x in the equation b = Ax (i.e., x = A^{-1}b).
s1<-diag(c(2, 4, 6, 8))
s2<-c(1, 2, 3, 4)
solve(s1, s2)
The following Table 5.1 summarizes some basic matrix operation functions.
5.4 Matrix Algebra Notation
Let’s introduce the basic matrix notation. The product AB between matrices A and B is defined only if the number of columns in A equals the number of rows in B. That is, we can multiply an m × n matrix A by an n × k matrix B, and the result AB will be an m × k matrix. Each element of the product matrix, (AB)_{i,j}, represents the product of the i-th row in A and the j-th column in B, which are of the same length n. Matrix multiplication is row-by-column.
Linear algebra notation enables the mathematical analysis and the analytical
solution of systems of linear equations:
$$\begin{aligned} a + b + 2c &= 6 \\ 3a - 2b + c &= 2 \\ 2a + b - c &= 3 \end{aligned}$$
which can be written in matrix form as
$$\underbrace{\begin{pmatrix} 1 & 1 & 2 \\ 3 & -2 & 1 \\ 2 & 1 & -1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} a \\ b \\ c \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} 6 \\ 2 \\ 3 \end{pmatrix}}_{b}, \qquad x = A^{-1}b.$$
Compare this with the simple scalar equation
$$2x - 3 = 5.$$
The constant term, 3, can simply be joined with the right-hand side, b, to form a new term b' = 5 + 3 = 8. Thus, the shifting factor is mostly ignored in linear equations:
$$2x = 5 + 3 = 8.$$
This (simple) linear equation is solved by multiplying both sides by the inverse (reciprocal) of the x multiplier, 2:
$$\frac{1}{2}\, 2x = \frac{1}{2}\, 8.$$
Thus, the unique solution is:
$$x = \frac{8}{2} = 4.$$
So, let’s use exactly the same protocol to solve the corresponding matrix equation (linear equations, Ax = b) using R (the unknown is x, and the design matrix A and the constant vector b are known):
$$\underbrace{\begin{pmatrix} 1 & 1 & 2 \\ 3 & -2 & 1 \\ 2 & 1 & -1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} a \\ b \\ c \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} 6 \\ 2 \\ 3 \end{pmatrix}}_{b}.$$
A_matrix_values <- c(1, 1, 2, 3, -2, 1, 2, 1, -1)
A <- t(matrix(A_matrix_values, nrow=3, ncol=3))
b <- c(6, 2, 3)
# to solve Ax = b, x = A^{-1} %*% b
x <- solve(A, b)
x
## [1] 1.35 1.75 1.45
# Check the solution x = (1.35, 1.75, 1.45)
LHS <- A %*% x
round(LHS - b)
##      [,1]
## [1,]    0
## [2,]    0
## [3,]    0
How about if we want to triple-check the accuracy of the solve method to
provide accurate solutions to matrix-based systems of linear equations?
We can generate the solution (x) to the equation Ax = b using first principles: x = A^{-1}b.
A.inverse <- solve(A)   # the inverse matrix A^{-1}
x1 <- A.inverse %*% b
# check if x and x1 are the same
x; x1
## [1] 1.35 1.75 1.45
##      [,1]
## [1,] 1.35
## [2,] 1.75
## [3,] 1.45
round(x - x1, 6)
##      [,1]
## [1,]    0
## [2,]    0
## [3,]    0
5.4.3 The Identity Matrix
The identity matrix is the matrix analog to the multiplicative numeric identity, i.e.,
the number 1. Multiplying the identity matrix by any other matrix (B) does not
change the matrix B. For this to happen, the multiplicative identity matrix must
look like:
$$I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
The identity matrix is always a square matrix with diagonal elements 1 and 0 at
the off-diagonal elements.
If you follow the matrix multiplication rule above, you notice this works out:
$$XI = X.$$
In R, you can form an identity matrix as follows:
n <- 3   # pick dimensions
I <- diag(n); I
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
A %*% I; I %*% A
##      [,1] [,2] [,3]
## [1,]    1    1    2
## [2,]    3   -2    1
## [3,]    2    1   -1
##      [,1] [,2] [,3]
## [1,]    1    1    2
## [2,]    3   -2    1
## [3,]    2    1   -1
Let’s look at this notation more deeply. In the baseball player data, there are three quantitative variables: Height, Weight, and Age. Suppose the variable Weight is represented as the response random vector Y = (Y_1, ..., Y_n)^T. We can examine players’ Weight as a function of Age and Height.
# Data: https://umich.instructure.com/courses/38100/files/folder/data (01a_data.txt)
data <- read.table('https://umich.instructure.com/files/330381/download?download_frd=1',
                   as.is=T, header=T)
attach(data)
head(data)
##              Name Team       Position Height Weight   Age
## 1   Adam_Donachie  BAL        Catcher     74    180 22.99
## 2       Paul_Bako  BAL        Catcher     74    215 34.69
## 3 Ramon_Hernandez  BAL        Catcher     72    210 30.78
## 4    Kevin_Millar  BAL  First_Baseman     72    210 35.43
## 5     Chris_Gomez  BAL  First_Baseman     73    188 35.71
## 6   Brian_Roberts  BAL Second_Baseman     69    176 29.39
We can also use vector notation. We usually use bold to distinguish vectors from the individual elements:
$$\mathbf{Y} = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}, \qquad \mathbf{X}_1 = \begin{pmatrix} x_{1,1} \\ \vdots \\ x_{n,1} \end{pmatrix}, \qquad \mathbf{X}_2 = \begin{pmatrix} x_{1,2} \\ \vdots \\ x_{n,2} \end{pmatrix}.$$
Note that for the baseball players example, $x_{1,1} = Age_1$ and $x_{i,1} = Age_i$, where $Age_i$ represents the age of the i-th player; similarly, $x_{i,2} = Height_i$ represents the height of the i-th player. These vectors are also thought of as n × 1 matrices. It is convenient to represent both covariates as a matrix:
$$X = [\mathbf{X}_1 \; \mathbf{X}_2] = \begin{pmatrix} x_{1,1} & x_{1,2} \\ \vdots & \vdots \\ x_{n,1} & x_{n,2} \end{pmatrix}.$$
X <- cbind(Age, Height)
dim(X)
## [1] 1034    2
We can also use this notation to denote an arbitrary number of covariates (k) with the following n × k matrix:
$$X = \begin{pmatrix} x_{1,1} & \cdots & x_{1,k} \\ x_{2,1} & \cdots & x_{2,k} \\ \vdots & \ddots & \vdots \\ x_{n,1} & \cdots & x_{n,k} \end{pmatrix}.$$
You can simulate such a matrix in R now using matrix, instead of cbind:
n <- 1034; k <- 5
X <- matrix(1:(n*k), n, k)
head(X)
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    1 1035 2069 3103 4137
## [2,]    2 1036 2070 3104 4138
## [3,]    3 1037 2071 3105 4139
## [4,]    4 1038 2072 3106 4140
## [5,]    5 1039 2073 3107 4141
## [6,]    6 1040 2074 3108 4142
dim(X)
## [1] 1034    5
To compute the sample average and variance of a dataset, we use the formulas:
$$\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i$$
and
$$\text{var}(Y) = \frac{1}{n-1}\sum_{i=1}^n (Y_i - \bar{Y})^2.$$
These can be expressed as matrix multiplications. Define the 1 × n matrix
$$A = \frac{1}{n}\begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix}.$$
This implies that:
$$AY = \frac{1}{n}\begin{pmatrix} 1 & 1 & \cdots & 1 \end{pmatrix}\begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix} = \frac{1}{n}\sum_{i=1}^n Y_i = \bar{Y}.$$
##          [,1]
## [1,] 73.69729
# double-check the result
mean(data$Height)
## [1] 73.69729
Note: Multiplying the transpose of a matrix with another matrix is very common
in statistical modeling and computing. Thus, there is an R function for this
operation, crossprod():
barY=crossprod(A, Y) / n
print(barY)
## [,1]
## [1,] 73.69729
Variance
Similarly, the sample variance can be computed with matrix multiplication. Define the centered vector
$$Y' = \begin{pmatrix} Y_1 - \bar{Y} \\ \vdots \\ Y_n - \bar{Y} \end{pmatrix}, \qquad \frac{1}{n-1}{Y'}^\top Y' = \frac{1}{n-1}\sum_{i=1}^n (Y_i - \bar{Y})^2.$$
A crossprod with only one matrix input computes ${Y'}^\top Y'$. Thus, to compute the variance, we can simply type:
Y1 <- y - barY
crossprod(Y1) / (n-1)   # Y1.man <- (1/(n-1)) * t(Y1) %*% Y1
##          [,1]
## [1,] 5.316798
Define
$$Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}, \quad X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \quad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}, \quad \text{and} \quad \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}.$$
Then the linear model can be written as:
$$\begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix} = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix},$$
or simply:
$$Y = X\beta + \varepsilon,$$
which is a brief way to write the same model equation.
The optimal solution is achieved when all residuals (ε_i) are as small as possible (indicating a good model fit). This corresponds to the least squares (LS) solution of the matrix equation Y = Xβ + ε, which can be obtained by minimizing the residual squared error:
$$\hat{\beta} = \arg\min_{\beta}\, (Y - X\beta)^\top (Y - X\beta).$$
We can determine the values of β by minimizing this expression, using calculus to find the minimum of the cost (objective) function; more about optimization is in Chap. 22.
There are a series of rules that permit us to solve partial derivative equations in
matrix notation. By setting the derivative of a cost function to zero and solving for
the unknown parameter β, we obtain a candidate solution(s). The derivative of the
above equation is:
$$-2X^\top\left(Y - X\hat{\beta}\right) = 0$$
$$X^\top X \hat{\beta} = X^\top Y$$
$$\hat{\beta} = \left(X^\top X\right)^{-1} X^\top Y,$$
which represents the desired solution. Hat notation (^) is used to denote estimates.
For instance, the solution for the unknown β parameters is denoted by the (data-driven) estimate $\hat{\beta}$.
The least squares minimization works because minimizing a function
corresponds to finding the roots of its (first) derivative. In the ordinary least
squares (OLS), we square the residuals:
$$(Y - X\beta)^\top (Y - X\beta).$$
Notice that the minima of f(x) and f²(x) are achieved at the same roots of f′(x), as the derivative of f²(x) is 2f(x)f′(x).
Now we can see the results of this by computing the estimated β̂_0 + β̂_1 x for any value of x (Fig. 5.1):
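The code for this manual computation is not fully preserved here; a minimal sketch of the first-principles least squares fit and its plot (assuming x holds the heights and y the weights from the baseball data):

x <- data$Height; y <- data$Weight            # simple model: Weight ~ Height
X <- cbind(1, x)                              # design matrix with an intercept column
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y  # (X'X)^{-1} X'y
beta_hat
plot(x, y, xlab = "Height", ylab = "Weight")  # scatterplot as in Fig. 5.1
abline(beta_hat[1], beta_hat[2], col = 2)     # overlay the fitted line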
The closed-form solution β̂ = (X^⊤X)^{-1}X^⊤Y is one of the most widely used results in data analytics. One of the advantages of this approach is that we can use it in many different situations.
The R lm Function
R has a very convenient function that fits these models. We will learn more about
this function later, but here is a preview:
# X <- cbind(data$Height, data$Age)   # more complicated model
X <- data$Height                      # simple model
y <- data$Weight
fit <- lm(y ~ X); fit
Note that we obtain the same estimates of the solution using either the built-in
lm() function or using first-principles.
eigen(m11)
## $values
## [1] 1 1
##
## $vectors
##      [,1] [,2]
## [1,]    0   -1
## [2,]    1    0
We can use R to verify that (λI_n - A)v = 0 for each eigenpair (λ, v).
(eigen(m11)$values * diag(2) - m11) %*% eigen(m11)$vectors
## [,1] [,2]
## [1,] 0 0
## [2,] 0 0
Other useful matrix operations are listed in Table 5.2.
Some flexible matrix operations can help us save time calculating row or column
averages. For example, column averages can be calculated by the following matrix
operation.
$$\left(\frac{1}{N}\;\; \frac{1}{N}\;\; \cdots\;\; \frac{1}{N}\right)\begin{pmatrix} X_{1,1} & \cdots & X_{1,p} \\ X_{2,1} & \cdots & X_{2,p} \\ \vdots & & \vdots \\ X_{N,1} & \cdots & X_{N,p} \end{pmatrix} = \left(\bar{X}_1\;\; \bar{X}_2\;\; \cdots\;\; \bar{X}_p\right).$$
We see that fast calculations can be done by multiplying a matrix in the front or
at the back of the original feature matrix. In general, multiplying a vector in front
can give us the following equation.
$$AX = \left(a_1\; a_2\; \cdots\; a_N\right)\begin{pmatrix} X_{1,1} & \cdots & X_{1,p} \\ X_{2,1} & \cdots & X_{2,p} \\ \vdots & & \vdots \\ X_{N,1} & \cdots & X_{N,p} \end{pmatrix} = \left(\sum_{i=1}^{N} a_i X_{i,1}\;\; \sum_{i=1}^{N} a_i X_{i,2}\;\; \cdots\;\; \sum_{i=1}^{N} a_i X_{i,p}\right).$$

Setting a_i = 1/N, for all i, recovers the weights
needed to obtain the column averages. We may visualize the column means using
a histogram (Fig. 5.2).
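The code that constructed the colmeans object used below is not shown in this excerpt; a sketch of the matrix-multiplication approach just described (assuming gene1 is the gene-by-subject feature matrix referenced later):

N <- nrow(gene1)                              # number of rows (genes)
avg_vec <- matrix(1/N, nrow = 1, ncol = N)    # 1 x N row vector of equal weights 1/N
colmeans <- avg_vec %*% as.matrix(gene1)      # 1 x p vector of per-subject column means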
colmeans<-as.matrix(colmeans)
hist(colmeans)
The histogram shows that the distribution is mostly symmetric and bell shaped.
We can address harder problems using matrix notation. For example, let’s
calculate the differences between genders for each gene. First, we need to get the
gender information for each subject.
gender <- info[, c(3, 4)]
rownames(gender) <- gender$filename
Then, we have to reorder the columns to make them consistent with the feature matrix gene1.
gender <- gender[colnames(gene1), ]
After that, we will construct the design matrix and multiply it by the feature matrix. The plan is to multiply the following two matrices:
$$\begin{pmatrix} X_{1,1} & \cdots & X_{1,p} \\ \vdots & & \vdots \\ X_{N,1} & \cdots & X_{N,p} \end{pmatrix}\begin{pmatrix} \frac{1}{p} & a_1 \\ \vdots & \vdots \\ \frac{1}{p} & a_p \end{pmatrix} = \begin{pmatrix} \bar{X}_1 & \text{gender.diff}_1 \\ \vdots & \vdots \\ \bar{X}_N & \text{gender.diff}_N \end{pmatrix},$$

where a_i = -1/N_F if the ith subject is female and a_i = 1/N_M if the subject is male. Thus, we give each female and each male the same weight before the subtraction. The first output column, X̄_i, is the average of the ith gene across all subjects, and gender.diff_i represents the gender difference for the ith gene.
table(gender$sex)
##
## F M
## 86 122
gender$vector <- ifelse(gender$sex=="F", -1/86, 1/122)
vec1 <- as.matrix(data.frame(rowavg=rep(1/ncol(gene1), ncol(gene1)),
                             gender.diff=gender$vector))
gender.matrix <- gene1 %*% vec1
gender.matrix[1:15, ]
##         rowavg  gender.diff
##  [1,] 6.383263 -0.003209464
##  [2,] 7.091630 -0.031320597
##  [3,] 5.477032  0.064806978
##  [4,] 7.584042 -0.001300152
##  [5,] 3.197687  0.015265502
##  [6,] 7.338204  0.078434938
##  [7,] 4.232132  0.008437864
##  [8,] 3.716460  0.018235650
##  [9,] 2.810554 -0.038698101
## [10,] 5.208787  0.020219666
## [11,] 6.498989  0.025979654
## [12,] 5.292992 -0.029988980
## [13,] 7.069081  0.038575442
## [14,] 5.952406  0.030352616
## [15,] 7.247116  0.046020066
5.9 Multivariate Linear Regression
$$\begin{pmatrix} Y_1 \\ \vdots \\ Y_N \end{pmatrix} = \begin{pmatrix} 1 & X_{1,1} & \cdots & X_{1,p} \\ \vdots & \vdots & & \vdots \\ 1 & X_{N,1} & \cdots & X_{N,p} \end{pmatrix}\begin{pmatrix} \beta_0 \\ \vdots \\ \beta_p \end{pmatrix} + \begin{pmatrix} E_1 \\ \vdots \\ E_N \end{pmatrix}.$$
Y = Xβ + E implies that X^⊤Y ≈ X^⊤(Xβ) = (X^⊤X)β (ignoring the noise term E), and thus the solution for β is obtained by multiplying both sides by (X^⊤X)^{-1}:

$$\hat{\beta} = \left(X^{\top}X\right)^{-1}X^{\top}Y.$$
# Alternatively, you can also download the data in CSV format (teamsData.csv) from
# http://umich.instructure.com/courses/38100/files/folder/data
Teams <- read.csv('https://umich.instructure.com/files/2798317/download?download_frd=1',
                  header=T)
dat<-Teams[Teams$G==162&Teams$yearID<2002, ]
dat$Singles<-dat$H-dat$X2B-dat$X3B-dat$HR
dat<-dat[, c("R", "Singles", "HR", "BB")]
head(dat)
## R Singles HR BB
## 439 505 997 11 344
## 1367 683 989 90 580
## 1368 744 902 189 681
## 1378 652 948 156 516
## 1380 707 1017 92 620
## 1381 632 1020 126 504
Now let’s do a simple example. We will use runs scored (R) as the response
variable and batters walks (BB) as the independent variable. Also, we need to add a
column of 1’s to the X matrix.
Y <- dat$R
X <- cbind(rep(1, nrow(dat)), dat$BB)   # column of 1's for the intercept
X[1:10, ]
## [,1] [,2]
## [1,] 1 344
## [2,] 1 580
## [3,] 1 681
## [4,] 1 516
## [5,] 1 620
## [6,] 1 504
## [7,] 1 498
## [8,] 1 502
## [9,] 1 493
## [10,] 1 556
Let's solve for the effect-sizes (the beta coefficients) using β̂ = (X^⊤X)^{-1}X^⊤Y:
beta <- solve(t(X) %*% X) %*% t(X) %*% Y
beta
## [,1]
## [1,] 326.8241628
## [2,] 0.7126402
To examine this manual calculation, we refit the linear equation using the lm() function. After comparing the computation times, we may notice that the matrix calculation is more time-efficient.
fit <- lm(R ~ BB, data=dat)
# fit <- lm(R ~ ., data=dat)
# '.' indicates all other variables; very useful when fitting models with many predictors
fit
##
## Call:
## lm(formula = R ~ BB, data = dat)
##
## Coefficients:
## (Intercept)           BB
##    326.8242       0.7126
summary(fit)
##
## Call:
## lm(formula = R ~ BB, data = dat)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -187.788  -53.977   -2.995   55.649  258.614
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 326.82416   22.44340   14.56   <2e-16 ***
## BB            0.71264    0.04157   17.14   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 76.95 on 661 degrees of freedom
## Multiple R-squared:  0.3078, Adjusted R-squared:  0.3068
## F-statistic:   294 on 1 and 661 DF,  p-value: < 2.2e-16

system.time(fit <- lm(R ~ BB, data=dat))
##    user  system elapsed
##       0       0       0

system.time(beta <- solve(t(X) %*% X) %*% t(X) %*% Y)
##    user  system elapsed
##       0       0       0
We can visualize the relationship between R and BB by drawing a scatter plot
(Fig. 5.3).
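The plotting code for Fig. 5.3 is not preserved here; a sketch, assuming beta holds the manually computed coefficients from above:

plot(dat$BB, dat$R, xlab = "BB (walks)", ylab = "R (runs scored)")
abline(beta[1], beta[2], col = 2)   # red regression line from the matrix calculation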
Fig. 5.4 3D scatterplot of walks (BB), homeruns (HR), and runs (R) by batters, using the baseball dataset
In Fig. 5.3, the red line is the regression line obtained from the matrix calculation. The matrix approach still works when we have multiple independent variables. Next, we will add another variable, HR, to the model (Fig. 5.4).
X <- cbind(rep(1, nrow(dat)), dat$BB, dat$HR)   # intercept, walks, and homeruns
beta <- solve(t(X) %*% X) %*% t(X) %*% Y
beta
## [,1]
## [1,] 287.7226756
## [2,] 0.3897178
## [3,] 1.5220448
# install.packages("scatterplot3d")
library(scatterplot3d)
scatterplot3d(dat$BB, dat$HR, dat$R)
We can also obtain the covariance matrix for our features using matrix operations.
Suppose
$$X = \begin{pmatrix} X_{1,1} & \cdots & X_{1,K} \\ X_{2,1} & \cdots & X_{2,K} \\ \vdots & & \vdots \\ X_{N,1} & \cdots & X_{N,K} \end{pmatrix} = \left[X_1; X_2; \ldots; X_N\right]^{\top}.$$
Then the covariance matrix is Σ = (Σ_{i,j}), with

$$\Sigma_{i,j} = \frac{1}{N-1}\sum_{m=1}^{N}\left(x_{m,i} - \bar{x}_i\right)\left(x_{m,j} - \bar{x}_j\right),$$

where

$$\bar{x}_i = \frac{1}{N}\sum_{m=1}^{N} x_{m,i}, \qquad i = 1, \ldots, K.$$

In general,

$$\Sigma = \frac{1}{N-1}\left(X - \bar{X}\right)^{\top}\left(X - \bar{X}\right).$$
x <- matrix(c(4.0, 4.2, 3.9, 4.3, 4.1, 2.0, 2.1, 2.0, 2.1, 2.2,
              0.60, 0.59, 0.58, 0.62, 0.63), ncol=3)
x
## [,1] [,2] [,3]
## [1,] 4.0 2.0 0.60
## [2,] 4.2 2.1 0.59
## [3,] 3.9 2.0 0.58
## [4,] 4.3 2.1 0.62
## [5,] 4.1 2.2 0.63
Assume we want to get the sample covariance matrix of this 5 × 3 feature matrix x.
Notice that we have three features and five observations in this matrix. Let's get the column means first. Then, we repeat each column mean 5 times to match the layout of the feature matrix. Finally, we can plug everything into the formula above.
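The column-mean computation itself is not shown in this excerpt; a minimal sketch would be:

x.bar <- colMeans(x)   # sample mean of each of the three features
x.bar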
x.bar <- matrix(rep(x.bar, each=5), nrow=5)
S <- 1/4 * t(x - x.bar) %*% (x - x.bar)
S
##         [,1]    [,2]    [,3]
## [1,] 0.02500 0.00750 0.00175
## [2,] 0.00750 0.00700 0.00135
## [3,] 0.00175 0.00135 0.00043
In the covariance matrix, S[i,i] is the variance of the ith feature and S[i,j] is the
covariance of ith and jth features.
Compare this to the automated calculation of the variance-covariance matrix.
autoCov <- cov(x)
autoCov
##         [,1]    [,2]    [,3]
## [1,] 0.02500 0.00750 0.00175
## [2,] 0.00750 0.00700 0.00135
## [3,] 0.00175 0.00135 0.00043
5.11 Assignments: 5. Linear Algebra & Matrix Computing
Validate that (A_{k,n} B_{n,m})^⊤ = B^⊤_{m,n} A^⊤_{n,k}, by using math notation, as well as by using R functions.
Demonstrate the differences between scalar multiplication (*) and matrix multiplication (%*%) for numbers, vectors, and matrices (second-order tensors).
5.11.3 Matrix Equations
Write a simple matrix solver (b = Ax, i.e., x = A^{-1}b) and validate its accuracy using the R command solve(A, b). Solve this system of equations:

2a - b + 2c = 5
a - 2b + c = 3
a + b - c = 2
5.11.4 Least Square Estimation
Use the SOCR Knee Pain dataset, extract the RB = Right-Back locations (x, y), and fit a linear model for the vertical locations (y) in terms of the horizontal locations (x). Display the linear model on top of the scatter plot of the paired data. Comment on the model you obtain.
Validate the multiplication transposition formula, (A_{k,n} B_{n,m})^⊤ = B_{n,m}^⊤ A_{k,n}^⊤, by using math notation, as well as computationally using R and some example matrices. E.g., you can try
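(The example matrices are not preserved in this excerpt; a hypothetical pair of small matrices for testing the identity might be:)

A <- matrix(1:6, nrow = 2)               # a 2 x 3 example matrix
B <- matrix(7:18, nrow = 3)              # a 3 x 4 example matrix
all.equal(t(A %*% B), t(B) %*% t(A))     # should return TRUE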
Use the SOCR Iris Sepal Petal Classes data and extract the rows of the setosa flowers. Compute the sample mean and variance of each variable; then calculate the sample covariance and correlation between sepal width and sepal height.
5.11.8 Least Square Estimation
Use the SOCR Knee Pain dataset, extract the RB = Right-Back locations (x, y), and fit a linear model for the vertical location (y) in terms of the horizontal location (x). Display the linear model on top of the scatter plot of the paired data. Comment on the model you obtained.
Chapter 6
Dimensionality Reduction

Now that we have most of the fundamentals covered in the previous chapters, we can delve into the first data analytic method, dimension reduction, which reduces the number of features when modeling a very large number of variables.
Dimension reduction can help us extract a set of “uncorrelated” principal
variables and reduce the complexity of the data. We are not simply picking some
of the original variables. Rather, we are constructing new “uncorrelated”
variables as functions of the old features.
Dimensionality reduction techniques enable exploratory data analyses by
reducing the complexity of the dataset, still approximately preserving important
properties, such as retaining the distances between cases or subjects. If we are
able to reduce the complexity down to a few dimensions, we can then plot the
data and untangle its intrinsic characteristics.
We will (1) start with a synthetic example demonstrating the reduction of a 2D
data into 1D; (2) explain the notion of rotation matrices; (3) show examples of
principal component analysis (PCA), singular value decomposition (SVD),
independent component analysis (ICA) and factor analysis (FA); and (4) present a
Parkinson’s disease case-study at the end. The supplementary DSPA electronic
materials for this chapter also include the theory and practice of t-Distributed
Stochastic Neighbor Embedding (t-SNE), which represents high-dimensional data
via projections into non-linear low-dimensional manifolds.
Fig. 6.1 Scatterplot of paired twin heights. The red points show the heights of the first two pairs of twins
library(MASS)
set.seed(1234)
n <- 1000
y = t(mvrnorm(n, c(0, 0), matrix(c(1, 0.95, 0.95, 1), 2, 2)))
The simulated data follow a bivariate Normal (BVN) model for the paired twin heights:

$$y = \begin{bmatrix} y[1,] = \text{Twin}_1\,\text{Height} \\ y[2,] = \text{Twin}_2\,\text{Height} \end{bmatrix} \sim BVN\left(\mu = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \Sigma = \begin{bmatrix} 1 & 0.95 \\ 0.95 & 1 \end{bmatrix}\right).$$
plot(y[1, ], y[2, ], xlab="Twin 1 (standardized height)",
     ylab="Twin 2 (standardized height)", xlim=c(-3, 3), ylim=c(-3, 3))
points(y[1, 1:2], y[2, 1:2], col=2, pch=16)   # plot the first 2 points
d = dist(t(y))
as.matrix(d)[1, 2]
## [1] 2.100187
Fig. 6.3 Scatterplots of the transformed twin heights, compare to Fig. 6.2
Of course, matrix linear algebra notation can be used to represent this affine transformation of the data. Here we can see that to get z we multiplied y by the matrix:

$$A = \begin{pmatrix} 1/2 & 1/2 \\ 1 & -1 \end{pmatrix} \Longrightarrow z = A\,y.$$
We can invert this transform by multiplying the result by the inverse matrix A^{-1} as follows:

$$A^{-1} = \begin{pmatrix} 1 & 1/2 \\ 1 & -1/2 \end{pmatrix} \Longrightarrow y = A^{-1} z.$$
You can try this in R:
A <- matrix(c(1/2, 1, 1/2, -1), nrow=2, ncol=2); A   # define a matrix
##      [,1] [,2]
## [1,]  0.5  0.5
## [2,]  1.0 -1.0
Applying this transformation, MA y = ((y_1 + y_2)/2, y_1 - y_2)^⊤, we can compare the pairwise distances before and after the mapping:
MA <- matrix(c(1/2, 1, 1/2, -1), 2, 2)
plot(as.numeric(d), as.numeric(d_MA))
abline(0, 1, col=2)
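Note that the transformed distances d_MA are not defined in this excerpt; presumably they were computed before the plot above along these lines:

z_MA <- MA %*% y        # transform the twin heights with MA
d_MA <- dist(t(z_MA))   # pairwise distances in the transformed space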
Observe that this MA transformation is not an isometry – the distances are not preserved. Here is one example with two points, v_1 = (v_{1x} = 0, v_{1y} = 1)^⊤ and v_2 = (v_{2x} = 1, v_{2y} = 0)^⊤, which are a distance √2 apart in their native space but are separated further by the transformation.
##      [,1] [,2]
## [1,]  0.5  0.5
## [2,]  1.0 -1.0
##      [,1] [,2]
## [1,]  0.5    1
## [2,]  0.5   -1
##      [,1] [,2]
## [1,]    1  0.5
## [2,]    1 -0.5
##      [,1] [,2]
## [1,] -0.5  0.5
## [2,] -0.5 -0.5
v1 <- c(0,1); v2 <- c(1,0); rbind(v1, v2)
##    [,1] [,2]
## v1    0    1
## v2    1    0
Then,

$$Z = AY + \eta \sim BVN\left(\eta + A\mu,\; A\Sigma A^{\top}\right).$$

Notice that,
where P = (P_{j,1} - P_{i,1}, ..., P_{j,T} - P_{i,T})^⊤, and P_i and P_j are any two points in T dimensions.
Let's use a two-dimensional orthogonal matrix to illustrate this concept. Set

$$A = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

It's easy to verify that A is an orthogonal (2D rotation) matrix.
The simplest way to test the isometry is to perform the linear transformation
directly (Fig. 6.5).
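The transformation code itself is not included in this excerpt; a sketch, assuming the rotation matrix A defined above:

A <- 1/sqrt(2) * matrix(c(1, 1, 1, -1), 2, 2)   # orthogonal (rotation) matrix
z <- A %*% y                                    # rotate the twin-height data
d2 <- dist(t(z))                                # pairwise distances after the rotation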
plot(as.numeric(d), as.numeric(d2))
abline(0, 1, col=2)
We can observe that the distances computed using the original data are
preserved after the transformation. This transformation is called a rotation
(isometry) of y. Note the difference compared to the earlier plot, Fig. 6.4.
An alternative method is to simulate from the joint distribution of Z = (Z_1, Z_2)^⊤. As we have mentioned above:

$$Z = AY + \eta \sim BVN\left(\eta + A\mu,\; A\Sigma A^{\top}\right),$$
Fig. 6.5 The rotation is an isometry, as illustrated by the perfect linear relation between the native-space and the transformed pairs of twin-height distances
Fig. 6.6 QQ-plot of the distances between twin heights (d) and distances between the simulated bivariate Normal distribution data (d3)
set.seed(2017)
zz1 = rnorm(1000, 0, sd = sqrt(1.95))
zz2 = rnorm(1000, 0, sd = sqrt(0.05))
zz = rbind(zz1, zz2)
d3 = dist(t(zz))
qqplot(d, d3)
abline(a = 0, b = 1, col = 2)
We can observe that the distances computed using the original data and the
simulated data are the same (Figs. 6.7 and 6.8).
thelim <- c(-3, 3)
#par(mfrow=c(2,1))
plot(y[1, ], y[2, ], xlab="Twin 1 (standardized
height)", ylab="Twin 2 (standardized height)",
xlim=thelim, ylim=thelim)
Fig. 6.9 Comparing the twin distances, computed using just one dimension, following the
rotation transformation against the actual twin pair height distances. The strong linear relation
suggests that measuring distances in the native space is equivalent to measuring distances in
the transformed space, where we reduced the dimension of the data from 2D to 1D
6.3 Notation
In the notation above, the rows represent variables and the columns represent cases. In general, however, rows represent cases and columns represent variables. Hence, in our example, Y would be transposed to an N × 2 matrix. This is the most common way to represent data: individuals in the rows, features in the columns. In genomics, though, it is more common to put the subjects in the columns; for example, genes are rows and samples are columns. The sample covariance matrix is usually denoted by X^⊤X and has cells representing the covariance between pairs of units. For this to be the case, we need the rows of X to represent the subjects and the columns to represent the variables, or features. Here, we would have to compute YY^⊤ instead, following the rescaling.
Let's consider the simplest situation, where we have n observations {p_1, p_2, ..., p_n} with two features each, p_i = (x_i, y_i). When we draw them on a plot, we use the x-axis and y-axis for positioning. However, we can define our own coordinate system using principal components (Fig. 6.10).
As illustrated on the graph, the first PC, pc_1, is a minimum-distance fit in the feature space. The second PC is a minimum-distance fit to a line perpendicular to the first PC. Similarly, the third PC would be a minimum-distance fit to a line perpendicular to all previous PCs. In our 2D case, two PCs are the most we can have. In higher-dimensional spaces, we have to decide how many PCs are needed for the best performance.
In general, the formula for the first PC is

$$pc_1 = a_1^{\top} X = \sum_{i=1}^{N} a_{i,1} X_i,$$

where X_i is an n × 1 vector representing a column of the matrix X (the complete design matrix with a total of n observations and N features). The weights a_1 = {a_{1,1}, a_{2,1}, ..., a_{N,1}} are chosen to maximize the variance of pc_1. Following this rule, the kth PC is

$$pc_k = a_k^{\top} X = \sum_{i=1}^{N} a_{i,k} X_i,$$

where a_k = {a_{1,k}, a_{2,k}, ..., a_{N,k}} has to satisfy additional constraints (unit norm, and zero correlation with the previous components).
Let’s figure out how to find a1. To begin, we need to express the variance
of our first principal component using the variance covariance matrix of X:
$$\mathrm{Var}(pc_1) = E\left(pc_1^2\right) - \left(E(pc_1)\right)^2 = \sum_{i,j=1}^{N} a_{i,1}a_{j,1}E\left(x_i x_j\right) - \sum_{i,j=1}^{N} a_{i,1}a_{j,1}E(x_i)E(x_j) = \sum_{i,j=1}^{N} a_{i,1}a_{j,1}S_{i,j},$$

where S_{i,j} = E(x_i x_j) - E(x_i)E(x_j).
This implies Var(pc_1) = a_1^⊤ S a_1, where S = (S_{i,j}) is the covariance matrix of X = {X_1, ..., X_N}. Since a_1 maximizes Var(pc_1) subject to the constraint a_1^⊤ a_1 = 1, we can introduce a Lagrange multiplier λ and maximize

$$a_1^{\top} S a_1 - \lambda\left(a_1^{\top} a_1 - 1\right),$$

where the part after the subtraction vanishes when the constraint holds. Taking the derivative of this expression with respect to a_1 and setting it to zero yields (S - λI_N)a_1 = 0.
In Chap. 5, we showed that the optimal a_1 is the eigenvector of S corresponding to its largest eigenvalue. Hence, pc_1 retains the largest amount of variation in the sample. Likewise, a_k is the eigenvector of S corresponding to its kth largest eigenvalue.
PCA requires data matrix to have zero empirical means for each column. That
is, the sample mean of each column has been shifted to zero.
Let's use a subset (N = 33) of the Parkinson's Progression Markers Initiative (PPMI) database to demonstrate the relationship between S and the PC loadings. First, we need to import the dataset into R and delete the patient ID column.
library(rvest)
wiki_url <- read_html("http://wiki.socr.umich.edu/index.php/SMHS_PCA_ICA_FA")
html_nodes(wiki_url, "#content")
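The extraction of the pd.sub data frame and the column means mu used below is not preserved in this excerpt; presumably something along these lines was used (the table index is a hypothetical assumption):

pd.sub <- html_table(html_nodes(wiki_url, "table")[[1]])   # read the (first) wiki table
pd.sub <- pd.sub[, -1]                                     # drop the patient-ID column
mu <- apply(pd.sub, 2, mean)                               # column means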
pd.center<-as.matrix(pd.sub)-mean(mu)
S<-cov(pd.center)
eigen(S)
## $values
## [1] 1.315073e+02 1.178340e+01 6.096920e+00 1.424351e+00
6.094592e-02
## [6] 8.035403e-03
##
## $vectors
##            [,1]  [,2]  [,3]  [,4]  [,5]
Next, we can fit the PCA model with the prcomp() function; the model information is stored in pca1, and pca1$rotation provides the loadings for each PC.
pca1<-prcomp(as.matrix(pd.sub), center = T)
summary(pca1)
## Importance of components:
##                            PC1    PC2     PC3     PC4    PC5     PC6
## Standard deviation     11.4677 3.4327 2.46919 1.19346 0.2469 0.08964
## Proportion of Variance  0.8716 0.0781 0.04041 0.00944 0.0004 0.00005
## Cumulative Proportion   0.8716 0.9497 0.99010 0.99954 1.0000 1.00000

pca1$rotation
##                                           PC1           PC2          PC3
## Top_of_SN_Voxel_Intensity_Ratio   0.007460885 -0.0182022093  0.016893318
## Side_of_SN_Voxel_Intensity_Ratio  0.005800877  0.0006155246  0.004186177
## Part_IA                          -0.080839361 -0.0600389904 -0.027351225
## Part_IB                          -0.229718933 -0.2817718053 -0.929463536
## Part_II                          -0.282109618 -0.8926329596  0.344508308
## Part_III                         -0.927911126  0.3462292153  0.127908417
##                                          PC4         PC5          PC6
## Top_of_SN_Voxel_Intensity_Ratio   0.02071859 -0.97198980 -0.232667561
## Side_of_SN_Voxel_Intensity_Ratio  0.01552971 -0.23234862  0.972482080
## Part_IA                           0.99421646  0.02352324 -0.009618592
## Part_IB                          -0.06088782 -0.01466136  0.003019008
## Part_II                          -0.06772403  0.01764367  0.006061772
## Part_III                         -0.05068855 -0.01305167  0.002456374
Here, the loadings are just the eigenvectors multiplied by -1. They represent the same lines in the 6D space (we have six columns in the original data); the multiplier -1 simply flips the direction along each line. For further comparisons, we can load the factoextra package to get the eigenvalues of the PCs.
# install.packages("factoextra")
library("factoextra")
eigen <- get_eigenvalue(pca1); eigen
##         eigenvalue variance.percent cumulative.variance.percent
## Dim.1 1.315073e+02     87.159638589                    87.15964
## Dim.2 1.178340e+01      7.809737384                    94.96938
## Dim.3 6.096920e+00      4.040881920                    99.01026
## Dim.4 1.424351e+00      0.944023059                    99.95428
## Dim.5 6.094592e-02      0.040393390                    99.99467
## Dim.6 8.035403e-03      0.005325659                   100.00000
The eigenvalues correspond to the amount of variation explained by each principal component (PC); they coincide with the eigenvalues of the S matrix.
To see detailed information about the variances that each PC explains, we
utilize the plot() function. We can also visualize the PC loadings (Figs. 6.11, 6.12,
and 6.13).
Fig. 6.13 A more elaborate biplot of the same Parkinson’s disease dataset
plot(pca1)
The first few PCs suffice to represent the data. In this case, the dimension of the data is substantially reduced.
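The biplot itself (Figs. 6.12 and 6.13) is presumably generated with a call along these lines:

biplot(pca1)   # project the cases and the variable loading vectors onto PC1 and PC2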
Here, the biplot uses PC1 and PC2 as the axes and red vectors to represent the directions of the variables, with the loadings as weights. It helps us visualize how the loadings are used to rearrange the structure of the data.
Next, let’s try to obtain a bootstrap test for the confidence interval of the
explained variance (Fig. 6.14).
set.seed(12)
num_boot = 1000
bootstrap_it = function(i) {
  data_resample = pd.sub[sample(1:nrow(pd.sub), nrow(pd.sub), replace=TRUE), ]
  p_resample = princomp(data_resample, cor = T)
  return(sum(p_resample$sdev[1:3]^2)/sum(p_resample$sdev^2))
}
pco = data.frame(per=sapply(1:num_boot, bootstrap_it))
quantile(pco$per, probs = c(0.025, 0.975))
Fig. 6.14 A histogram plot illustrating the proportion of the energy of the original dataset
accounted for by the first three principal components
6.6 Independent Component Analysis (ICA)

ICA models each observed signal as a linear mixture of independent source signals:

$$X_i = a_{i,1}s_1 + \cdots + a_{i,n}s_n,$$

or in matrix form,

$$X = As,$$

where X = (X_1, ..., X_n)^⊤, A = (a_1, ..., a_n)^⊤, a_i = (a_{i,1}, ..., a_{i,n}), and s = (s_1, ..., s_n)^⊤.
Note that s is obtained by maximizing the independence of the components. This
procedure is done by maximizing some independence objective function.
ICA assumes all of its components (s i) are non-Gaussian and independent of
each other.
We will now introduce the fastICA function in R. Its main arguments include the following (a usage sketch follows this list):
• X: the data matrix
• n.comp: the number of components to extract
• alg.typ: whether the components are extracted simultaneously (alg.typ == "parallel") or one at a time (alg.typ == "deflation")
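The simulation and decomposition code that produced the object a used below is not preserved in this excerpt; a minimal sketch in the spirit of the fastICA documentation (the source signals and mixing matrix are illustrative assumptions):

# install.packages("fastICA")
library(fastICA)
set.seed(1234)
S <- matrix(runif(2000), 1000, 2)               # two independent uniform source signals
A <- matrix(c(1, 1, -1, 3), 2, 2, byrow = TRUE) # a hypothetical 2 x 2 mixing matrix
X <- S %*% A                                    # observed mixed signals
a <- fastICA(X, n.comp = 2, alg.typ = "parallel")
head(a$S)                                       # recovered (estimated) independent components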
Finally, we can check the correlation of two components in the ICA result, S; it
is nearly 0.
cor(a$S)
##               [,1]          [,2]
## [1,]  1.000000e+00 -7.677818e-16
## [2,] -7.677818e-16  1.000000e+00
Fig. 6.15 Scatterplots of the raw data (left), illustrating the intrinsic relation in the simulated bivariate data, and the ICA-transformed data (right), showing random scattering
Fig. 6.17 Factor analysis results projecting the key features on the first two factor dimensions
Here, the p-value of 0.854 is very large, suggesting that we fail to reject the null hypothesis that two factors are sufficient. We can also visualize the loadings of all the variables (Fig. 6.17).
This plot displays factors 1 and 2 on the x-axis and y-axis, respectively.
$$X = UDV^{\top},$$
We can compare the output from the svd() function and the princomp()
function (another R function for PCA). Still, we are using the pd.sub dataset.
Before the SVD, we need to scale our data matrix.
# SVD output
df <- nrow(pd.sub) - 1
zvars <- scale(pd.sub)
z.svd <- svd(zvars)
z.svd$d / sqrt(df)
## [1] 1.7878123 1.1053808 0.7550519 0.6475685 0.5688743 0.5184536
z.svd$v
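The corresponding PCA fit, pca2, is not shown in this excerpt; given the comparison with the correlation-based loadings below, it was presumably obtained along these lines:

pca2 <- princomp(pd.sub, cor = TRUE)   # PCA on the standardized (correlation) scale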
loadings(pca2)
##
## Loadings:
##                                  Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6
## Top_of_SN_Voxel_Intensity_Ratio  -0.256 -0.713 -0.373 -0.105  0.477 -0.221
## Side_of_SN_Voxel_Intensity_Ratio -0.386 -0.472  0.357  0.433 -0.558
## Part_IA                           0.383 -0.373  0.710 -0.320  0.238  0.227
## Part_IB                           0.460 -0.112  0.794  0.292  0.226
## Part_II                           0.425 -0.342 -0.464 -0.262 -0.534  0.365
## Part_III                          0.498 -0.183 -0.844
##
##                Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6
## SS loadings     1.000  1.000  1.000  1.000  1.000  1.000
## Proportion Var  0.167  0.167  0.167  0.167  0.167  0.167
## Cumulative Var  0.167  0.333  0.500  0.667  0.833  1.000
When the correlation matrix is used for the calculation (cor=T), the V matrix of the SVD contains the loadings of the PCA.
First, all variables must be converted to numeric form. Second, we don't need the patient ID and time variables in the dimension reduction procedures.
Fig. 6.18 Barplot illustrating the decay of the eigenvalues (variances) corresponding to the PCA linear transformation of the variables in the Parkinson's disease dataset (Figs. 6.19 and 6.20)
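The construction of the PCA model referenced below is not preserved in this excerpt; presumably it was fit on the preprocessed PD data along these lines (only part of the printed summary survives, resuming at Comp.5):

pca.model <- princomp(pd_data, cor = TRUE)   # PCA of the rescaled PD variables
summary(pca.model)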
##                            Comp.5     Comp.6   Comp.7     Comp.8    Comp.9
## Standard deviation     1.18527282 1.15961464 1.135510 1.10882348 1.0761943
## Proportion of Variance 0.04531844 0.04337762 0.041593 0.03966095 0.0373610
## Cumulative Proportion  0.26136637 0.30474399 0.346337 0.38599794 0.4233590
##                           Comp.10    Comp.11    Comp.12    Comp.13
## Standard deviation     1.06687730 1.05784209 1.04026215 1.03067437
## Proportion of Variance 0.03671701 0.03609774 0.03490791 0.03426741
## Cumulative Proportion  0.46007604 0.49617378 0.53108169 0.56534910
##                          Comp.14    Comp.15    Comp.16    Comp.17
## Standard deviation     1.0259684 0.99422375 0.97385632 0.96688855
## Proportion of Variance 0.0339552 0.03188648 0.03059342 0.03015721
## Cumulative Proportion  0.5993043 0.63119078 0.66178421 0.69194141
##                           Comp.18    Comp.19    Comp.20    Comp.21
## Standard deviation     0.92687735 0.92376374 0.89853718 0.88924412
## Proportion of Variance 0.02771296 0.02752708 0.02604416 0.02550823
## Cumulative Proportion  0.71965437 0.74718145 0.77322561 0.79873384
##                           Comp.22    Comp.23    Comp.24    Comp.25
## Standard deviation     0.87005195 0.86433816 0.84794183 0.82232529
## Proportion of Variance 0.02441905 0.02409937 0.02319372 0.02181351
## Cumulative Proportion  0.82315289 0.84725226 0.87044598 0.89225949
##                           Comp.26    Comp.27    Comp.28    Comp.29
## Standard deviation     0.80703739 0.78546699 0.77505522 0.76624322
## Proportion of Variance 0.02100998 0.01990188 0.01937776 0.01893963
## Cumulative Proportion  0.91326947 0.93317135 0.95254911 0.97148875
##                           Comp.30    Comp.31
## Standard deviation     0.68806884 0.64063259
## Proportion of Variance 0.01527222 0.01323904
## Cumulative Proportion  0.98676096 1.00000000
plot(pca.model)
biplot(pca.model)
Fig. 6.19 Biplot of the PD variables onto the first two principal axes
Fig. 6.20 Enhanced biplot of the PD data explicitly labeling the patients and control volunteers
We can see that in real world examples PCs do not necessarily have an
“elbow” in the scree plot (Fig. 6.18). In our model, each PC explains about the
same amount of variation. Thus, it is hard to tell how many PCs, or factors, we
need to pick. This would be an ad hoc decision.
2. FA
Let’s set up a Cattell’s Scree test to determine the number of factors first.
Although the Cattell scree test suggests that we should use 14 factors, the actual fit shows that 14 is not enough. The previous PCA results suggest we need around 20 PCs to obtain a cumulative variance of 0.6. After a few trials, we find that 19 factors pass the chi-square test for a sufficient number of factors at the 0.05 level.
fa.model<-factanal(pd_data, 19, rotation="varimax")
fa.model
##
## Call:
## factanal(x = pd_data, factors = 19, rotation = "varimax")
##
## Uniquenesses:
##        L_caudate_ComputeArea             L_caudate_Volume
##                        0.840                        0.005
##        R_caudate_ComputeArea             R_caudate_Volume
##                        0.868                        0.849
##        L_putamen_ComputeArea             L_putamen_Volume
##                        0.791                        0.702
##        R_putamen_ComputeArea             R_putamen_Volume
##                        0.615                        0.438
##    L_hippocampus_ComputeArea         L_hippocampus_Volume
##                        0.476                        0.777
##    R_hippocampus_ComputeArea         R_hippocampus_Volume
##                        0.798                        0.522
##       cerebellum_ComputeArea            cerebellum_Volume
##                        0.137                        0.504
##  L_lingual_gyrus_ComputeArea       L_lingual_gyrus_Volume
##                        0.780                        0.698
##  R_lingual_gyrus_ComputeArea       R_lingual_gyrus_Volume
##                        0.005                        0.005
## L_fusiform_gyrus_ComputeArea      L_fusiform_gyrus_Volume
##                        0.718                        0.559
## R_fusiform_gyrus_ComputeArea      R_fusiform_gyrus_Volume
##                        0.663                        0.261
##                          Sex                       Weight
##                        0.829                        0.005
##                          Age                           Dx
##                        0.005                        0.005
##          chr12_rs34637584_GT          chr17_rs11868035_GT
##                        0.638                        0.721
##                 UPDRS_part_I                UPDRS_part_II
##                        0.767                        0.826
##               UPDRS_part_III
##                        0.616
##
## Loadings:
##                              Factor1 Factor2 Factor3 Factor4 Factor5
## L_caudate_ComputeArea
## L_caudate_Volume              0.980
## R_caudate_ComputeArea
## R_caudate_Volume
## L_putamen_ComputeArea
## L_putamen_Volume
## R_putamen_ComputeArea
## R_putamen_Volume
## L_hippocampus_ComputeArea
## L_hippocampus_Volume
## R_hippocampus_ComputeArea    -0.102
## R_hippocampus_Volume
## cerebellum_ComputeArea
## cerebellum_Volume
## L_lingual_gyrus_ComputeArea   0.107
## L_lingual_gyrus_Volume
## R_lingual_gyrus_ComputeArea   0.989
## R_lingual_gyrus_Volume        0.983
## L_fusiform_gyrus_ComputeArea
## L_fusiform_gyrus_Volume
## R_fusiform_gyrus_ComputeArea
## R_fusiform_gyrus_Volume
## Sex                          -0.111
## Weight                        0.983
## Age
## Dx                            0.965
## chr12_rs34637584_GT           0.124
## chr17_rs11868035_GT          -0.303
## UPDRS_part_I                 -0.260
## UPDRS_part_II
## UPDRS_part_III                0.332   0.104
##                              Factor6 Factor7 Factor8 Factor9 Factor10
## L_caudate_ComputeArea        -0.101
## L_caudate_Volume
…
##                Factor1 Factor2 Factor3 Factor4 Factor5 Factor6 Factor7
## SS loadings      1.282   1.029   1.026   1.019   1.013   1.011   0.921
## Proportion Var   0.041   0.033   0.033   0.033   0.033   0.033   0.030
## Cumulative Var   0.041   0.075   0.108   0.140   0.173   0.206   0.235
##                Factor8 Factor9 Factor10 Factor11 Factor12 Factor13
## SS loadings      0.838   0.782    0.687    0.647    0.615    0.587
## Proportion Var   0.027   0.025    0.022    0.021    0.020    0.019
## Cumulative Var   0.263   0.288    0.310    0.331    0.351    0.370
##                Factor14 Factor15 Factor16 Factor17 Factor18 Factor19
## SS loadings       0.569    0.566    0.547    0.507    0.475    0.456
## Proportion Var    0.018    0.018    0.018    0.016    0.015    0.015
## Cumulative Var    0.388    0.406    0.424    0.440    0.455    0.470
##
## Test of the hypothesis that 19 factors are sufficient.
## The chi square statistic is 54.51 on 47 degrees of freedom.
## The p-value is 0.211
This data matrix has relatively low correlation. Thus, it is not suitable for ICA.
cor(pd_data)[1:10, 1:10]
##                           L_caudate_ComputeArea L_caudate_Volume
## L_caudate_ComputeArea               1.000000000       0.05794916
## L_caudate_Volume                    0.057949162       1.00000000
## R_caudate_ComputeArea              -0.060576361       0.01076372
## R_caudate_Volume                    0.043994457       0.07245568
## L_putamen_ComputeArea               0.009640983      -0.06632813
## L_putamen_Volume                   -0.064299184      -0.11131525
## R_putamen_ComputeArea               0.040808105       0.04504867
## R_putamen_Volume                    0.058552841      -0.11830387
## L_hippocampus_ComputeArea          -0.037932760      -0.04443615
## L_hippocampus_Volume               -0.042033469      -0.04680825
…
##                           R_putamen_ComputeArea R_putamen_Volume
## L_caudate_ComputeArea                0.04080810      0.058552841
## L_caudate_Volume                     0.04504867     -0.118303868
## R_caudate_ComputeArea                0.07864348      0.007022844
## R_caudate_Volume                     0.05428747     -0.094336376
## L_putamen_ComputeArea                0.09049611      0.176353726
## L_putamen_Volume                     0.09093926     -0.057687648
## R_putamen_ComputeArea                1.00000000      0.052245264
## R_putamen_Volume                     0.05224526      1.000000000
## L_hippocampus_ComputeArea           -0.05508472      0.131800075
## L_hippocampus_Volume                -0.08866344     -0.001133570
##                           L_hippocampus_ComputeArea L_hippocampus_Volume
## L_caudate_ComputeArea                   -0.037932760          -0.04203347
## L_caudate_Volume                        -0.044436146          -0.04680825
## R_caudate_ComputeArea                    0.051359613           0.08578833
## R_caudate_Volume                         0.006123355          -0.07791361
## L_putamen_ComputeArea                    0.094604791          -0.06442537
## L_putamen_Volume                         0.025303302           0.04041557
## R_putamen_ComputeArea                   -0.055084723          -0.08866344
## R_putamen_Volume                         0.131800075          -0.00113357
## L_hippocampus_ComputeArea                1.000000000          -0.02633816
## L_hippocampus_Volume                    -0.026338163           1.00000000
6.11 Assignments: 6. Dimensionality Reduction
Load Allometric Relations in Plants data and perform a proper type conversion,
e.g., convert “Province” and “Born”.
References
Jolliffe, I.T. (2002) Principal Component Analysis, Springer.
Karhunen, J. and Hyvärinen, A. (2001) Independent Component Analysis, Wiley-Interscience.
Cattell, R.B. (1952) Factor analysis. New York: Harper.
Chapter 7
Lazy Learning: Classification Using Nearest Neighbors
In the next several chapters, we will concentrate on various, progressively more advanced, machine learning, classification, and clustering techniques. There are two categories of learning techniques we will explore: supervised (human-guided) classification and unsupervised (fully-automated) clustering. In general, supervised classification aims to identify or predict predefined classes and label new objects as members of specific classes, whereas unsupervised clustering attempts to group objects into sets without knowing a priori labels and to determine relationships between objects.
In the context of machine learning, classification refers to supervised learning
and clustering to unsupervised learning.
Unsupervised classification refers to methods where the outcomes (groupings
with common characteristics) are automatically derived based on intrinsic
affinities and associations in the data without prior human indication of
clustering. Unsupervised learning is purely based on input data (X) without
corresponding output labels. The goal is to model the underlying structure,
affinities, or distribution in the data in order to learn more about its intrinsic
characteristics. It is called unsupervised learning because there are no a priori
correct answers and there is no human guidance. Algorithms are left to their own devices to discover and present the interesting structure in the data. Clustering
(discovers the inherent groupings in the data) and association (discovers
association rules that describe the data) represent the core unsupervised learning
problems. The k-means clustering and the Apriori association rule provide
solutions to unsupervised learning problems.
Supervised classification methods utilize user provided labels representative
of specific classes associated with concrete observations, cases, or units. These
training classes/outcomes are used as references for the classification. Many
problems can be addressed by decision-support systems utilizing combinations of
supervised and unsupervised classification processes. Supervised learning
involves input variables (X) and an outcome variable (Y) to learn mapping
functions from the input to the output: Y ¼ f(X). The goal is to approximate the
mapping function so that when it is applied to new (validation) data (Z) it
(accurately) predicts the (expected) outcome variables (Y). It is called supervised learning because the learning process is guided (supervised) by the known outcome labels of the training data.
7.1 Motivation
Classification tasks can be very difficult when the features and target classes are numerous, complicated, or extremely difficult to understand. In scenarios where items of the same class tend to be homogeneous, nearest neighbor classification methods are well suited, because assigning unlabeled examples to their most similar labeled examples is fairly straightforward.
7.2 The kNN Algorithm Overview
Such classification methods can help us understand the story behind complicated case-studies, because machine learning methods generally make no distributional assumptions. However, this non-parametric nature makes the methods rely heavily on large and representative training datasets.
How to measure the similarity between records? We can measure the similarity as
the geometric distance between the two records. There are many distance
functions to choose from. Traditionally, we use Euclidean distance as our distance
function.
(An interactive kNN demonstration is available at https://codepen.io/gangtao/pen/PPoqMW.)
$$\mathrm{dist}(a, b) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2}.$$
When we have nominal features, a little trick is required to apply the Euclidean distance formula: we create dummy variables as indicators of the nominal features. A dummy variable equals one when the feature is present and zero otherwise. Here are two examples:
$$\text{Gender} = \begin{cases} 0 & \text{if } X = \text{male} \\ 1 & \text{if } X = \text{female} \end{cases}, \qquad \text{Cold} = \begin{cases} 0 & \text{if Temp} \geq 37F \\ 1 & \text{if Temp} < 37F \end{cases}.$$
The parameter k should be neither too large nor too small. If k is too large, the test record tends to be classified as the most popular class in the training records, rather than the most similar one. On the other hand, if k is too small, outliers or noisy data, like mislabeled training cases, might lead to errors in predictions.
A common practice is to calculate the square root of the number of training examples and use that number as k. A more robust way would be to try several values of k and select the one with the best classification performance.
Different features might have different scales. For example, we could have a measure of pain scored from one to ten or from one to one hundred. Such features should be transformed onto the same scale. Re-scaling makes each feature contribute to the distance in a relatively equal manner.
1. min-max normalization
$$X_{new} = \frac{X - \min(X)}{\max(X) - \min(X)}.$$
After re-scaling, Xnew would range between 0 and 1. It measures the distance
between each value and its minimum as a percentage. The larger a percentage the
further a value is from the minimum. 100% means that the value is at the
maximum.
2. z-Score Standardization
$$X_{new} = \frac{X - \mu}{\sigma} = \frac{X - \mathrm{Mean}(X)}{\mathrm{SD}(X)}.$$
This is based on the properties of normal distribution that we have talked about
in Chap. 3. After z-score standardization, the re-scaled feature will have
unbounded range. This is different from the min-max normalization that has a
limited range from 0 to 1. However, after z-score standardization, the new X is
assumed to follow a standard normal distribution.
The data we are using for this case study is the “Boys Town Study of Youth
Development”, which is the second case study, CaseStudy02_Boystown_Data.csv.
Variables:
• ID: Case subject identifier.
• Sex: dichotomous variable (1 ¼ male, 2 ¼ female).
• GPA: Interval-level variable with range 0–5 (0 - "A" average, 1 - "B" average, 2 - "C" average, 3 - "D" average, 4 - "E", 5 - "F").
• Alcohol use: Interval-level variable from 0 to 11 (drinks every day - never drank).
• Attitudes on drinking in the household: Alcatt- Interval level variable from 0 to
6 (totally approve - totally disapprove).
• DadJob: 1-yes, dad has a job: and 2- no.
• MomJob: 1-yes and 2-no.
• Parent closeness (example: In your opinion, does your mother make you feel
close to her?)
– Dadclose: Interval level variable 0–7 (usually-never)
– Momclose: interval level variable 0–7 (usually-never).
• Delinquency:
– larceny (how many times have you taken things >$50?): Interval level data
0–4 (never - many times),
– vandalism: Interval level data 0–7 (never - many times).
First, we need to load the data and do some data manipulation. Since we are using the Euclidean distance, dummy variables should be used. The following code transforms sex, dadjob, and momjob into dummy variables.
boystown <- read.csv("https://umich.instructure.com/files/399119/download?download_frd=1",
                     sep=" ")
boystown$sex <- boystown$sex - 1
boystown$dadjob <- -1*(boystown$dadjob - 2)
boystown$momjob <- -1*(boystown$momjob - 2)
str(boystown)
## 'data.frame':    200 obs. of  11 variables:
##  $ id        : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ sex       : num  0 0 0 0 1 1 0 0 1 1 ...
##  $ gpa       : int  5 0 3 2 3 3 1 5 1 3 ...
##  $ Alcoholuse: int  2 4 2 2 6 3 2 6 5 2 ...
##  $ alcatt    : int  3 2 3 1 2 0 0 3 0 1 ...
##  $ dadjob    : num  1 1 1 1 1 1 1 1 1 1 ...
##  $ momjob    : num  0 0 0 0 1 0 0 0 1 1 ...
##  $ dadclose  : int  1 3 2 1 2 1 3 6 3 1 ...
##  $ momclose  : int  1 4 2 2 1 2 1 2 3 2 ...
##  $ larceny   : int  1 0 0 3 1 0 0 0 1 1 ...
##  $ vandalism : int  3 0 2 2 2 0 5 1 4 0 ...
The str() function reports that we have 200 observations and 11 variables.
However, the ID variable is not important in this case study so we can delete it.
The variable of most interest is GPA. We can classify it into two categories.
Whoever gets a "C" or higher will be classified into the "above average"
category; Students who have average score below "C" will be in the "average or
below" category. These two are the classes of interest for this case study.
boystown<-boystown[, -1]
table(boystown$gpa)
##
## 0 1 2 3 4 5
## 30 50 54 40 14 12
boystown$grade<-boystown$gpa %in% c(3, 4, 5)
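The conversion of this logical indicator into the labeled classes used in the cross-tables below is not preserved here; presumably it was done along these lines:

boystown$grade <- factor(boystown$grade, levels = c(FALSE, TRUE),
                         labels = c("above_avg", "avg_or_below"))
table(boystown$grade)   # about 67% of students are above average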
We can see that most of the students are above average (67%).
The remaining ten features are all numeric but with different scales. If we use
these features directly, the ones with larger scale will have a greater impact on the
classification performance. Therefore, re-scaling is needed in this scenario.
summary(boystown[c("Alcoholuse", "larceny", "vandalism")])
## Alcoholuse larceny vandalism
## Min. : 0.00 Min. :0.00 Min. :0.0
## 1st Qu.: 2.00 1st Qu.:0.00 1st Qu.:1.0
## Median : 4.00 Median :1.00 Median :2.0
## Mean : 3.87 Mean :0.92 Mean :1.9
## 3rd Qu.: 5.00 3rd Qu.:1.00 3rd Qu.:3.0
## Max. :11.00 Max. :4.00 Max. :7.0
First let’s create a function of our own using the min-max normalization formula.
We can check the function using some trial vectors.
normalize<-function(x){
# be careful, the denominator may be trivial!
return((x-min(x))/(max(x)-min(x)))
}
# some test examples:
normalize(c(1, 2, 3, 4, 5))
## [1] 0.00 0.25 0.50 0.75 1.00
normalize(c(1, 3, 6, 7, 9))
## [1] 0.000 0.250 0.625 0.750 1.000
After confirming that it is working properly, we use the lapply() function to
apply the normalization to each element in a “list.” First, we need to make our
dataset into a list. The as.data.frame() function converts our data into a data frame,
which is a list of equal-length column vectors. Thus, each feature is an element in
the list that we can apply the normalization function to.
boystown_n<-as.data.frame(lapply(boystown[-11], normalize))
We have 200 observations in this dataset. The more data we use to train the
algorithm, the more precise the prediction would be. We can use 3/4 of the data
for training and the remaining 1/4 for testing.
# Ideally, we want to randomly split the raw data into training and testing sets
# For example: 80% training + 20% testing
# subset_int <- sample(nrow(boystown_n), floor(nrow(boystown_n)*0.8))
# bt_train <- boystown_n[subset_int, ]; bt_test <- boystown_n[-subset_int, ]
# Below, we use a simpler 3:1 split for simplicity
# (boystown_n already excludes the class label, grade)
bt_train <- boystown_n[1:150, ]
bt_test <- boystown_n[151:200, ]
The following step is to extract the labels, or classes (column 11, the grade categories), for the training and testing sets, and then to train the kNN algorithm. In this situation, we obtained an overall accuracy of

$$\text{accuracy} = \frac{cell[1,1] + cell[2,2]}{\text{total}} = 0.82.$$
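The code for this first run is not fully preserved in the excerpt; a sketch of the label extraction and the initial kNN fit (assuming the 3:1 split above and k ≈ √200 ≈ 14):

library(class)    # knn()
library(gmodels)  # CrossTable()
bt_train_labels <- boystown[1:150, 11]    # grade labels for the training cases
bt_test_labels  <- boystown[151:200, 11]  # grade labels for the test cases
bt_test_pred <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 14)
CrossTable(x = bt_test_labels, y = bt_test_pred, prop.chisq = FALSE)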
The summary() output shows that the re-scaling works properly. Then, we can proceed to the next steps (retraining the kNN model, predicting, and assessing the accuracy of the results):
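The z-score rescaled data frame bt_z and the summary check referenced above are not preserved here; a sketch under the assumption that the class label is kept as column 11 so it can be dropped below:

bt_z <- cbind(as.data.frame(scale(boystown[, -11])),   # z-scores of the ten numeric features
              grade = boystown$grade)                  # keep the class label as column 11
summary(bt_z[c("Alcoholuse", "larceny", "vandalism")]) # check that the rescaling worked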
bt_train <- bt_z[1:150, -11]
bt_test <- bt_z[151:200, -11]
bt_train_labels <- boystown[1:150, 11]
bt_test_labels <- boystown[151:200, 11]
bt_test_pred <- knn(train=bt_train, test=bt_test, cl=bt_train_labels, k=14)
CrossTable(x=bt_test_labels, y=bt_test_pred, prop.chisq = F)
##
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           30 |            0 |           30 |
##                |        1.000 |        0.000 |        0.600 |
##                |        0.769 |        0.000 |              |
##                |        0.600 |        0.000 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |            9 |           11 |           20 |
##                |        0.450 |        0.550 |        0.400 |
##                |        0.231 |        1.000 |              |
##                |        0.180 |        0.220 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           39 |           11 |           50 |
##                |        0.780 |        0.220 |              |
## ---------------|--------------|--------------|--------------|
Under the z-score method, the prediction results are similar to those of the previous run. Originally, we used the square root of 200 as our k. However, this might not be the best k for this case study. We can test different k's and compare their predictive performance.
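The code for these additional runs is not fully preserved; presumably each candidate k was evaluated the same way, e.g.:

bt_test_pred1  <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 1)
bt_test_pred5  <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 5)
bt_test_pred11 <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 11)
bt_test_pred21 <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 21)
bt_test_pred27 <- knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = 27)
ct_1 <- CrossTable(x = bt_test_labels, y = bt_test_pred1, prop.chisq = FALSE)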
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred1
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           27 |            3 |           30 |
##                |        0.900 |        0.100 |        0.600 |
##                |        0.818 |        0.176 |              |
##                |        0.540 |        0.060 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |            6 |           14 |           20 |
##                |        0.300 |        0.700 |        0.400 |
##                |        0.182 |        0.824 |              |
##                |        0.120 |        0.280 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           33 |           17 |           50 |
##                |        0.660 |        0.340 |              |
## ---------------|--------------|--------------|--------------|
##
ct_5 <- CrossTable(x=bt_test_labels, y=bt_test_pred5, prop.chisq = F)
##
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred5
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           30 |            0 |           30 |
##                |        1.000 |        0.000 |        0.600 |
##                |        0.857 |        0.000 |              |
##                |        0.600 |        0.000 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |            5 |           15 |           20 |
##                |        0.250 |        0.750 |        0.400 |
##                |        0.143 |        1.000 |              |
##                |        0.100 |        0.300 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           35 |           15 |           50 |
##                |        0.700 |        0.300 |              |
## ---------------|--------------|--------------|--------------|
##
ct_11 <- CrossTable(x=bt_test_labels, y=bt_test_pred11, prop.chisq = F)
##
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred11
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           30 |            0 |           30 |
##                |        1.000 |        0.000 |        0.600 |
##                |        0.769 |        0.000 |              |
##                |        0.600 |        0.000 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |            9 |           11 |           20 |
##                |        0.450 |        0.550 |        0.400 |
##                |        0.231 |        1.000 |              |
##                |        0.180 |        0.220 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           39 |           11 |           50 |
##                |        0.780 |        0.220 |              |
## ---------------|--------------|--------------|--------------|
ct_21 <- CrossTable(x=bt_test_labels, y=bt_test_pred21, prop.chisq = F)
##
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred21
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           30 |            0 |           30 |
##                |        1.000 |        0.000 |        0.600 |
##                |        0.714 |        0.000 |              |
##                |        0.600 |        0.000 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |           12 |            8 |           20 |
##                |        0.600 |        0.400 |        0.400 |
##                |        0.286 |        1.000 |              |
##                |        0.240 |        0.160 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           42 |            8 |           50 |
##                |        0.840 |        0.160 |              |
## ---------------|--------------|--------------|--------------|
ct_27 <- CrossTable(x=bt_test_labels, y=bt_test_pred27, prop.chisq = F)
##
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  50
##
##                | bt_test_pred27
## bt_test_labels |    above_avg | avg_or_below |    Row Total |
## ---------------|--------------|--------------|--------------|
##      above_avg |           30 |            0 |           30 |
##                |        1.000 |        0.000 |        0.600 |
##                |        0.682 |        0.000 |              |
##                |        0.600 |        0.000 |              |
## ---------------|--------------|--------------|--------------|
##   avg_or_below |           14 |            6 |           20 |
##                |        0.700 |        0.300 |        0.400 |
##                |        0.318 |        1.000 |              |
##                |        0.280 |        0.120 |              |
## ---------------|--------------|--------------|--------------|
##   Column Total |           44 |            6 |           50 |
##                |        0.880 |        0.120 |              |
## ---------------|--------------|--------------|--------------|
It's useful to visualize the error rate against the value of k. This can help us select a k parameter that minimizes the cross-validation (CV) error (Fig. 7.2).
Fig. 7.2 Classification error plots (y-axis) for training data (red), internal statistical cross-validation (green), and external out-of-box data (blue) against different k-parameters of the kNN method
library(class)
library(ggplot2)

  # (tail of a k-tuning helper function; its opening definition, fold construction,
  #  and training-error computation are not shown)
  # CV error
  cverrbt = sapply(folds, function(fold) {
    mean(bt_train_labels[fold$test] !=
           knn(train = bt_train[fold$training, ], cl = bt_train_labels[fold$training],
               test = bt_train[fold$test, ], k = K))
  })
  cv_error = mean(cverrbt)
  # Test error
  knn.test = knn(train = bt_train, test = bt_test, cl = bt_train_labels, k = K)
  test_error = mean(knn.test != bt_test_labels)
  return(c(train_error, cv_error, test_error))
}

require(ggplot2)
library(reshape2)
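Since only a fragment of the tuning function survives in this excerpt, here is a self-contained sketch of the same idea using base graphics; the function name, the fold construction, and the k grid are assumptions, not the chapter's exact code:

library(class)
set.seed(123)
n_tr <- nrow(bt_train)
fold_id <- sample(rep(1:10, length.out = n_tr))           # random 10-fold assignment
folds <- lapply(1:10, function(f) list(training = which(fold_id != f),
                                       test     = which(fold_id == f)))
knn_errors <- function(K) {
  # training error: predict the training set with itself
  train_error <- mean(knn(bt_train, bt_train, cl = bt_train_labels, k = K) != bt_train_labels)
  # cross-validation error: average misclassification over the 10 folds
  cv_error <- mean(sapply(folds, function(fold) {
    mean(bt_train_labels[fold$test] !=
           knn(train = bt_train[fold$training, ], test = bt_train[fold$test, ],
               cl = bt_train_labels[fold$training], k = K))
  }))
  # test error: predict the held-out test set
  test_error <- mean(knn(bt_train, bt_test, cl = bt_train_labels, k = K) != bt_test_labels)
  c(train_error, cv_error, test_error)
}
k_grid <- seq(1, 30, 2)
err <- sapply(k_grid, knn_errors)
matplot(k_grid, t(err), type = "l", lty = 1, col = c(2, 3, 4),
        xlab = "Number of nearest neighbors (k)", ylab = "Classification error")
legend("topright", c("Train", "CV", "Test"), col = c(2, 3, 4), lty = 1)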
The reader should first review the fundamentals of hypothesis-testing inference. Table 7.2 shows the basic components of binary classification, and Table 7.3 reports the results of the classification for several k values.

Table 7.2 Basic evaluation metrics of binary classification

                        Actual negative              Actual positive
  kNN fails to reject   TN                           FN
  kNN rejects           FP                           TP
                        Specificity: TN/(TN + FP)    Sensitivity: TP/(TP + FN)
Table 7.3 Summary results of the kNN classification for different values of the parameter k

  k     Misclassified test cases   Accuracy
  5              5                   0.90
  11             9                   0.82
  21            12                   0.76
  27            14                   0.72
Suppose we want to evaluate how well the kNN model with k = 5 predicts the below-average boys. Let's report manually some of the accuracy metrics for model5. Combining the results, we get the following sensitivity and specificity:
# bt_test_pred5 <- knn(train=bt_train, test=bt_test, cl=bt_train_labels, k=5)
# ct_5 <- CrossTable(x=bt_test_labels, y=bt_test_pred5, prop.chisq = F)
mod5_TN <- ct_5$prop.row[1, 1]
mod5_FP <- ct_5$prop.row[1, 2]
mod5_FN <- ct_5$prop.row[2, 1]
mod5_TP <- ct_5$prop.row[2, 2]
print(paste0("mod5_speci=", mod5_speci))
## [1] "mod5_speci=0.75"
Therefore, model5, using k = 5 nearest neighbors, appears to be a good choice. Nevertheless, we can always examine values near 5 to find potentially better choices of k.
Another strategy for model validation and improvement involves the use of the
confusionMatrix() method, which reports several complementary metrics
quantifying the performance of the prediction model.
Let’s focus on model5 power to predict Delinquency in terms of reoccurring
vandalism.
## [1] 0.8017837
# plot(as.numeric(bt_test_labels), as.numeric(bt_test_pred5))
# install.packages("caret")
library("caret")
# Model 1: bt_test_pred1
confusionMatrix(as.numeric(bt_test_labels),
as.numeric(bt_test_pred1))
## 1 27 3
## 2 6 14
##
## Accuracy : 0.82
## 95% CI : (0.6856, 0.9142)
## No Information Rate : 0.66
## P-Value [Acc > NIR] : 0.009886
##
## Kappa : 0.6154
## Mcnemar's Test P-Value : 0.504985
##
## Sensitivity : 0.8182
## Specificity : 0.8235
## Pos Pred Value : 0.9000
## Neg Pred Value : 0.7000
## Prevalence : 0.6600
## Detection Rate : 0.5400
## Detection Prevalence : 0.6000
## Balanced Accuracy : 0.8209
##
## 'Positive' Class : 1
##
# Model 5: bt_test_pred5
confusionMatrix(as.numeric(bt_test_labels),
as.numeric(bt_test_pred5))
## 2 9 11
##
## Accuracy : 0.82
## 95% CI : (0.6856, 0.9142)
## No Information Rate : 0.78
## P-Value [Acc > NIR] : 0.313048
##
## Kappa : 0.5946
## Mcnemar's Test P-Value : 0.007661
##
## Sensitivity : 0.7692
## Specificity : 1.0000
## Pos Pred Value : 1.0000
## Neg Pred Value : 0.5500
## Prevalence : 0.7800
## Detection Rate : 0.6000
## Detection Prevalence : 0.6000
## Balanced Accuracy : 0.8846
##
## 'Positive' Class : 1
##
Finally, we can use a 3D plot to display the results of model5 (mod5_TN,
mod5_FN, mod5_FP, mod5_TP), Fig. 7.3.
# install.packages("scatterplot3d")
library(scatterplot3d)
grid_xy <- matrix(c(0, 1, 1, 0), nrow=2, ncol=2)
intensity <- matrix(c(mod5_TN, mod5_FN, mod5_FP, mod5_TP), nrow=2,
ncol=2)
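The actual rendering call for Fig. 7.3 is not preserved; a hypothetical sketch (the mapping of rows/columns to the "real" and "predicted" axes is an assumption):

scatterplot3d(x = rep(1:2, times = 2), y = rep(1:2, each = 2), z = as.numeric(intensity),
              xlab = "real", ylab = "predicted", zlab = "Agreement", type = "h", pch = 16)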
Fig. 7.3 3D display of the model5 prediction agreement (axes: real, predicted, Agreement)
Use the kNN algorithm to provide a classification of the data in the TBI case
study, (CaseStudy11_TBI). Determine an appropriate k, train, and evaluate the
performance of the classification model on the data. Report some model quality
statistics for a couple of different values of k and use these to rank-order (and
perhaps plot the classification results of) the models.
• Preprocess the data: delete the index and ID columns; convert the response
variable ResearchGroup to binary 0-1 factor; detect NA (missing) values
(impute if necessary)
• Summarize the dataset: use str, summary, cor, ggpairs
• Scale/Normalize the data: As appropriate, scale to [0, 1]; transform log(x + 1);
discretize (0 or 1), etc.
• Partition the data into training and testing sets: use set.seed and a random sample, with train:test = 2:1
• Select the optimal k for each of the scaled data: Plot an error graph for k,
including three lines: training_error, cross-validation error, and testing error,
respectively
• What is the impact of k? Formulate a hypothesis about the relation between k and the error rates. You can try to use knn.tunning to verify the results (Hint: select the same folds, although you may obtain a slightly different result)
• Interpret the results: Hint: Considering the number of dimensions of the data, how many points are necessary to obtain the same density in a 100-dimensional space as in a 1-dimensional space?
• Report the error rates for both the training and the testing data. What do you
find?
Try all the above again but select only the variables:
UPDRS_Part_I_Summary_Score_Baseline,
UPDRS_Part_I_Summary_Score_Month_24,
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Baseline,
UPDRS_Part_II_Patient_Questionnaire_Summary_Score_Month_24,
UPDRS_Part_III_Summary_Score_Baseline, UPDRS_Part_
III_Summary_Score_Month_24, as predictors. Now, what about the specific k
you select and the error rates for each kind of data (original data, normalized data,
log-transformed data, and binary data). Comment on any interesting observations.
References
Kidwell , David A. (2013) Lazy Learning, Springer Science & Business Media, ISBN
9401720533, 9789401720533
Interactive kNN webapp: https://codepen.io/gangtao/pen/PPoqMW
Aggarwal, Charu C. (ed.) (2015) Data Classification: Algorithms and Applications, Chapman &
Hall/CRC, ISBN 1498760589, 9781498760584
Chapter 8
Probabilistic Learning: Classification Using Naive Bayes
The introduction to Chap. 7 presented the types of machine learning methods and
described lazy classification for numerical data. What about nominal features or
textual data? In this Chapter, we will begin to explore some classification
techniques for categorical data. Specifically, we will (1) present the Naive Bayes
algorithm; (2) review its assumptions; (3) discuss Laplace estimation; and (4)
illustrate the Naive Bayesian classifier on a Head and Neck Cancer Medication
case-study.
Later, in Chap. 20, we will also discuss text mining and natural language
processing of unstructured text data.
Naive Bayes is named for its “naive” assumptions. Its most important assumption
is that all of the features are equally important and independent. This rarely
happens in real world data. However, sometimes even when the assumptions are
violated, Naive Bayes still performs fairly accurately, particularly when the
number of features p is large. This is why the Naive Bayes algorithm may be used
as a powerful text classifier.
There are interesting relations between QDA (Quadratic Discriminant
Analysis), LDA (Linear Discriminant Analysis), and Naive Bayes classification.
Additional information about LDA and QDA is available online
(http://wiki.socr.umich.edu/index.php/SMHS_BigDataBigSci_CrossVal_LDA_QDA).
Let's first define the set-theoretic Bayes formula. We assume that the B_i's are mutually exclusive events, for all i = 1, 2, ..., n, where n represents the number of features.
If A and B are two events, the Bayes conditional probability formula is as follows:
P Bð jAÞP Að Þ
P Að jBÞ ¼
: P Bð Þ
0
When Bi s represent a partition of the event space, S¼ [Bi and Bi \Bj = ∅ 8i¼6j.
So we have:
P Bð jAÞ P Að Þ
P Að jBÞ ¼ :
P Bð jB1Þ P Bð 1Þ þ P Bð jB2Þ P Bð 2Þ... þ P Bð jBnÞ P Bð nÞ
Now, let’s represent the Bayes formula in terms of classification using
observed features. Having observed n features, Fi, for each of K possible class
outcomes, Ck. The Bayesian model may be reformulate to make it more tractable
using the Bayes’ theorem, by decomposing the conditional probability.
P Fð 1;...;FnjCkÞP Cð kÞ
P Cð k j F1;...;FnÞ ¼
: P Fð 1;...;FnÞ
In the above expression, only the numerator depends on the class label, Ck, as
the values of the features Fi are observed (or imputed) making the denominator
constant.
Let’s focus on the numerator, which essentially represents the joint probability model:

P(F_1, ..., F_n | C_k) P(C_k) = P(F_1, ..., F_n, C_k)   (joint model).

Repeatedly using the chain rule and the definition of conditional probability
simplifies this to:

P(F_1, ..., F_n, C_k) = P(F_1 | F_2, ..., F_n, C_k) P(F_2, ..., F_n, C_k)
= P(F_1 | F_2, ..., F_n, C_k) P(F_2 | F_3, ..., F_n, C_k) P(F_3, ..., F_n, C_k)
= ... = P(F_1 | F_2, ..., F_n, C_k) P(F_2 | F_3, ..., F_n, C_k) ... P(F_n | C_k) P(C_k).
Note that the “naive” qualifier in the Naive Bayes classifier name is
attributed to this oversimplification of the conditional probability. Assuming each
feature F_i is conditionally statistically independent of every other feature F_j, for all
j ≠ i, given the category C_k, we get:

P(F_i | F_{i+1}, ..., F_n, C_k) = P(F_i | C_k).

This reduces the joint probability model to:

P(F_1, ..., F_n, C_k) = P(C_k) ∏_{i=1}^{n} P(F_i | C_k).
Therefore, the posterior probability of class C_L is:

P(C_L | F_1, ..., F_n) = P(C_L) ∏_{i=1}^{n} P(F_i | C_L) / ∏_{i=1}^{n} P(F_i),

where the denominator, ∏_{i=1}^{n} P(F_i), is a scaling factor that represents the
marginal probability of observing all features jointly.
For a given case X = (F_1, F_2, ..., F_n), i.e., a given vector of features, the naive
Bayes classifier assigns the most likely class Ĉ by calculating
P(C_L) ∏_{i=1}^{n} P(F_i | C_L) / ∏_{i=1}^{n} P(F_i) for all class labels L, and then
assigning the class Ĉ corresponding to the maximum posterior probability.
Analytically, Ĉ is defined by:

Ĉ = arg max_L  P(C_L) ∏_{i=1}^{n} P(F_i | C_L) / ∏_{i=1}^{n} P(F_i).
If at least one P(F_i | C_L) = 0, then P(C_L | F_1, ..., F_n) = 0, which means the probability
of being in this class is zero. However, P(F_i | C_L) = 0 could simply result from
random chance in picking the training data.
One of the solutions to this scenario is Laplace estimation, also known as
Laplace smoothing, which can be accomplished in two ways. One is to add a small
number to each cell in the frequency table, which allows each class-feature
combination to appear at least once in the training data. Then P(F_i | C_L) > 0 for all i.
Another strategy is to add some small value, ε, to the numerator and denominator
when calculating the posterior probability. Note that these small perturbations of
the denominator should be larger than the changes in the numerator to avoid trivial
(0) posterior for another class.
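To make the effect of Laplace smoothing concrete, here is a minimal R sketch; the counts, the pseudo-count value, and the variable names are hypothetical and not taken from the case study below:
# Hypothetical illustration of Laplace smoothing for one binary feature within one class
n_class   <- 100     # training documents in the class (hypothetical count)
n_feature <- 0       # of those, documents containing the feature (zero by chance)
p_raw     <- n_feature / n_class                       # 0, which zeroes out the whole posterior
eps       <- 1                                         # Laplace pseudo-count
p_smooth  <- (n_feature + eps) / (n_class + 2*eps)     # 2 = number of possible feature values
c(raw = p_raw, smoothed = p_smooth)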
8.5 Case Study: Head and Neck Cancer Medication
We utilize the Inpatient Head and Neck Cancer Medication data for this case
study, which is the case study 14 in our data archive. Variables:
• PID: coded patient ID.
• ENC_ID: coded encounter ID.
• Seer_stage: SEER cancer stage (0 = In situ, 1 = Localized, 2 = Regional by
direct extension, 3 = Regional to lymph nodes, 4 = Regional (both codes 2 and
3), 5 = Regional, NOS, 7 = Distant metastases/systemic disease, 8 = Not
applicable, 9 = Unstaged, unknown, or unspecified). See:
http://seer.cancer.gov/tools/ssm.
• Medication_desc: description of the chemical composition of the medication.
• Medication_summary: brief description about medication brand and usage.
• Dose: the dosage in the medication summary.
• Unit: the unit for dosage in the Medication_summary.
• Frequency: the frequency of use in the Medication_summary.
• Total_dose_count: total dosage count according to the Medication_summary.
8.5.2 Step 2: Exploring and Preparing the Data
hn_med <- read.csv("https://umich.instructure.com/files/1614350/download?download_frd=1", stringsAsFactors = FALSE)
str(hn_med)
## 'data.frame': 662 obs. of 9 variables:
## $ PID : int 10000 10008 10029 10063 10071 10103 1012 10135 10136 10143 ...
## $ ENC_ID : int 46836 46886 47034 47240 47276 47511 3138 47739 47744 47769 ...
## $ seer_stage : int 1 1 4 1 9 1 1 1 9 1 ...
## $ MEDICATION_DESC : chr "ranitidine" "heparin injection" "ampicillin/sulbactam IVPB UH" "fentaNYL injection UH" ...
## $ MEDICATION_SUMMARY: chr "(Zantac) 150 mg tablet oral two times a day" "5,000 unit subcutaneous three times a day" "(Unasyn) 15 g IV every 6 hours" "25 - 50 microgram IV every 5 minutes PRN severe pain\nMaximum dose 200 mcg Per PACU protocol" ...
## $ DOSE : chr "150" "5000" "1.5" "50" ...
## $ UNIT : chr "mg" "unit" "g" "microgram" ...
## $ FREQUENCY : chr "two times a day" "three times a day" "every 6 hours" "every 5 minutes" ...
## $ TOTAL_DOSE_COUNT : int 5 3 11 2 1 2 2 6 15 1 ...
Change the seer_stage (cancer stage indicator) variable into a factor.
hn_med$seer_stage <- factor(hn_med$seer_stage)   # convert the stage indicator to a factor
table(hn_med$seer_stage)
##
##   0   1   2   3   4   5   7   8   9
##  21 265  53  90  46  18  87  14  68
library(tm)   # text-mining package providing Corpus() and VectorSource()
hn_med_corpus <- Corpus(VectorSource(hn_med$MEDICATION_SUMMARY))
print(hn_med_corpus)
After we construct the corpus object, we could see that we have 662
documents. Each document represents an encounter (e.g., notes on medical
treatment) for a patient.
inspect(hn_med_corpus[1:3])
## <<SimpleCorpus>>
## Metadata: corpus specific: 1, document level (indexed): 0
## Content: documents: 3
##
## [1] (Zantac) 150 mg tablet oral two times a day
## [2] 5,000 unit subcutaneous three times a day
## [3] (Unasyn) 15 g IV every 6 hours
hn_med_corpus[[1]]$content
## [1] "(Zantac) 150 mg tablet oral two times a day"
hn_med_corpus[[2]]$content
## [1] "5,000 unit subcutaneous three times a day"
hn_med_corpus[[3]]$content
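The corpus-cleaning commands themselves do not appear in this extraction; a minimal sketch using standard tm transformations (the exact calls the author used may differ slightly) would be:
corpus_clean <- tm_map(hn_med_corpus, content_transformer(tolower))  # lower case
corpus_clean <- tm_map(corpus_clean, removePunctuation)              # drop punctuation
corpus_clean <- tm_map(corpus_clean, removeNumbers)                  # drop numbers
corpus_clean <- tm_map(corpus_clean, stripWhitespace)                # collapse extra white space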
The above lines of code changed all the characters to lower case, removed all
punctuations and extra white spaces (typically created by deleting punctuations),
and removed numbers (we could also convert the corpus to plain text).
inspect(corpus_clean[1:3])
## <<SimpleCorpus>>
## Metadata: corpus specific: 1, document level (indexed): 0
## Content: documents: 3
##
## [1] zantac mg tablet oral two times a day
## [2] unit subcutaneous three times a day
## [3] unasyn g iv every hours
corpus_clean[[1]]$content
## [1] "zantac mg tablet oral two times a day"
corpus_clean[[2]]$content
## [1] " unit subcutaneous three times a day"
corpus_clean[[3]]$content
hn_med_dtm<-DocumentTermMatrix(corpus_clean)
Just like in Chap. 7, we need to separate the dataset into training and test subsets.
We have to subset the raw data with the other features, the corpus object, and the
document term matrix.
set.seed(12)
# 80% training + 20% testing
subset_int <- sample(nrow(hn_med), floor(nrow(hn_med)*0.8))
hn_med_train <- hn_med[subset_int, ]
hn_med_test <- hn_med[-subset_int, ]
hn_med_dtm_train <- hn_med_dtm[subset_int, ]
hn_med_dtm_test <- hn_med_dtm[-subset_int, ]
corpus_train <- corpus_clean[subset_int]
corpus_test <- corpus_clean[-subset_int]
# Alternative non-random split:
# hn_med_train<-hn_med[1:562, ]
# hn_med_test<-hn_med[563:662, ]
# hn_med_dtm_train<-hn_med_dtm[1:562, ]
# hn_med_dtm_test<-hn_med_dtm[563:662, ]
# corpus_train<-corpus_clean[1:562]
# corpus_test<-corpus_clean[563:662]
Let’s examine the distribution of seer stages in the training and test datasets.
prop.table(table(hn_med_train$seer_stage))
##
##          0          1          2          3          4          5
## 0.03024575 0.38374291 0.08317580 0.14555766 0.06616257 0.03402647
##          7          8          9
## 0.13421550 0.02268431 0.10018904
prop.table(table(hn_med_test$seer_stage))
##
##          0          1          2          3          4          5
## 0.03759398 0.46616541 0.06766917 0.09774436 0.08270677 0.00000000
##          7          8          9
## 0.12030075 0.01503759 0.11278195
A word cloud can help us visualize text data. More frequent words appear in
larger fonts, while less common words appear in smaller fonts. The wordcloud
package in R is commonly used for creating these figures (Figs. 8.1, 8.2, and 8.3).
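The binary stage outcome used below is not constructed in this extraction; a sketch consistent with the later description (stages 4, 5, and 7 labeled later_stage, everything else, including stages 8 and 9, labeled early_stage) would be:
library(wordcloud)   # provides wordcloud()
hn_med_train$stage <- hn_med_train$seer_stage %in% c(4, 5, 7)
hn_med_train$stage <- factor(hn_med_train$stage, levels=c(F, T), labels = c("early_stage", "later_stage"))
hn_med_test$stage <- hn_med_test$seer_stage %in% c(4, 5, 7)
hn_med_test$stage <- factor(hn_med_test$stage, levels=c(F, T), labels = c("early_stage", "later_stage"))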
early <- subset(hn_med_train, stage=="early_stage")
later <- subset(hn_med_train, stage=="later_stage")
wordcloud(early$MEDICATION_SUMMARY, max.words = 20)
wordcloud(later$MEDICATION_SUMMARY, max.words = 20)
We can see that the frequent words are somewhat different in the medication
summary between early stage and later stage patients.
For simplicity, we utilize the medication summary as the only feature to classify
cancer stages. You may recall that in Chap. 7 we used numeric features for
classification. In this study, we are going to turn the frequencies of words into features.
summary(findFreqTerms(hn_med_dtm_train, 5))
##    Length     Class      Mode
##       103 character character
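The steps that turn the document-term matrices into the hn_train and hn_test feature matrices used below are not shown in this extraction. A minimal sketch, assuming the frequent terms serve as a dictionary and word counts are converted into Yes/No factors (a common preparation for naiveBayes()), is:
hn_med_dict <- findFreqTerms(hn_med_dtm_train, 5)
hn_train <- DocumentTermMatrix(corpus_train, list(dictionary = hn_med_dict))
hn_test  <- DocumentTermMatrix(corpus_test,  list(dictionary = hn_med_dict))
# recode word counts as categorical presence/absence indicators
convert_counts <- function(x) {
  factor(ifelse(x > 0, 1, 0), levels = c(0, 1), labels = c("No", "Yes"))
}
hn_train <- apply(hn_train, MARGIN = 2, convert_counts)
hn_test  <- apply(hn_test,  MARGIN = 2, convert_counts)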
The package we will use for the Naive Bayes classifier is called e1071. Its naiveBayes()
function takes the following main arguments:
• train: data frame or matrix containing the training data (features)
• class: factor vector with the class for each row in the training data
• laplace: positive double controlling Laplace smoothing; the default is zero, which
disables Laplace smoothing
Let’s build our classifier first.
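A minimal call, mirroring the laplace = 15 invocation shown later (here with the default laplace = 0), might look like:
library(e1071)
hn_classifier <- naiveBayes(hn_train, hn_med_train$stage)   # default laplace = 0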
Then, we can use the classifier to make predictions using predict(). Recall that
when we presented the AdaBoost example in Chap. 3, we saw the basic
mechanism of machine-learning training, prediction and assessment.
The function predict() has the following components:
p<-predict(m, test, type="class")
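Applied to this case study (a sketch; class-label prediction is the default for naiveBayes objects), this becomes:
hn_test_pred <- predict(hn_classifier, hn_test)   # predicted early/later stage labels for the test set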
Similarly to the approach in Chap. 7, we use cross table to compare predicted class
and the true class of our test dataset.
library(gmodels)
CrossTable(hn_test_pred, hn_med_test$stage)
## Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  133
##
##              | hn_med_test$stage
## hn_test_pred | early_stage | later_stage |   Row Total |
## -------------|-------------|-------------|-------------|
##  early_stage |          90 |          24 |         114 |
##              |       0.008 |       0.032 |             |
##              |       0.789 |       0.211 |       0.857 |
##              |       0.849 |       0.889 |             |
##              |       0.677 |       0.180 |             |
## -------------|-------------|-------------|-------------|
##  later_stage |          16 |           3 |          19 |
##              |       0.049 |       0.190 |             |
##              |       0.842 |       0.158 |       0.143 |
##              |       0.151 |       0.111 |             |
##              |       0.120 |       0.023 |             |
## -------------|-------------|-------------|-------------|
## Column Total |         106 |          27 |         133 |
##              |       0.797 |       0.203 |             |
## -------------|-------------|-------------|-------------|
It may be worth skipping forward to Chap. 14, where we present a summary
table for the key measures used to evaluate the performance of binary tests,
classifiers, or predictions.
The prediction accuracy is:

ACC = (TP + TN) / (TP + FP + FN + TN) = 93/133 ≈ 0.70.

From the cross table we can see that our prediction accuracy is about 0.70.
However, the later stage classification only has three correctly classified cases. This
might be due to the P(F_i | C_L) = 0 problem that we discussed above.
After setting laplace = 15, the accuracy goes up to 76%. Although this is a small
improvement in terms of accuracy, we have a better chance of detecting later
stage patients.
hn_classifier <- naiveBayes(hn_train, hn_med_train$stage, laplace = 15)
hn_test_pred <- predict(hn_classifier, hn_test)
CrossTable(hn_test_pred, hn_med_test$stage)
## Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  133
##
##              | hn_med_test$stage
## hn_test_pred | early_stage | later_stage |   Row Total |
## -------------|-------------|-------------|-------------|
##  early_stage |          99 |          25 |         124 |
##              |       0.000 |       0.001 |             |
##              |       0.798 |       0.202 |       0.932 |
##              |       0.934 |       0.926 |             |
##              |       0.744 |       0.188 |             |
## -------------|-------------|-------------|-------------|
##  later_stage |           7 |           2 |           9 |
##              |       0.004 |       0.016 |             |
##              |       0.778 |       0.222 |       0.068 |
##              |       0.066 |       0.074 |             |
##              |       0.053 |       0.015 |             |
## -------------|-------------|-------------|-------------|
## Column Total |         106 |          27 |         133 |
##              |       0.797 |       0.203 |             |
## -------------|-------------|-------------|-------------|
8.5.6 Step 6: Compare Naive Bayesian against LDA
In the previous case study, we classified the patients with seer_stage of “not
applicable” (seer_stage = 8) and “unstaged, unknown or unspecified”
(seer_stage = 9) as no cancer or early cancer stages. Let’s remove
these two categories and replicate the Naive Bayes classifier case study again.
hn_med1<-hn_med[!hn_med$seer_stage %in% c(8, 9), ]
str(hn_med1); dim(hn_med1)
## 'data.frame': 580 obs. of 9 variables:
## $ PID : int 10000 10008 10029 10063 10103 1012 10135 10143 10152 10184 ...
## $ ENC_ID : int 46836 46886 47034 47240 47511 3138 47739 47769 47800 47938 ...
## $ seer_stage : Factor w/ 9 levels "0","1","2","3",..: 2 2 5 2 2 2 2 2 7 2 ...
## $ MEDICATION_DESC : chr "ranitidine" "heparin injection" "ampicillin/sulbactam IVPB UH" "fentaNYL injection UH" ...
## $ MEDICATION_SUMMARY: chr "(Zantac) 150 mg tablet oral two times a day" "5,000 unit subcutaneous three times a day" "(Unasyn) 15 g IV every 6 hours" "25 - 50 microgram IV every 5 minutes PRN severe pain\nMaximum dose 200 mcg Per PACU protocol" ...
## $ DOSE : chr "150" "5000" "1.5" "50" ...
## $ UNIT : chr "mg" "unit" "g" "microgram" ...
## $ FREQUENCY : chr "two times a day" "three times a day" "every 6 hours" "every 5 minutes" ...
## $ TOTAL_DOSE_COUNT : int 5 3 11 2 2 2 6 1 24 2 ...
## [1] 580 9
Now we have only 580 observations. We can either use the first 480 of them
as the training dataset and the last 100 as the test dataset, or select an 80–20
(training–testing) split, and evaluate the prediction accuracy when laplace = 1.
We can use the same code for creating the classes in the training and test datasets.
Since seer_stage = 8 or 9 is no longer in the data, we classify seer_stage = 0, 1, 2 or 3
as “early_stage” and seer_stage = 4, 5 or 7 as “later_stage”, as sketched below.
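The code that actually splits hn_med1 (and its corpus and document-term matrix) into hn_med_train1 and hn_med_test1 is not included in this extraction; a minimal sketch mirroring the earlier 80–20 split would be:
subset_int1 <- sample(nrow(hn_med1), floor(nrow(hn_med1)*0.8))   # 80% training, 20% testing
hn_med_train1 <- hn_med1[subset_int1, ]
hn_med_test1  <- hn_med1[-subset_int1, ]
# the corresponding corpus and document-term matrix subsets would be derived analogously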
hn_med_train1$stage <- hn_med_train1$seer_stage %in% c(4, 5, 7)
hn_med_train1$stage <- factor(hn_med_train1$stage, levels=c(F, T), labels = c("early_stage", "later_stage"))
hn_med_test1$stage <- hn_med_test1$seer_stage %in% c(4, 5, 7)
hn_med_test1$stage <- factor(hn_med_test1$stage, levels=c(F, T), labels = c("early_stage", "later_stage"))
prop.table(table(hn_med_train1$stage))
##
## early_stage later_stage
##   0.7392241   0.2607759
prop.table(table(hn_med_test1$stage))
##
## early_stage later_stage
##   0.7413793   0.2586207
Use terms that have appeared in five or more documents in the training
dataset to build the document term matrix, as sketched below.
##    Length     Class      Mode
##       112 character character
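The corresponding model-building code is also omitted here; a sketch that mirrors the first case study (dictionary of frequent terms, Yes/No conversion, and a Naive Bayes fit with laplace = 1, as suggested in the text) is shown next. The object names hn_med_dtm_train1, corpus_train1, and corpus_test1 are assumptions about how the second-round corpus objects would be named:
hn_med_dict1 <- findFreqTerms(hn_med_dtm_train1, 5)
# reuse convert_counts() from the earlier sketch
hn_train1 <- apply(DocumentTermMatrix(corpus_train1, list(dictionary = hn_med_dict1)), 2, convert_counts)
hn_test1  <- apply(DocumentTermMatrix(corpus_test1,  list(dictionary = hn_med_dict1)), 2, convert_counts)
hn_classifier1 <- naiveBayes(hn_train1, hn_med_train1$stage, laplace = 1)
hn_test_pred1  <- predict(hn_classifier1, hn_test1)
CrossTable(hn_test_pred1, hn_med_test1$stage)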
## Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  116
##
##               | hn_med_test1$stage
## hn_test_pred1 | early_stage | later_stage |   Row Total |
## --------------|-------------|-------------|-------------|
##   early_stage |          84 |          28 |         112 |
##               |       0.011 |       0.032 |             |
##               |       0.750 |       0.250 |       0.966 |
##               |       0.977 |       0.933 |             |
##               |       0.724 |       0.241 |             |
## --------------|-------------|-------------|-------------|
##   later_stage |           2 |           2 |           4 |
##               |       0.314 |       0.901 |             |
##               |       0.500 |       0.500 |       0.034 |
##               |       0.023 |       0.067 |             |
##               |       0.017 |       0.017 |             |
## --------------|-------------|-------------|-------------|
##  Column Total |          86 |          30 |         116 |
##               |       0.741 |       0.259 |             |
## --------------|-------------|-------------|-------------|
ACC = (TP + TN) / (TP + FP + FN + TN) = 86/116 ≈ 0.74.
Try to reproduce these results with some new data from the list of our Case-
Studies.
Key concepts to review:
• Bayes Theorem
• Laplace Estimation
Also, load the SOCR 2011 US Job Satisfaction data. The last column (Description)
contains free text about each job. Notice that spaces are replaced by underscores,
__. Mine the text field and suggest some meta-data analytics.
References
Kidwell, David A. (2013) Lazy Learning, Springer Science & Business Media, ISBN
9401720533, 9789401720533
Aggarwal, Charu C. (ed.) (2015) Data Classification: Algorithms and Applications, Chapman &
Hall/CRC, ISBN 1498760589, 9781498760584
Chapter 9
Decision Tree Divide and Conquer
Classification
9.1 Motivation
Decision tree learners enable classification via tree structures modeling the
relationships among all features and potential outcomes in the data. All decision
trees begin with a trunk (all data are part of the same cohort), which is then split
into narrower and narrower branches by forking decisions based on the intrinsic
data structure. At each step, splitting the data into branches may include binary or
multinomial classification. The final decision is obtained when the tree
branching process terminates. The terminal (leaf) nodes represent the action to be
taken as the result of the series of branching decisions. For predictive models, the
leaf nodes provide the expected forecasting results given the series of events in the
tree.
There are a number of R packages available for decision tree classification
including rpart, C5.0, party, etc.
308 9 Decision Tree Divide and Conquer Classification
© Ivo D. Dinov 2018 307
I. D. Dinov, Data Science and Predictive Analytics, https://doi.org/10.1007/978-3-319-72347-
1_9
9.2 Hands-on Example: Iris Data
Let’s start with a simple example using the Iris dataset, which we saw in
Chap. 3. The data features or attributes include Sepal.Length, Sepal.Width,
Petal.Length, and Petal.Width, and the classes are represented by the Species taxa
(setosa, versicolor, and virginica). The summaries below can be generated as sketched next.
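The commands producing the output below are not shown in this extraction; a minimal sketch would be:
str(iris)              # structure of the built-in iris data frame
head(iris)             # first six records
table(iris$Species)    # class balance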
## 'data.frame': 150 obs. of 5 variables:
##  $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
##  $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
##  $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
##  $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
##  $ Species     : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1          5.1         3.5          1.4         0.2  setosa
## 2          4.9         3.0          1.4         0.2  setosa
## 3          4.7         3.2          1.3         0.2  setosa
## 4          4.6         3.1          1.5         0.2  setosa
## 5          5.0         3.6          1.4         0.2  setosa
## 6          5.4         3.9          1.7         0.4  setosa
##
## setosa versicolor virginica
##     50         50        50
The ctree(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
data=iris) function (from the party package) will build a decision tree
(Figs. 9.1 and 9.2).
library(party)   # provides ctree()
iris_ctree <- ctree(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data=iris)
print(iris_ctree)
## Conditional inference tree with 4 terminal nodes
##
## Response:  Species
## Inputs:  Sepal.Length, Sepal.Width, Petal.Length, Petal.Width
## Number of observations:  150
##
## 1) Petal.Length <= 1.9; criterion = 1, statistic = 140.264
##   2)* weights = 50
## 1) Petal.Length > 1.9
##   3) Petal.Width <= 1.7; criterion = 1, statistic = 67.894
##     4) Petal.Length <= 4.8; criterion = 0.999, statistic = 13.865
##       5)* weights = 46
##     4) Petal.Length > 4.8
##       6)* weights = 8
##   3) Petal.Width > 1.7
##     7)* weights = 46
plot(iris_ctree, cex=2)
Fig. 9.1 Decision tree classification illustrating four leaf node labels corresponding to the three
iris species
Fig. 9.2 An alternative decision tree classification of the iris flowers dataset
head(iris); tail(iris)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1          5.1         3.5          1.4         0.2  setosa
## 2          4.9         3.0          1.4         0.2  setosa
## 3          4.7         3.2          1.3         0.2  setosa
## 4          4.6         3.1          1.5         0.2  setosa
## 5          5.0         3.6          1.4         0.2  setosa
## 6          5.4         3.9          1.7         0.4  setosa
The decision tree algorithm represents an upside down tree with lots of tree branch
bifurcations where a series of logical decisions are encoded as tree node splits.
The classification begins at the root node and goes through many branches until
it gets to the terminal nodes. This iterative process splits the data into different
classes by rigid criteria.
9.3 Decision Tree Overview
Decision trees involve recursive partitioning that uses data features and attributes
to split the data into groups (nodes) of similar classes.
To make classification trees using data features, we need to observe the
pattern between the data features and potential classes using training data. We can
draw scatter plots and separate groups that are clearly clumped together. Each group
is considered a segment of the data. After getting the approximate range of each
feature value under each group, we can make the decision tree.
Splits are chosen to reduce the impurity of the samples in the resulting partitions.
There are three main indices to evaluate the impurity reduction: misclassification
error, Gini index, and entropy.
For a given table containing pairs of attributes and their class labels, we can
assess the homogeneity of the classes in the table. A table is pure (homogeneous) if it
only contains a single class. If a data table contains several classes, then we say
that the table is impure or heterogeneous. This degree of impurity or heterogeneity
can be quantitatively evaluated using impurity measures like entropy, Gini index,
and misclassification error.
9.3.2 Entropy
Suppose the class probabilities p_1, p_2, ..., p_k result from a split in a decision tree
classifier. Then the entropy measure is defined by:

Entropy(D) = - ∑_i p_i log_2(p_i).

If each of the 1 ≤ i ≤ k states is equally likely to be observed, with probability
p_i = 1/k, then the entropy is maximized:

Entropy(D) = - ∑_{i=1}^{k} (1/k) log(1/k) = (1/k) ∑_{i=1}^{k} log(k) = log(k),

which equals 1 when the logarithm is taken base k.
In the other extreme, the entropy is minimized. Note that by L’Hopital’s Rule,

lim_{x→0} x log(x) = lim_{x→0} log(x)/(1/x) = lim_{x→0} (1/x)/(-1/x^2) = lim_{x→0} (-x) = 0,

so for a single-class classification, where the probability of one class is unitary
(p_{i_0} = 1) and the other ones are trivial (p_{i≠i_0} = 0):

Entropy(D) = - ∑_i p_i log(p_i) = - p_{i_0} log(p_{i_0}) - ∑_{i≠i_0} p_i log(p_i) = 0.
More generally, the entropy of a dataset D is:

H(D) = - ∑_{i=1}^{k} P(c_i | D) log(P(c_i | D)),

where P(c_i | D) is the probability of a data point in D being labeled with class c_i, and
k is the number of classes (clusters). P(c_i | D) can be estimated from the observed
data by:

P(c_i | D) = |{x_j ∈ D : x_j has label y_j = c_i}| / |D|.

Observe that if the observations are evenly split amongst all k classes, then
P(c_i | D) = 1/k and

H(D) = - ∑_{i=1}^{k} (1/k) log(1/k) = 1 (using a base-k logarithm).

At the other extreme, if all the observations are from one class, then:

H(D) = -1 × log(1) = 0.

Also note that the base of the log function is somewhat irrelevant and can be
used to normalize (scale) the range of the entropy, since log_b(x) = log_2(x) / log_2(b).
The Gain is the expected reduction in entropy caused by knowing the value of
an attribute.
Similar to the entropy measure, the misclassification error and the Gini index are
also used to evaluate information gain. The misclassification error is defined by the
formula:

ME = 1 - max_k(p_k),

and the Gini index is defined as:

GI = 1 - ∑_{i=1}^{c} p_i^2.

In these expressions, and in the entropy formula above, c is the number of total class
levels and p_i refers to the proportion of observations that fall into each class (i.e., the
probability of a randomly selected data point belonging to the i-th class level). For two
possible classes, the entropy ranges from 0 to 1. For n classes, the entropy ranges from
0 to log_2(n), where the minimum entropy corresponds to data that is purely
homogeneous (completely deterministic/predictable) and the maximum entropy
represents completely disordered data (stochastic or extremely noisy). You might
wonder what the benefit of using the entropy is. The answer is that the smaller the
entropy after a split, the more information is gained by that split.
The relationship between the two class proportions and the entropy is illustrated in
Fig. 9.3, where x is the proportion of elements in one of the classes.
Fig. 9.3 Plot of the entropy of a (symmetric) binary process as a function of the proportion of
class 1 cases
set.seed(1234)
x <- runif(100)
curve(-x*log2(x) - (1-x)*log2(1-x), col="red",
      main="Entropy for Different Proportions",
      xlab = "x (proportion for class 1)", ylab = "Entropy", lwd=3)
The closer the binary proportion split is to 0.5, the greater the entropy. The
more homogeneous the split (one class becomes the majority) the lower the
entropy. Decision trees aim to find splits in the data that reduce the entropy, i.e.,
increasing the homogeneity of the elements within all classes.
This measuring mechanism could be used to measure and compare the
information we get using different features as data partitioning characteristics.
Let’s consider this scenario. Suppose S and S1 represent the state of the system (the data
partition) before and after the splitting/partitioning of the data according to a specific data
feature attribute (F). Denote the entropies of the original and the derived partition
by Entropy(S) and Entropy(S1), respectively. The information we gain from
partitioning the data using this specific feature (F) is calculated as the change in
entropy:

Gain(F) = Entropy(S) - Entropy(S1).
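As a quick numerical illustration, here is a minimal sketch with made-up class proportions, assuming the post-split entropy Entropy(S1) is computed as the size-weighted average of the children’s entropies:
# entropy of a vector of class proportions
entropy <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }
parent <- c(0.6, 0.4)                          # class proportions before the split (hypothetical)
left <- c(0.9, 0.1); right <- c(0.2, 0.8)      # class proportions in the two child nodes
w <- c(0.55, 0.45)                             # fraction of observations falling in each child
gain <- entropy(parent) - (w[1]*entropy(left) + w[2]*entropy(right))
gain                                           # information gained by this candidate split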
While making a decision tree, we can classify observations using as many
splits as we want. This eventually might over-classify our data. An extreme
example would be to make each observation its own class, which is
meaningless.
So how do we control the size of the decision tree? One possible solution is to
make a cutoff for the number of decisions that a decision tree can make.
Similarly, we can require that the number of examples in each segment not be too
small. This method is called early stopping, or pre-pruning, the decision tree.
However, this might make the decision procedure stop prematurely, before some
important partition occurs.
Another solution, post-pruning, is to begin by growing a big decision tree
and subsequently reduce the branches based on error rates, with a penalty at the
nodes. This is often more effective than the pre-pruning solution.
The C5.0 algorithm uses the post-pruning method to control the size of the
decision tree. It first grows an overfitting large tree that contains all the
possibilities of partitioning. Then, it cuts out nodes and branches with little effect
on classification errors.
In this Chapter, we are using the Quality of Life and chronic disease dataset,
Case06_QoL_Symptom_ChronicIllness.csv. This dataset has 41 variables.
A detailed description of each variable is provided here
(https://umich.instructure.com/files/399150/download?download_frd=1).
Important variables:
• Charlson Comorbidity Index: ranging from 0 to 10. A score of 0 indicates no
comorbid conditions. Higher scores indicate a greater level of comorbidity.
• Chronic Disease Score: a summary score based on the presence and
complexity of prescription medications for select chronic conditions. A high
score indicates that the patient has severe chronic disease(s). Entries stored as 9
indicate a missing value.
qol$cd <- qol$CHRONICDISEASESCORE > 1.497
qol$cd <- factor(qol$cd, levels=c(F, T), labels = c("minor_disease", "severe_disease"))
To make the qol data more organized, we can order the data by the variable ID.
qol<-qol[order(qol$ID), ]
Then, we are able to subset training and testing datasets. Here is an example of
a non-random split of the entire data into training (2,114) and testing (100) sets,
followed by a random 80–20 split:
qol_train <- qol[1:2114, ]
qol_test <- qol[2115:2214, ]
set.seed(1234)
# random 80-20 training-testing split
train_index <- sample(seq_len(nrow(qol)), size = 0.8*nrow(qol))
qol_train <- qol[train_index, ]
qol_test <- qol[-train_index, ]
We can quickly inspect the distributions of the training and testing data to
ensure they are not vastly different. We can see that the classes are split fairly
equal in training and testing datasets.
prop.table(table(qol_train$cd))
##  minor_disease severe_disease
##      0.5279503      0.4720497
prop.table(table(qol_test$cd))
##  minor_disease severe_disease
##       0.503386       0.496614
9.4.3 Step 3: Training a Model On the Data
In this section, we are using the C5.0() function from the C50 package.
# install.packages("C50")
library(C50)
In the qol dataset (ID column is already removed), column 41 is the class vector
(qol$cd), and column 40 is the numerical version of vector 41 (qol
$CHRONICDISEASESCORE). We need to delete these two columns to create
our training data that only contains features.
summary(qol_train[,-c(40, 41)])
##  INTERVIEWDATE       LANGUAGE          AGE         RACE_ETHNICITY
##  Min.   :  0.00   Min.   :1.000   Min.   :20.00   Min.   :1.000
##  1st Qu.:  0.00   1st Qu.:1.000   1st Qu.:52.00   1st Qu.:3.000
##  Median :  0.00   Median :1.000   Median :59.00   Median :3.000
##  Mean   : 21.68   Mean   :1.217   Mean   :58.74   Mean   :3.614
##  3rd Qu.:  0.00   3rd Qu.:1.000   3rd Qu.:67.00   3rd Qu.:4.000
##  Max.   :440.00   Max.   :2.000   Max.   :90.00   Max.   :7.000
##       SEX            QOL_Q_01        QOL_Q_02        QOL_Q_03
##  Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000
##  1st Qu.:1.000   1st Qu.:3.000   1st Qu.:3.000   1st Qu.:3.000
##  Median :1.000   Median :4.000   Median :3.000   Median :4.000
##  Mean   :1.422   Mean   :3.661   Mean   :3.408   Mean   :3.714
##  3rd Qu.:2.000   3rd Qu.:4.000   3rd Qu.:4.000   3rd Qu.:4.000
##  Max.   :2.000   Max.   :6.000   Max.   :6.000   Max.   :6.000
## …
##     TOS_Q_03        TOS_Q_04      CHARLSONSCORE
##  Min.   :1.000   Min.   :1.000   Min.   :-9.0000
##  1st Qu.:4.000   1st Qu.:5.000   1st Qu.: 0.0000
##  Median :4.000   Median :5.000   Median : 1.0000
##  Mean   :3.787   Mean   :4.686   Mean   : …
set.seed(1234)
qol_model <- C5.0(qol_train[,-c(40, 41)], qol_train$cd)
qol_model
##
## Call:
## C5.0.default(x = qol_train[, -c(40, 41)], y = qol_train$cd)
##
## Classification Tree
## Number of samples: 1771
## Number of predictors: 39
##
## Tree size: 25
##
## Non-standard options: attempt to group attributes
summary(qol_model)
##
## Call:
## C5.0.default(x = qol_train[, -c(40, 41)], y = qol_train$cd)
##
##
## C5.0 [Release 2.07 GPL Edition]    Tue Jun 20 16:09:16 2017
## -------------------------------
##
## Class specified by attribute `outcome'
##
## Read 1771 cases (40 attributes) from undefined.data
##
## Decision tree:
##
## CHARLSONSCORE <= 0: minor_disease (665/180)
## CHARLSONSCORE > 0:
## :...AGE <= 47:
##     :...MSA_Q_08 > 2: severe_disease (15/4)
##     :   MSA_Q_08 <= 2:
##     :   :...MSA_Q_14 <= 1: minor_disease (86/20)
##     :       MSA_Q_14 > 1:
##     :       :...MSA_Q_10 > 4: minor_disease (6)
##     :           MSA_Q_10 <= 4:
##     :           :...TOS_Q_03 > 4: severe_disease (8)
##     :               TOS_Q_03 <= 4:
##     :               :...MSA_Q_17 > 2: minor_disease (8/1)
##     :                   MSA_Q_17 <= 2:
##     :                   :...QOL_Q_01 <= 2: minor_disease (4)
##     :                       QOL_Q_01 > 2: severe_disease (38/13)
##     AGE > 47:
##     :...RACE_ETHNICITY > 3:
##         :...QOL_Q_07 > 5: severe_disease (133/26)
##         :   QOL_Q_07 <= 5:
##         :   :...QOL_Q_10 > 5: severe_disease (24/2)
##         :       QOL_Q_10 <= 5:
##         :       :...MSA_Q_14 <= 5: severe_disease (202/72)
##         :           MSA_Q_14 > 5: minor_disease (11/2)
##         RACE_ETHNICITY <= 3:
## …
## Attribute usage:
##
##  62.45% AGE
##  53.13% RACE_ETHNICITY
##  38.40% QOL_Q_07
##  34.61% QOL_Q_01
##  21.91% MSA_Q_14
##  19.03% MSA_Q_04
##  13.38% QOL_Q_10
##  10.50% QOL_Q_05
##   9.32% MSA_Q_08
##   8.13% MSA_Q_17
##   7.28% MSA_Q_06
##   7.00% QOL_Q_09
##   4.23% PH2_Q_01
##   3.61% MSA_Q_10
##   3.27% TOS_Q_03
##   3.22% TOS_Q_04
The output of qol_model indicates that we have a tree with 25 terminal
nodes. summary(qol_model) suggests that the classification error for the decision
tree is about 28% on the training data.
9.4.4 Step 4: Evaluating Model Performance
Now we can make predictions using the decision tree that we just built. The
predict() function we will use is the same as the one we showed in earlier chapters,
e.g., Chaps. 3 and 8. In general, predict() is extended by each specific type of
regression, classification, clustering, or forecasting machine learning technique.
For example, randomForest::predict.randomForest() is invoked as sketched below.
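Here is a self-contained sketch using the built-in iris data (not part of the QoL case study) just to illustrate the invocation:
library(randomForest)
rf_model <- randomForest(Species ~ ., data = iris, ntree = 100)    # small illustrative forest
p <- predict(rf_model, newdata = iris[1:5, ], type = "response")   # predicted class labels
# predict(rf_model, newdata = iris[1:5, ], type = "prob")          # class probabilities instead
p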
The C5.0() function includes an option trials=, an integer specifying the
number of boosting iterations. The default value of one indicates that a single
model is used, and we can specify a larger number of iterations, for instance
trials=6.
set.seed(1234)
# try alternative values for the trials option
qol_boost6 <- C5.0(qol_train[ , -c(40, 41)], qol_train$cd, trials=6)
qol_boost6
##
## Call:
## C5.0.default(x = qol_train[, -c(40, 41)], y = qol_train$cd, trials = 6)
##
## Classification Tree
## Number of samples: 1771
## Number of predictors: 39
##
## Number of boosting iterations: 6
## Average tree size: 11.7
##
## Non-standard options: attempt to group attributes
We can see that the average size of the trees was reduced to about 12 (this may vary at
each run).
Since this is a fairly small tree, we can visualize it with the function plot(). We
also use the option type="simple" to make the tree look more condensed
(Fig. 9.4).
Fig. 9.4 Classification tree plot of the quality of life (QoL) data
plot(qol_boost6, type="simple")
Caution The plotting of decision trees will fail if you have columns that start with
numbers or special characters (e.g., "5variable", "!variable"). In general, avoid
spaces, special characters, and other non-terminal symbols in column/row names.
The next step would be making predictions and testing the corresponding
accuracy.
library(caret)   # provides confusionMatrix()
qol_boost_pred6 <- predict(qol_boost6, qol_test[ , -c(40, 41)])
confusionMatrix(table(qol_boost_pred6, qol_test$cd))
## Confusion Matrix and Statistics
##
## qol_boost_pred6  minor_disease severe_disease
##   minor_disease            140             75
##   severe_disease            83            145
##
##                Accuracy : 0.6433
##                  95% CI : (0.5968, 0.688)
##     No Information Rate : 0.5034
##     P-Value [Acc > NIR] : 1.987e-09
##
##                   Kappa : 0.2868
##  Mcnemar's Test P-Value : 0.5776
##
##             Sensitivity : 0.6278
##             Specificity : 0.6591
##          Pos Pred Value : 0.6512
##          Neg Pred Value : 0.6360
##              Prevalence : 0.5034
##          Detection Rate : 0.3160
##    Detection Prevalence : 0.4853
##       Balanced Accuracy : 0.6434
##
##        'Positive' Class : minor_disease
The accuracy is about 64%. However, this may vary each time we run the
experiment (mind the confidence interval). In some studies, the trials option
provides a significant improvement to the overall accuracy. A good choice for this
option is trials = 10.
Suppose we want to reduce the false negative rate, which in this case means
misclassifying a severe case as minor. A false negative (failure to detect a severe
disease case) may be more costly than a false positive (misclassifying a minor
disease case as severe). Misclassification costs can be expressed as a matrix, shown
below and applied in the sketch that follows:
error_cost <- matrix(c(0, 1, 4, 0), nrow = 2)
error_cost
##      [,1] [,2]
## [1,]    0    4
## [2,]    1    0
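The call that actually uses this cost matrix is not included in this extraction; a hedged sketch (C50::C5.0 accepts a costs argument, and typically expects the matrix dimnames to match the class levels) would be:
# label the cost matrix with the class levels (an assumption about the required format)
dimnames(error_cost) <- list(c("minor_disease", "severe_disease"),
                             c("minor_disease", "severe_disease"))
set.seed(1234)
qol_cost <- C5.0(qol_train[ , -c(40, 41)], qol_train$cd, costs = error_cost)
qol_cost_pred <- predict(qol_cost, qol_test[ , -c(40, 41)])
confusionMatrix(table(qol_cost_pred, qol_test$cd))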
library("rpart")
# remove CHRONICDISEASESCORE, but keep *cd* label
set.seed(1234)
qol_model<-rpart(cd~., data=qol_train[, -40], cp=0.01)
# here we use rpart::cp = *complexity parameter* = 0.01
qol_model
## n= 1771
##
## node), split, n, loss, yval, (yprob)
## * denotes terminal node
##
## 1) root 1771 836 minor_disease (0.5279503 0.4720497)
## 2) CHARLSONSCORE< 0.5 665 180 minor_disease (0.7293233
0.2706767) *
## 3) CHARLSONSCORE>=0.5 1106 450 severe_disease (0.4068716
0.5931284)
## 6) AGE< 47.5 165 65 minor_disease (0.6060606 0.3939394) *
## 7) AGE>=47.5 941 350 severe_disease (0.3719447 0.6280553) *
You can also plot the tree directly using fancyRpartPlot() from the rattle package,
which builds on rpart.plot (Fig. 9.5).
library("rattle")
fancyRpartPlot(qol_model, cex = 1)
Fig. 9.6 Another decision tree classification of the QoL data; compare to Fig. 9.5
qol_pred <- predict(qol_model, qol_test, type = 'class')
confusionMatrix(table(qol_pred, qol_test$cd))
## Confusion Matrix and Statistics
##
## qol_pred         minor_disease severe_disease
##   minor_disease            133             64
##   severe_disease            90            156
##
##                Accuracy : 0.6524
##                  95% CI : (0.606, 0.6967)
##     No Information Rate : 0.5034
##     P-Value [Acc > NIR] : 1.759e-10
##
##                   Kappa : 0.3053
##  Mcnemar's Test P-Value : 0.04395
##
##             Sensitivity : 0.5964
##             Specificity : 0.7091
##          Pos Pred Value : 0.6751
##          Neg Pred Value : 0.6341
##              Prevalence : 0.5034
##          Detection Rate : 0.3002
##    Detection Prevalence : 0.4447
##       Balanced Accuracy : 0.6528
##
##        'Positive' Class : minor_disease
These results are consistent with their counterparts reported using C5.0. How
can we tune the parameters to further improve the results? (Fig. 9.7).
set.seed(1234)
control = rpart.control(cp = 0.000, xval = 100, minsplit = 2)
qol_model = rpart(cd ~ ., data = qol_train[ , -40], control = control)
plotcp(qol_model)
Fig. 9.7 Tuning the decision tree classification by reducing the error across the spectrum of the
cost-complexity pruning parameter (cp) and tree size
printcp(qol_model)
## Classification tree:
## rpart(formula = cd ~ ., data = qol_train[, -40], control = control)
##
## Variables actually used in tree construction:
##  [1] AGE            CHARLSONSCORE  INTERVIEWDATE  LANGUAGE
##  [5] MSA_Q_01       MSA_Q_02       MSA_Q_03       MSA_Q_04
##  [9] MSA_Q_05       MSA_Q_06       MSA_Q_07       MSA_Q_08
## [13] MSA_Q_09       MSA_Q_10       MSA_Q_11       MSA_Q_12
## [17] MSA_Q_13       MSA_Q_14       MSA_Q_15       MSA_Q_16
## [21] MSA_Q_17       PH2_Q_01       PH2_Q_02       QOL_Q_01
## [25] QOL_Q_02       QOL_Q_03       QOL_Q_04       QOL_Q_05
## [29] QOL_Q_06       QOL_Q_07       QOL_Q_08       QOL_Q_09
## [33] QOL_Q_10       RACE_ETHNICITY SEX            TOS_Q_01
## [37] TOS_Q_02       TOS_Q_03       TOS_Q_04
##
## Root node error: 836/1771 = 0.47205
##
## n= 1771
##
##           CP nsplit rel error  xerror     xstd
## 1 0.24641148      0 1.0000000 1.00000 0.025130
## 2 0.04186603      1 0.7535885 0.75359 0.024099
## 3 0.00717703      2 0.7117225 0.71651 0.023816
## 4 0.00657895      3 0.7045455 0.72967 0.023920
## 5 0.00598086      9 0.6543062 0.74282 0.024020
## 6 0.00478469     14 0.6244019 0.74282 0.024020
## 7 0.00418660     17 0.6100478 0.75239 0.024090
## 8 0.00398724     21 0.5933014 0.75359 0.024099
## 9 0.00358852     32 0.5466507 0.75957 0.024141
Now, we can prune the tree according to the optimal cp, the complexity parameter
to which the rpart object will be trimmed. Instead of using the raw error (e.g.,
1 - R^2, RMSE) to capture the discrepancy between the observed labels and the
model-predicted labels, we will use the xerror, which averages the discrepancy
between observed and predicted classifications using cross-validation; see Chap. 21.
Figures 9.8, 9.9, and 9.10 show some alternative decision tree pruning results.
set.seed(1234)
selected_tr <- prune(qol_model, cp = qol_model$cptable[which.min(qol_model$cptable[,"xerror"]), "CP"])
fancyRpartPlot(selected_tr, cex = 1)
Fig. 9.8 Pruned decision tree classification for the QoL data; compare to Figs. 9.5 and 9.6
qol_pred_tune <- predict(selected_tr, qol_test, type = 'class')
confusionMatrix(table(qol_pred_tune, qol_test$cd))
## Confusion Matrix and Statistics
##
## qol_pred_tune    minor_disease severe_disease
##   minor_disease            133             64
##   severe_disease            90            156
##
##                Accuracy : 0.6524
##                  95% CI : (0.606, 0.6967)
##     No Information Rate : 0.5034
##     P-Value [Acc > NIR] : 1.759e-10
##
##                   Kappa : 0.3053
##  Mcnemar's Test P-Value : 0.04395
##
##             Sensitivity : 0.5964
##             Specificity : 0.7091
##          Pos Pred Value : 0.6751
##          Neg Pred Value : 0.6341
##              Prevalence : 0.5034
##          Detection Rate : 0.3002
##    Detection Prevalence : 0.4447
##       Balanced Accuracy : 0.6528
##
##        'Positive' Class : minor_disease
The result is roughly the same as that of C5.0. Despite the fact that there is no
substantial classification improvement, the tree-pruning process generates a
graphical representation of the decision-making protocol (selected_tr) that is much
simpler and more intuitive than the original (un-pruned) tree (qol_model):
fancyRpartPlot(qol_model, cex = 0.1)
Fig. 9.9 Testing data (QoL dataset) decision tree prediction results (for chronic disease, CD)
set.seed(1234)
qol_model = rpart(cd ~ ., data=qol_train[ , -40], parms = list(split = "entropy"))
fancyRpartPlot(qol_model, cex = 1)
# Modify and test using "error" and "gini"
# qol_pred<-predict(qol_model, qol_test,type = 'class')
# confusionMatrix(table(qol_pred, qol_test$cd))
9.6 Classification Rules
Separate and conquer repeatedly splits the data (and subsets of the data) by rules
that cover a subset of examples. This procedure is very similar to the divide and
conquer approach. However, a notable difference is that each rule can be
independent, whereas each decision node in a tree has to be linked to past
decisions.
To understand the One Rule (OneR) algorithm, we need to know about its
"sibling" - the ZeroR rule. The ZeroR rule assigns the mode (most frequent) class to all
unlabeled test observations, regardless of their feature values. The OneR algorithm
is an improved version of ZeroR that uses a single rule for classification. In other
words, OneR splits the training dataset into several segments based on feature
values. Then, it assigns the mode of the classes within each segment to the related
observations in the unlabeled test data. In practice, we first test multiple rules
and pick the rule with the smallest error rate to be our One Rule. Remember, these
rules may be subjective.
Let’s take another look at the same dataset as Case Study 1 - this time applying
classification rules. Naturally, we will skip over the first two data handling
steps and go directly to step three.
9.7.1 Step 3: Training a Model on the Data
Let’s start by using the OneR() function in the RWeka package. Before installing
the package you might want to check that the Java program in your computer is up
to date. Also, its version has to match the version of R (i.e., 64bit R needs 64bit
Java).
The function OneR() has the following invocation protocol:
m<-OneR(class~predictors, data=mydata)
• class: factor vector with the class for each row in mydata.
• predictors: feature variables in mydata. If we want to include x1, x2 as predictors
and y as the class label variable, we write y ~ x1 + x2. To specify a full model, we
use the notation y ~ ., which includes all of the column variables as predictors.
• mydata: the dataset where the features and labels can be found.
# install.packages("RWeka")
library(RWeka)
# just remove the CHRONICDISEASESCORE but keep cd
set.seed(1234)
qol_1R<-OneR(cd~., data=qol[ , -40])
qol_1R
## CHARLSONSCORE:
## < -4.5 -> severe_disease
## < 0.5 -> minor_disease
## < 5.5 -> severe_disease
## < 8.5 -> minor_disease
## >= 8.5 -> severe_disease
## (1453/2214 instances correct)
Note that 1,453 out of 2,214 cases (about 66%) are correctly classified by the “one
rule”.
Another possible option for the classification rules would be the RIPPER rule
algorithm that we discussed earlier in the chapter. In R we use the Java based
function JRip() to invoke this algorithm.
JRip() function has the same components as the OneR() function:
m<-JRip(class~predictors, data=mydata)
set.seed(1234)
qol_jrip1<-JRip(cd~., data=qol[ , -40])
qol_jrip1
## JRIP rules:
## ===========
## (CHARLSONSCORE >= 1) and (RACE_ETHNICITY >= 4) and (AGE >= 49) => cd=severe_disease (448.0/132.0)
## (CHARLSONSCORE >= 1) and (AGE >= 53) => cd=severe_disease (645.0/265.0)
## => cd=minor_disease (1121.0/360.0)
##
## Number of Rules : 3
summary(qol_jrip1)
Another idea is to repeat the generation of trees multiple times, predict according
to each tree's performance, and finally ensemble those weighted votes into a
combined classification result. This is precisely the idea behind random forest
classification; see Chap. 15 (Figs. 9.11 and 9.12).
require(randomForest)
set.seed(12)
# rf.fit <- tuneRF(qol_train[ , -40], qol_train[ , 40], stepFactor=1.5)
rf.fit <- randomForest(cd ~ ., data=qol_train[ , -40], importance=TRUE, ntree=2000, mtry=26)
varImpPlot(rf.fit); print(rf.fit)
Fig. 9.11 Variable importance plots of random forest classification of the QoL CD variable
using accuracy (left) and Gini index (right) as evaluation metrics
Fig. 9.12 Error plots of the random forest prediction of CD (QoL chronic disease) using three
different trees models
## Call:
##  randomForest(formula = cd ~ ., data = qol_train[, -40], importance = TRUE, ntree = 2000, mtry = 26)
##                Type of random forest: classification
##                      Number of trees: 2000
## No. of variables tried at each split: 26
##
##         OOB estimate of error rate: 35.86%
## Confusion matrix:
##                minor_disease severe_disease class.error
## minor_disease            576            359   0.3839572
## severe_disease           276            560   0.3301435
In random forest (RF) classification, the node size (nodesize) refers to the
smallest node that can be split; i.e., nodes with fewer cases than the nodesize are
never subdivided. Increasing the node size leads to smaller trees, which may
compromise the predictive power. On the flip side, increasing the tree size
(maxnodes) and the number of trees (ntree) tends to increase the predictive
accuracy. However, there are tradeoffs between increasing node size and tree size
simultaneously. To optimize the RF predictive accuracy, try smaller node sizes
and more trees. Ensembling (forest) results from a larger number of trees will
likely generate better results.
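The three-level outcome cdthree used below is not constructed anywhere in this extraction. A hypothetical sketch follows; the quantile-based cut points are illustrative only, and the author's actual thresholds may differ:
# illustrative three-level recoding of the chronic disease score (hypothetical cut points)
qol$cdthree <- cut(qol$CHRONICDISEASESCORE,
                   breaks = quantile(qol$CHRONICDISEASESCORE, probs = c(0, 0.2, 0.8, 1)),
                   labels = c("minor_disease", "mild_disease", "severe_disease"),
                   include.lowest = TRUE)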
# qol_train1<-qol[1:2114, ]
# qol_test1<-qol[2115:2214, ]
train_index <- sample(seq_len(nrow(qol)), size = 0.8*nrow(qol))
qol_train1 <- qol[train_index, ]
qol_test1 <- qol[-train_index, ]
prop.table(table(qol_train1$cdthree))
##
##  minor_disease   mild_disease severe_disease
##      0.1699605      0.6459627      0.1840768
prop.table(table(qol_test1$cdthree))
##
##  minor_disease   mild_disease severe_disease
##      0.1760722      0.6478555      0.1760722
set.seed(1234)
qol_model1 <- C5.0(qol_train1[ , -c(40, 41, 42)], qol_train1$cdthree, trials=10)
qol_model1
##
## Call:
## C5.0.default(x = qol_train1[, -c(40, 41, 42)], y = qol_train1$cdthree, trials = 10)
##
## Classification Tree
## Number of samples: 1771
## Number of predictors: 39
##
## Number of boosting iterations: 10
## Average tree size: 230.5
##
## Non-standard options: attempt to group attributes
qol_pred1 <- predict(qol_model1, qol_test1)
confusionMatrix(table(qol_test1$cdthree, qol_pred1))
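The OneR model qol_1R1 referenced next is not defined in this extraction; a sketch consistent with the surrounding text (a OneR fit on the three-level outcome, which the text notes ends up relying on INTERVIEWDATE) would be:
qol_1R1 <- OneR(cdthree ~ ., data = qol_train1[ , -c(40, 41)])
qol_1R1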
qol_pred1 <- predict(qol_1R1, qol_test1)
confusionMatrix(table(qol_test1$cdthree, qol_pred1))
##                  qol_pred1
##                   minor_disease mild_disease severe_disease
##   minor_disease               0           78              0
##   mild_disease                0          285              2
##   severe_disease              0           76              2
##
## Overall Statistics
##
##                Accuracy : 0.6479
##                  95% CI : (0.6014, 0.6923)
##     No Information Rate : 0.991
##     P-Value [Acc > NIR] : 1
##
##                   Kappa : 0.012
##  Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
##                      Class: minor_disease Class: mild_disease
## Sensitivity                            NA             0.64920
## Specificity                        0.8239             0.50000
## Pos Pred Value                         NA             0.99303
## Neg Pred Value                         NA             0.01282
## Prevalence                         0.0000             0.99097
## Detection Rate                     0.0000             0.64334
## Detection Prevalence               0.1761             0.64786
## Balanced Accuracy                      NA             0.57460
##                      Class: severe_disease
## Sensitivity                       0.500000
## Specificity                       0.826879
## Pos Pred Value                    0.025641
## Neg Pred Value                    0.994521
## Prevalence                        0.009029
## Detection Rate                    0.004515
## Detection Prevalence              0.176072
## Balanced Accuracy                 0.663440
The OneRule classifier, which is purely based on the value of
INTERVIEWDATE, has 65% internal classification accuracy and about 65%
external (validation data) prediction accuracy. However, the latter assessment is a
bit misleading, as the vast majority of external validation data are classified into
only one class - mild_disease.
Finally, let’s revisit the JRip() classifier with the same three class labels
according to cdthree.
set.seed(1234)
qol_jrip1<-JRip(cdthree~., data=qol[ , -c(40, 41)])
qol_jrip1
## JRIP rules:
## ===========
## (CHARLSONSCORE <= 0) and (AGE <= 50) and (MSA_Q_06 <= 1) and (QOL_Q_07 >= 1) and (MSA_Q_09 <= 1) => cdthree=minor_disease (35.0/11.0)
## (CHARLSONSCORE >= 1) and (QOL_Q_10 >= 4) and (QOL_Q_07 >= 9) => cdthree=severe_disease (54.0/20.0)
## (CHARLSONSCORE >= 1) and (QOL_Q_02 >= 5) and (MSA_Q_09 <= 4) and (MSA_Q_04 >= 3) => cdthree=severe_disease (64.0/30.0)
## (CHARLSONSCORE >= 1) and (QOL_Q_02 >= 4) and (PH2_Q_01 >= 3) and (QOL_Q_10 >= 4) and (RACE_ETHNICITY >= 4) => cdthree=severe_disease (43.0/19.0)
## => cdthree=mild_disease (2018.0/653.0)
##
## Number of Rules : 5
summary(qol_jrip1)
##                      Class: minor_disease Class: mild_disease
## Sensitivity                       0.71429              0.6773
## Specificity                       0.83257              0.6757
## Pos Pred Value                    0.06410              0.9582
## Neg Pred Value                    0.99452              0.1603
## Prevalence                        0.01580              0.9165
## Detection Rate                    0.01129              0.6208
## Detection Prevalence              0.17607              0.6479
## Balanced Accuracy                 0.77343              0.6765
##                      Class: severe_disease
## Sensitivity                       0.56667
## Specificity                       0.85230
## Pos Pred Value                    0.21795
## Neg Pred Value                    0.96438
## Prevalence                        0.06772
## Detection Rate                    0.03837
## Detection Prevalence              0.17607
## Balanced Accuracy                 0.70948
In terms of the predictive accuracy on the testing data (qol_test1$cdthree), we
can see from these outputs that the RIPPER algorithm performed better (67%)
than the C5.0 decision tree (60%) and similarly to the OneR algorithm (65%),
which suggests that simple algorithms might outperform complex methods for
certain real world case-studies. Later, in Chap. 15, we will provide more details
about optimizing and improving classification and prediction performance.
Try to replicate these results with other data from the list of our Case-Studies.
Use the SOCR Neonatal Pain data to build and display a decision tree recursively
partitioning the data using the provided features and attributes to split the data into
similar classes.
• Collect and preprocess the data, e.g., data conversion and variable selection.
• Randomly split the data into training and testing sets.
• Train decision tree models on the data using C5.0 and rpart.
References
Fischetti, T, Lantz, B, Abedin, J, Mittal, HV, Makhabel, B, Berlinger, E, Illes, F, Badics, M,
Banai, A, Daroczi, G (2016) R: Data Analysis and Visualization, Packt Publishing Ltd, ISBN
1786460483, 9781786460486.
Liu, H, Gegov, A, Cocea, M. (2015) Rule Based Systems for Big Data: A Machine Learning
Approach, Springer, Volume 13 (Studies in Big Data), ISBN 3319236962, 9783319236964.
Witten, IH, Frank, E, Hall, MA, Pal, CJ. (2016) Data Mining: Practical Machine Learning Tools
and Techniques, Morgan Kaufmann, Series in Data Management Systems, ISBN
0128043571, 9780128043578.
Chapter 10
Forecasting Numeric Data Using Regression
Models
First recall the material presented in Chap. 5 (Linear Algebra & Matrix
Computing). The simplest case of regression modeling involves a single
predictor:

y = a + b·x.
Fig. 10.1 Scatterplot and a linear model of length of stay (LOS) vs. hospital charges for the heart
attack data
heart_attack <- read.csv("https://umich.instructure.com/files/1644953/download?download_frd=1", stringsAsFactors = F)
heart_attack$CHARGES <- as.numeric(heart_attack$CHARGES)
## Warning: NAs introduced by coercion
heart_attack <- heart_attack[complete.cases(heart_attack), ]
The fitted model is:

ŷ = 4582.70 + 212.29·x,

or equivalently,

CHARGES ≈ 4582.70 + 212.29 × LOS.

How did we get the estimated expression? The most common estimating method
in statistics is ordinary least squares (OLS). OLS estimators are obtained by
minimizing the sum of the squared errors - that is, the sum of the squared
vertical distances between each point on the scatter plot and its predicted value on
the regression line (Fig. 10.2).
Fig. 10.2 Graphical representation of the residuals representing the difference between observed
and predicted values
OLS minimizes the following objective function:

∑_{i=1}^{n} (y_i - ŷ_i)^2 = ∑_{i=1}^{n} (y_i - (a + b·x_i))^2 = ∑_{i=1}^{n} e_i^2.

Some simple mathematical operations to minimize the sum of squared errors yield
the following solution for the slope parameter b:

b = ∑_i (x_i - x̄)(y_i - ȳ) / ∑_i (x_i - x̄)^2,

and for the intercept:

a = ȳ - b·x̄.
or, equivalently,

b = Cov(x, y) / Var(x).
Let’s examine these closed-form analytical expressions using the heart attack
data.
b <- cov(heart_attack$LOS, heart_attack$CHARGES)/var(heart_attack$LOS); b
## [1] 212.2869
a <- mean(heart_attack$CHARGES) - b*mean(heart_attack$LOS); a
## [1] 4582.7
We can see that these estimates are exactly the same as the previously reported
results.
The key assumptions of linear regression are:
• Linear relationship,
• Multivariate normality,
• No or little multicollinearity,
• No auto-correlation (independence of errors),
• Homoscedasticity.
10.2.2 Correlations
The SOCR Interactive Scatterplot Game (requires Java enabled browser) provides
a dynamic interface demonstrating linear models, trends, correlations, slopes, and
residuals.
Using the covariance, we can calculate the correlation, which indicates how
closely the relationship between two variables follows a straight line.
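For instance, here is a minimal sketch for the heart attack data, comparing the manual covariance-based calculation against the built-in cor() function:
r_manual <- cov(heart_attack$LOS, heart_attack$CHARGES) /
            (sd(heart_attack$LOS) * sd(heart_attack$CHARGES))
r_auto   <- cor(heart_attack$LOS, heart_attack$CHARGES)
c(manual = r_manual, automated = r_auto)   # the two estimates should agree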
The same outputs are obtained by the manual and the automated correlation
calculations. This correlation is a positive number that is relatively small. We can
say there is a weak positive linear association between these two variables. If we
had a negative correlation estimate, it would suggest a negative linear association. We
have a weak association when 0.1 ≤ Cor < 0.3, a moderate association for
0.3 ≤ Cor < 0.5, and a strong association for 0.5 ≤ Cor ≤ 1.0. If the correlation is
below 0.1, then it suggests little to no linear relation between the variables.
10.2.3 Multiple Linear Regression
With k predictors, the multiple linear regression model is:

y = β_0 + β_1 x_1 + β_2 x_2 + ... + β_k x_k + ε,

or equivalently,

y = β_0 + ∑_{i=1}^{k} β_i x_i + ε.

We usually use the second notation in statistics. This equation shows
the linear relationship between the k predictors and a dependent variable. In total we
have k + 1 coefficients to estimate.
The matrix notation corresponding to the above equation is:

Y = Xβ + ε,

where

Y = (y_1, y_2, ..., y_n)^T is the vector of observed outcomes,
X is the n × (k+1) design matrix whose i-th row is (1, x_{1i}, x_{2i}, ..., x_{ki}),
β = (β_0, β_1, ..., β_k)^T is the vector of coefficients, and
ε = (ε_1, ε_2, ..., ε_n)^T is the vector of error terms.

The OLS estimate of the coefficient vector is:

β̂ = (X^T X)^{-1} X^T Y.

This is the matrix-form solution, where (X^T X)^{-1} is the inverse of the matrix X^T X
and X^T is the transpose of X.
Let’s write a simple R function, reg(x, y), that implements this matrix formula.
reg <- function(y, x){
  x <- as.matrix(x)
  x <- cbind(Intercept=1, x)          # add a column of 1s for the intercept
  solve(t(x) %*% x) %*% t(x) %*% y    # (X^T X)^{-1} X^T Y
}
The function solve() is used to compute the matrix inverse and %*% is matrix
multiplication.
Next, we will apply this function to our heart attack dataset. To begin, let’s
check if the simple linear regression output is the same as we calculated earlier.
reg(y=heart_attack$CHARGES, x=heart_attack$LOS)
##                [,1]
## Intercept 4582.6997
##            212.2869
As the slope and intercept are consistent with our previous estimates, we can
continue and include additional variables as predictors. For instance, we can
add age into the model.
str(heart_attack)
## 'data.frame': 148 obs. of 8 variables:
## $ Patient  : int 1 2 3 4 5 6 7 8 9 10 ...
## $ DIAGNOSIS: int 41041 41041 41091 41081 41091 41091 41091 …
We utilize the MLB data "01a_data.txt". The dataset contains 1,034 records of
heights and weights for some current and recent Major League Baseball (MLB)
players. These data were obtained from different resources (e.g., IBM Many
Eyes). This dataset includes the following variables:
• Name: MLB player name,
• Team: the baseball team the player was a member of at the time the data was
acquired,
• Position: player field position,
• Height: player height in inches,
• Weight: player weight in pounds, and
• Age: player age at time of record.
Let’s load this dataset first. We use as.is=T to keep non-numerical vectors as
characters. Also, we delete the Name variable because we don’t need players’
names in this case study.
mlb <- read.table('https://umich.instructure.com/files/330381/download?download_frd=1', as.is=T, header=T)
str(mlb)
## 'data.frame': 1034 obs. of 6 variables:
## $ Name    : chr "Adam_Donachie" "Paul_Bako" "Ramon_Hernandez" "Kevin_Millar" ...
## $ Team    : chr "BAL" "BAL" "BAL" "BAL" ...
## $ Position: chr "Catcher" "Catcher" "Catcher" "First_Baseman" ...
## $ Height  : int 74 74 72 72 73 69 69 71 76 71 ...
## $ Weight  : int 180 215 210 210 188 176 209 200 231 180 ...
## $ Age     : num 23 34.7 30.8 35.4 35.7 ...
mlb <- mlb[, -1]
By looking at the str() output, we notice that the variables Team and Position are
treated as characters. To fix this, we can use the function as.factor() to convert
numerical or character vectors to factors.
mlb$Team <- as.factor(mlb$Team)
mlb$Position <- as.factor(mlb$Position)
The data is good to go. Let’s explore it using some summary statistics and
plots (Fig. 10.3).
summary(mlb$Weight)
##  Min. 1st Qu. Median   Mean 3rd Qu.   Max.
## 150.0   187.0  200.0  201.7   215.0  290.0
hist(mlb$Weight, main = "Histogram for Weights")
mlb_binary$bi_weight <- as.factor(ifelse(mlb_binary$Weight > median(mlb_binary$Weight), 1, 0))
g_weight <- ggpairs(data = mlb_binary[-1], title = "MLB Light/Heavy Weights",
                    mapping = ggplot2::aes(colour = bi_weight),
                    lower = list(combo = wrap("facethist", binwidth = 1)))
g_weight
g_weight
Next, we may also mark player positions by different colors in the plot
(Fig. 10.5).
g_position <- ggpairs(data=mlb[-1], title="MLB by Position",
mapping=ggplot2::aes(colour = Position),
lower=list(combo=wrap("facethist",binwidth=1)))
g_position
What about potential predictors?
Fig. 10.4 Pair plots of the MLB data by player’s light (red) or heavy (blue) weights
table(mlb$Team)
## ANA ARZ ATL BAL BOS CHC CIN CLE COL CWS DET FLA HOU  KC  LA MIN MLW NYM
##  35  28  37  35  36  36  36  35  35  33  37  32  34  35  33  33  35  38
## NYY OAK PHI PIT  SD SEA  SF STL  TB TEX TOR WAS
##  32  37  36  35  33  34  34  32  33  35  34  36

table(mlb$Position)
##
##           Catcher Designated_Hitter     First_Baseman        Outfielder
##                76                18                55               194
##    Relief_Pitcher    Second_Baseman         Shortstop  Starting_Pitcher
##               315                58                52               221
##     Third_Baseman
##                45
summary(mlb$Height)
In this case, we have two numerical predictors, two categorical predictors and
1,034 observations. Let’s see how R treats different classes of variables.
10.3.3 Exploring Relationships Among Features:
Observe that cor(y, x) = cor(x, y) and cor(x, x) = 1. Also, our Height variable is
weakly related to the players’ age in a negative manner. This looks very good
and wouldn’t cause any multicollinearity problem. If two of our predictors are
highly correlated, they both provide almost the same information, which could
imply multicollinearity. A common practice is to delete one of them in the model
or use dimensionality reduction methods.
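As a quick check, we can inspect the pairwise correlations of the three numerical variables directly. This is a minimal sketch; the exact call is not shown in this excerpt:

# pairwise Pearson correlations among the numerical MLB variables
cor(mlb[, c("Weight", "Height", "Age")])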
You might get a sense of the data, but it is difficult to see any linear pattern.
We can make a more sophisticated graph using pairs.panels() in the psych package
(Fig. 10.7).
# install.packages("psych")
library(psych)
pairs.panels(mlb[, c("Weight", "Height", "Age")])
This plot provides much more information about the three variables. Above the
diagonal, we have our correlation coefficients in numerical form. On the
diagonal, there are histograms of variables. Below the diagonal, visual information
is presented to help us understand the trend. This specific graph shows that
height and weight are positively and strongly correlated. Also, the relationships
between age and height, as well as, age and weight are very weak, see the
horizontal red line in the panel below the main diagonal graphs, which indicates
weak relationships (Fig. 10.7).
Fig. 10.7 A more detailed pairs plot of MLB players weights, heights and ages
The function we are going to use now is lm(). No additional package is needed
when using this function.
The lm() function has the following components:
m <- lm(dv ~ iv, data = mydata)
• dv: dependent variable
• iv: independent variables. Just like OneR() in Chap. 9, if we use . as iv, then all
of the variables, except the dependent variable (dv), are included as predictors.
• data: specifies the dataset containing both the dependent variable and the independent variables.
fit <- lm(Weight ~ ., data = mlb)
fit
##
## Call:
## lm(formula = Weight ~ ., data = mlb)
##
## Coefficients:
##               (Intercept)                    TeamARZ
##                 -164.9995                     7.1881
##                   TeamATL                    TeamBAL
##                   -1.5631                    -5.3128
##                   TeamBOS                    TeamCHC
##                   -0.2838                     0.4026
##                   TeamCIN                    TeamCLE
##                    2.1051                    -1.3160
##                   TeamCOL                    TeamCWS
##                   -3.7836                     4.2944
##                   TeamDET                    TeamFLA
##                    2.3024                     2.6985
##                   TeamHOU                     TeamKC
##                   -0.6808                    -4.7664
##                    TeamLA                    TeamMIN
##                    2.8598                     2.1269
##                   TeamMLW                    TeamNYM
##                    4.2897                    -1.9736
##                   TeamNYY                    TeamOAK
##                    1.7483                    -0.5464
##                   TeamPHI                    TeamPIT
##                   -6.8486                     4.3023
##                    TeamSD                    TeamSEA
##                    2.6133                    -0.9147
##                    TeamSF                    TeamSTL
##                    0.8411                    -1.1341
##                    TeamTB                    TeamTEX
##                   -2.6616                    -0.7695
##                   TeamTOR                    TeamWAS
##                    1.3943                    -1.7555
## PositionDesignated_Hitter      PositionFirst_Baseman
##                    8.9037                     2.4237
##        PositionOutfielder     PositionRelief_Pitcher
##                   -6.2636                    -7.7695
##    PositionSecond_Baseman          PositionShortstop
##                  -13.0843                   -16.9562
##  PositionStarting_Pitcher      PositionThird_Baseman
##                   -7.3599                    -4.6035
##                    Height                        Age
##                    4.7175                     0.8906
As we can see from the output, factors are included in the model via indicator (dummy) variables, one for each factor level other than the baseline (reference) level. For each numerical variable, a single corresponding model coefficient is estimated.
Fig. 10.9 QQ-normal plot of the residuals, suggesting a linear model may explain the players' weight
summary(fit)
##
## Call:
## lm(formula = Weight ~ ., data = mlb)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -48.692 -10.909  -0.778   9.858  73.649
##
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) -164.9995    19.3828  -8.513  < 2e-16 ***
## TeamARZ        7.1881     4.2590   1.688 0.091777 .
## TeamATL       -1.5631     3.9757  -0.393 0.694278
## TeamBAL       -5.3128     4.0193  -1.322 0.186533
## TeamBOS       -0.2838     4.0034  -0.071 0.943492
## TeamCHC        0.4026     3.9949   0.101 0.919749
## TeamCIN        2.1051     3.9934   0.527 0.598211
## TeamCLE       -1.3160     4.0356  -0.326 0.744423
## TeamCOL       -3.7836     4.0287  -0.939 0.347881
## TeamCWS        4.2944     4.1022   1.047 0.295413
## TeamDET        2.3024     3.9725   0.580 0.562326
## TeamFLA        2.6985     4.1336   0.653 0.514028
## TeamHOU       -0.6808     4.0634  -0.168 0.866976
## TeamKC        -4.7664     4.0242  -1.184 0.236525
## TeamLA         2.8598     4.0817   0.701 0.483686
## TeamMIN        2.1269     4.0947   0.519 0.603579
## TeamMLW        4.2897     4.0243   1.066 0.286706
## TeamNYM       -1.9736     3.9493  -0.500 0.617370
## TeamNYY        1.7483     4.1234   0.424 0.671655
## TeamOAK       -0.5464     3.9672  -0.138 0.890474
## TeamPHI       -6.8486     3.9949  -1.714 0.086778 .
## TeamPIT       ...          (remainder of the coefficient table not shown)
The model summary shows us how well the model fits the data.
Normal Q-Q This plot examines the normality assumption of the model, Fig. 10.9.
The scattered dots represent the matched quantiles of the data and the normal
distribution. If the points fall close to the 45-degree reference line, the normality assumption is plausible. In our case, the residuals track the line fairly closely, so the normality assumption appears reasonable.
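The following is a hedged sketch (the exact plotting calls are not shown in this excerpt) of how the standard lm() diagnostic plots, including the Normal Q-Q panel in Fig. 10.9, can be generated:

# draw the four standard regression diagnostic plots for the fitted model
par(mfrow = c(2, 2))
plot(fit)          # panel 2 is the Normal Q-Q plot of the residuals
par(mfrow = c(1, 1))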
step(fit,direction = "backward")
## Start: AIC=5871.04
## Weight ~ Team + Position + Height +
Age ##
## Df Sum of Sq RSS
AIC ## - Team 29 9468 289262
5847.4 ## <none>
279793 5871.0 ## - Age 1
14090 293883 5919.8 ## - Position 8
20301 300095 5927.5 ## - Height 1
95356 375149 6172.3
##
## Step: AIC=5847.45
## Weight ~ Position + Height + Age
##
## Df Sum of Sq RSS AIC
## <none> 289262
5847.4 ## - Age 1 14616
303877 5896.4 ## - Position 8
20406 309668 5901.9 ## - Height 1
100435 389697 6153.6
##
## Call:
## lm(formula = Weight ~ Position + Height + Age, data =
mlb) ##
## Coefficients:
## (Intercept)
PositionDesignated_Hitter ##
-168.0474 8.6968 ##
PositionFirst_Baseman PositionOutfielder ##
2.7780 -6.0457 ##
PositionRelief_Pitcher PositionSecond_Baseman
## -7.7782
-13.0267 ## PositionShortstop
PositionStarting_Pitcher ##
-16.4821 -7.3961 ##
PositionThird_Baseman Height ##
-4.1361 4.7639
## Age
## 0.8771
step(fit, direction = "forward")
## Start:  AIC=5871.04
## Weight ~ Team + Position + Height + Age
##
## Call:
## lm(formula = Weight ~ Team + Position + Height + Age, data = mlb)
##
## Coefficients:
##               (Intercept)                    TeamARZ
##                 -164.9995                     7.1881
##                   TeamATL                    TeamBAL
##                   -1.5631                    -5.3128
##                   TeamBOS                    TeamCHC
##                   -0.2838                     0.4026
##                   TeamCIN                    TeamCLE
##                    2.1051                    -1.3160
##                   TeamCOL                    TeamCWS
##                   -3.7836                     4.2944
##                   TeamDET                    TeamFLA
##                    2.3024                     2.6985
##                   TeamHOU                     TeamKC
##                   -0.6808                    -4.7664
##                    TeamLA                    TeamMIN
##                    2.8598                     2.1269
##                   TeamMLW                    TeamNYM
##                    4.2897                    -1.9736
##                   TeamNYY                    TeamOAK
##                    1.7483                    -0.5464
##                   TeamPHI                    TeamPIT
##                   -6.8486                     4.3023
##                    TeamSD                    TeamSEA
##                    2.6133                    -0.9147
##                    TeamSF                    TeamSTL
##                    0.8411                    -1.1341
##                    TeamTB                    TeamTEX
##                   -2.6616                    -0.7695
##                   TeamTOR                    TeamWAS
##                    1.3943                    -1.7555
## PositionDesignated_Hitter      PositionFirst_Baseman
##                    8.9037                     2.4237
##        PositionOutfielder     PositionRelief_Pitcher
##                   -6.2636                    -7.7695
##    PositionSecond_Baseman          PositionShortstop
##                  -13.0843                   -16.9562
##  PositionStarting_Pitcher      PositionThird_Baseman
##                   -7.3599                    -4.6035
##                    Height                        Age
##                    4.7175                     0.8906
step(fit, direction = "both")
## Start:  AIC=5871.04
## Weight ~ Team + Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## - Team     29      9468 289262 5847.4
## <none>                  279793 5871.0
## - Age       1     14090 293883 5919.8
## - Position  8     20301 300095 5927.5
## - Height    1     95356 375149 6172.3
##
## Step:  AIC=5847.45
## Weight ~ Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## <none>                  289262 5847.4
## + Team     29      9468 279793 5871.0
## - Age       1     14616 303877 5896.4
## - Position  8     20406 309668 5901.9
## - Height    1    100435 389697 6153.6
##
## Call:
## lm(formula = Weight ~ Position + Height + Age, data = mlb)
##
## Coefficients:
##               (Intercept)  PositionDesignated_Hitter
##                 -168.0474                     8.6968
##     PositionFirst_Baseman         PositionOutfielder
##                    2.7780                    -6.0457
##    PositionRelief_Pitcher     PositionSecond_Baseman
##                   -7.7782                   -13.0267
##         PositionShortstop   PositionStarting_Pitcher
##                  -16.4821                    -7.3961
##     PositionThird_Baseman                     Height
##                   -4.1361                     4.7639
##                       Age
##                    0.8771
We can observe that forward selection retains the whole model, while the backward step-wise selection yields a more parsimonious model. Both backward and forward selection are greedy algorithms, and neither guarantees an optimal result. Exhaustive feature selection would require exploring every possible subset of the k predictors, which is generally infeasible because of the exponential computational complexity (2^k possible subsets).
step(fit, k = 2)
## Start:  AIC=5871.04
## Weight ~ Team + Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## - Team     29      9468 289262 5847.4
## <none>                  279793 5871.0
## - Age       1     14090 293883 5919.8
## - Position  8     20301 300095 5927.5
## - Height    1     95356 375149 6172.3
##
## Step:  AIC=5847.45
## Weight ~ Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## <none>                  289262 5847.4
## - Age       1     14616 303877 5896.4
## - Position  8     20406 309668 5901.9
## - Height    1    100435 389697 6153.6
##
## Call:
## lm(formula = Weight ~ Position + Height + Age, data = mlb)
##
## Coefficients:
##               (Intercept)  PositionDesignated_Hitter
##                 -168.0474                     8.6968
##     PositionFirst_Baseman         PositionOutfielder
##                    2.7780                    -6.0457
##    PositionRelief_Pitcher     PositionSecond_Baseman
##                   -7.7782                   -13.0267
##         PositionShortstop   PositionStarting_Pitcher
##                  -16.4821                    -7.3961
##     PositionThird_Baseman                     Height
##                   -4.1361                     4.7639
##                       Age
##                    0.8771
step(fit, k = log(nrow(mlb)))
## Start:  AIC=6068.69
## Weight ~ Team + Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## - Team     29      9468 289262 5901.8
## <none>                  279793 6068.7
## - Position  8     20301 300095 6085.6
## - Age       1     14090 293883 6112.5
## - Height    1     95356 375149 6365.0
##
## Step:  AIC=5901.8
## Weight ~ Position + Height + Age
##
##            Df Sum of Sq    RSS    AIC
## <none>                  289262 5901.8
## - Position  8     20406 309668 5916.8
## - Age       1     14616 303877 5945.8
## - Height    1    100435 389697 6203.0
##
## Call:
## lm(formula = Weight ~ Position + Height + Age, data = mlb)
##
## Coefficients:
##               (Intercept)  PositionDesignated_Hitter
##                 -168.0474                     8.6968
##     PositionFirst_Baseman         PositionOutfielder
##                    2.7780                    -6.0457
##    PositionRelief_Pitcher     PositionSecond_Baseman
##                   -7.7782                   -13.0267
##         PositionShortstop   PositionStarting_Pitcher
##                  -16.4821                    -7.3961
##     PositionThird_Baseman                     Height
##                   -4.1361                     4.7639
##                       Age
##                    0.8771
Setting k = 2 yields the AIC criterion, whereas k = log(n) corresponds to BIC. Let's evaluate the model performance again (Figs. 10.10 and 10.11).
halfnorm(lm.influence(fit)$hat, nlab = 2, ylab = "Leverages")
mlb[c(226, 879), ]
##     Team          Position Height Weight   Age
## 226  NYY Designated_Hitter     75    230 36.14
## 879   SD Designated_Hitter     73    200 25.60

summary(mlb)
##       Team                 Position       Height         Weight
##  NYM    : 38   Relief_Pitcher  :315   Min.   :67.0   Min.   :150.0
##  ATL    : 37   Starting_Pitcher:221   1st Qu.:72.0   1st Qu.:187.0
##  DET    : 37   Outfielder      :194   Median :74.0   Median :200.0
##  OAK    : 37   Catcher         : 76   Mean   :73.7   Mean   :201.7
##  BOS    : 36   Second_Baseman  : 58   3rd Qu.:75.0   3rd Qu.:215.0
##  CHC    : 36   First_Baseman   : 55   Max.   :83.0   Max.   :290.0
##  (Other):813   (Other)         :115
##       Age
##  Min.   :20.90
##  1st Qu.:25.44
##  Median :27.93
##  Mean   :28.74
##  3rd Qu.:31.23
##  Max.   :48.52
A deeper discussion of variable selection, controlling the false discovery rate, is
provided in Chaps. 17 and 18.
10.4.1 Model Specification: Adding Non-linear Relationships
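The model below uses a quadratic term, age2, that is not constructed in this excerpt. A minimal sketch of one way to define it, assuming age2 denotes the squared Age term used to capture a non-linear age effect:

# assumption: age2 is the squared Age, added as a new column of mlb
mlb$age2 <- (mlb$Age)^2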
fit4 <- lm(Weight ~ Team + Height + Age*Position + age2, data = mlb)
summary(fit4)
## Call:
## lm(formula = Weight ~ Team + Height + Age * Position + age2,
##     data = mlb)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -48.761 -11.049  -0.761   9.911  75.533
##
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)
## (Intercept)               -199.15403   29.87269  -6.667 4.35e-11 ***
## TeamARZ                      8.10376    4.26339   1.901   0.0576 .
## TeamATL                     -0.81743    3.97899  -0.205   0.8373
## TeamBAL                     -4.64820    4.03972  -1.151   0.2502
## TeamBOS                      0.37698    4.00743   0.094   0.9251
## TeamCHC                      0.33104    3.99507   0.083   0.9340
## TeamCIN                      2.56023    3.99603   0.641   0.5219
## TeamCLE                     -0.66254    4.03154  -0.164   0.8695
## TeamCOL                     -3.72098    4.03759  -0.922   0.3570
## TeamCWS                      4.63266    4.10884   1.127   0.2598
## TeamDET                      3.21380    3.98231   0.807   0.4199
## TeamFLA                      3.56432    4.14902   0.859   0.3905
## TeamHOU                     -0.38733    4.07249  -0.095   0.9242
## TeamKC                      -4.66678    4.02384  -1.160   0.2464
## TeamLA                       3.51766    4.09400   0.859   0.3904
## TeamMIN                      2.31585    4.10502   0.564   0.5728
## TeamMLW                      4.34793    4.02501   1.080   0.2803
## TeamNYM                     -0.28505    3.98537  -0.072   0.9430
## TeamNYY                      1.87847    4.12774   0.455   0.6491
## TeamOAK                     -0.23791    3.97729  -0.060   0.9523
## TeamPHI                     -6.25671    3.99545  -1.566   0.1177
## TeamPIT                      4.18719    4.01944   1.042   0.2978
## TeamSD                       2.97028    4.08838   0.727   0.4677
## TeamSEA                     -0.07220    4.05922  -0.018   0.9858
## TeamSF                       1.35981    4.07771   0.333   0.7388
## TeamSTL                     -1.23460    4.11960  -0.300   0.7645
## TeamTB                      -1.90885    4.09592  -0.466   0.6413
## TeamTEX                     -0.31570    4.03146  -0.078   0.9376
## TeamTOR                      1.73976    4.08565   0.426   0.6703
## TeamWAS                     -1.43933    4.00274  -0.360   0.7192
## Height                       4.70632    0.25646  18.351  < 2e-16 ***
## Age                          3.32733    1.37088   2.427   0.0154 *
## PositionDesignated_Hitter  -44.82216   30.68202  -1.461   0.1444
## PositionFirst_Baseman       23.51389   20.23553   1.162   0.2455
## PositionOutfielder         -13.33140   15.92500  -0.837   0.4027
## PositionRelief_Pitcher     -16.51308   15.01240  -1.100   0.2716
## PositionSecond_Baseman     -26.56932   20.18773  -1.316   ...   (remainder of the output not shown)
Numeric prediction trees are built in the same way as classification trees. Data
are partitioned first via a divide-and-conquer strategy based on features. Recall
that homogeneity in classification trees may be assessed by measures like entropy. In prediction trees, homogeneity is measured by statistics such as the variance, standard deviation, or absolute deviation from the mean.
A common splitting criterion for regression trees is the standard deviation
reduction (SDR).
$$SDR = sd(T) - \sum_{i=1}^{n} \frac{|T_i|}{|T|}\, sd(T_i),$$
where sd(T) is the standard deviation of the original data, the summation runs over all segments, |T_i|/|T| is the proportion of observations falling in the i-th segment relative to the total number of observations, and sd(T_i) is the standard deviation of the i-th segment. Let's look at one simple example.
Original data: {1, 2, 3, 3, 4, 5, 6, 6, 7, 8},
Split method 1: {1, 2, 3 | 3, 4, 5, 6, 6, 7, 8}, and
Split method 2: {1, 2, 3, 3, 4, 5 | 6, 6, 7, 8}.
In split method 1, T1 = {1, 2, 3} and T2 = {3, 4, 5, 6, 6, 7, 8}. In split method 2, T1 = {1, 2, 3, 3, 4, 5} and T2 = {6, 6, 7, 8}.
ori<-c(1, 2, 3, 3, 4, 5, 6, 6, 7, 8)
at1<-c(1, 2, 3)
at2<-c(3, 4, 5, 6, 6, 7, 8)
bt1<-c(1, 2, 3, 3, 4, 5)
bt2<-c(6, 6, 7, 8)
sdr_a <- sd(ori) - (length(at1)/length(ori)*sd(at1) + length(at2)/length(ori)*sd(at2))
sdr_b <- sd(ori) - (length(bt1)/length(ori)*sd(bt1) + length(bt2)/length(ori)*sd(bt2))
sdr_a
## [1] 0.7702557
sdr_b
## [1] 1.041531
length() is used in the above R codes to get the number of elements in a
specific vector.
Larger SDR indicates greater reduction in standard deviation after splitting.
Here, split method 2 yields greater SDR, so the regression tree split will use the
second method, which results in more homogeneous sets than the first method.
Now, the tree continues splitting within bt1 and bt2 following the same rule (larger SDR wins). When no further split is possible, bt1 and bt2 become terminal nodes. Observations falling into bt1 are predicted with mean(bt1) = 3, and those falling into bt2 with mean(bt2) = 6.75.
We will continue with the MLB dataset, which includes 1,034 observations. Let’s
try to randomly separate them into training and testing datasets first.
set.seed(1234)
train_index <- sample(seq_len(nrow(mlb)), size = 0.75*nrow(mlb))
mlb_train <- mlb[train_index, ]
mlb_test <- mlb[-train_index, ]
We used a random 75–25% split to divide the data into training and testing sets.
10.6.2 Step 3: Training a Model On the Data
In R, the function rpart(), from the rpart package, provides regression tree modeling:
m <- rpart(dv ~ iv, data = mydata)
• dv: dependent variable
• iv: independent variable
• mydata: training data containing dv and iv.
We use two numerical features of the MLB data ("01a_data.txt"), Age and Height, as predictors.
#install.packages("rpart")
library(rpart)
mlb.rpart<-rpart(Weight~Height+Age, data=mlb_train)
mlb.rpart
## n= 775
##
## node), split, n, deviance, yval
## * denotes terminal node
##
## 1) root 775 323502.600 201.4361
## 2) Height< 73.5 366 112465.500 192.5000
## 4) Height< 70.5 55 9865.382 178.7818 *
## 5) Height>=70.5 311 90419.300 194.9260
## 10) Age< 31.585 234 71123.060 192.8547 *
## 11) Age>=31.585 77 15241.250 201.2208 *
## 3) Height>=73.5 409 155656.400 209.4328
## 6) Height< 76.5 335 118511.700 206.8627
## 12) Age< 28.6 194 75010.250 202.2938
## 24) Height< 74.5 76 20688.040 196.8026 *
## 25) Height>=74.5 118 50554.610 205.8305 *
##      13) Age>=28.6 141 33879.870 213.1489 *
##    7) Height>=76.5 74 24914.660 221.0676
##     14) Age< 25.37 12  3018.000 206.0000 *
##     15) Age>=25.37 62 18644.980 223.9839 *
The output contains rich information. split indicates the decision criterion; n is
the number of observations that fall in this segment; yval is the predicted value if
the test data falls into a segment.
10.6.3 Visualizing Decision Trees
A fancier way of drawing the rpart decision tree is via the rpart.plot() function from the rpart.plot package (Fig. 10.13).
# install.packages("rpart.plot")
library(rpart.plot)
rpart.plot(mlb.rpart, digits = 3)
Fig. 10.14 Expanding the decision tree by specifying significant digits, drawing separate split
labels for the left and right directions, displaying the number and percentage of observations in
the node, and positioning the leaf nodes at the bottom of the graph
A more detailed graph can be obtained by specifying more options in the
function call (Fig. 10.14).
We may also use a more elaborate tree plot from package rattle to observe the
order and rules of splits (Fig. 10.15).
library(rattle)
fancyRpartPlot(mlb.rpart, cex = 0.8)
Let’s make predictions with the regression tree model using the predict()
command.
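The prediction object mlb.p used below is not created in this excerpt; a minimal sketch, assuming the fitted rpart model and the test set defined above:

# generate regression-tree predictions for the test set
mlb.p <- predict(mlb.rpart, mlb_test)
summary(mlb.p)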
summary(mlb_test$Weight)
cor(mlb.p, mlb_test$Weight)
## [1] 0.4940257
The predicted values are moderately correlated with the true values.
10.6.5 Measuring Performance with Mean Absolute Error
To measure the distance between the predicted and the true values, we can use a measure called the mean absolute error (MAE), defined by the formula:
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \lvert pred_i - obs_i \rvert,$$
where pred_i is the i-th predicted value and obs_i is the i-th observed value. Let's write a corresponding MAE function in R and evaluate our model performance.
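A minimal sketch of such an MAE function (the book's exact implementation is not shown in this excerpt):

# mean absolute error between observed and predicted values
MAE <- function(obs, pred) {
  mean(abs(obs - pred))
}
MAE(mlb_test$Weight, mlb.p)    # the text reports an MAE of about 14.975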
This implies that on average, the difference between the predicted value and the
observed value is 14.975. Considering that the Weight variable in our test dataset
ranges from 150 to 260, the model is reasonable.
What if we had used the most primitive method for prediction, the test-data mean?
mean(mlb_test$Weight)
## [1] 202.3643
MAE(mlb_test$Weight, 202.3643)
## [1] 17.11207
This shows that the regression decision tree is better than using the mean to
predict every observation in the test dataset. However, it is not dramatically better.
There might be room for improvement.
10.6.6 Step 5: Improving Model Performance
To improve the performance of our decision tree, we are going to use a model tree
instead of a regression tree. We can use the M5P() function, from the RWeka package, which implements the M5 algorithm. This function uses a similar syntax to rpart():
m <- M5P(dv ~ iv, data = mydata)
#install.packages("RWeka")
mlb.m5<-M5P(Weight~Height+Age, data=mlb_train)
mlb.m5
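The correlation reported next comes from applying the model tree to the test set; a hedged sketch of the omitted prediction step:

# predict test-set weights with the M5 model tree and compare to the observed values
mlb.p.m5 <- predict(mlb.m5, mlb_test)
cor(mlb.p.m5, mlb_test$Weight)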
## [1] 0.5500171

MAE(mlb_test$Weight, mlb.p.m5)
## [1] 14.07716
summary(mlb.m5) reports some rough diagnostic statistics. We can see that the
correlation and MAE for this model are better than the previous rpart() model.
heart_attack$CHARGES <- as.numeric(heart_attack$CHARGES)
heart_attack <- heart_attack[complete.cases(heart_attack), ]
heart_attack$gender <- ifelse(heart_attack$SEX == "F", 1, 0)
heart_attack <- heart_attack[, -3]
Next, we can build a model tree using M5P() with all the features in the model.
As usual, we need to separate the heart_attack data into training and test datasets
(e.g., use the 75–25% random split).
Using the model to predict CHARGES in the test dataset, we can obtain the
following correlation and MAE.
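A minimal sketch of these omitted steps; the object names ha_train/ha_test and the exact split are assumptions consistent with the surrounding text:

set.seed(1234)
# 75-25% random split of the heart attack data
idx <- sample(seq_len(nrow(heart_attack)), size = 0.75 * nrow(heart_attack))
ha_train <- heart_attack[idx, ]
ha_test  <- heart_attack[-idx, ]

# model tree for hospitalization charges using all remaining features
ha.m5 <- M5P(CHARGES ~ ., data = ha_train)
ha.pred <- predict(ha.m5, ha_test)
cor(ha.pred, ha_test$CHARGES)
MAE(ha_test$CHARGES, ha.pred)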
## [1] 0.5616003
## [1] 3193.502
We can see that the predicted and observed values are moderately-to-strongly correlated (r ≈ 0.56). The MAE, on the other hand, may seem very large at first glance.
range(ha_test$CHARGES)
# range width: 17137 - 701 = 16436
# 3193.502/16436 ~ 0.19
However, the test data itself has a wide range, and the MAE is within 20% of
the range. With only 148 observations, the model represents a fairly good
prediction of the expected hospital-stay charges. Try to reproduce these results and apply the same techniques to other data from the list of our Case-Studies.
References
Fahrmeir, L, Kneib, T, Lang, S, Marx, B. (2013) Regression: Models, Methods and Applications,
Springer Science & Business Media, ISBN 3642343333, 9783642343339.
Hyndman, RJ, Athanasopoulos, G. (2014) Forecasting: principles and practice, OTexts, ISBN
0987507109, 9780987507105.
Zhao, Y. (2012) R and Data Mining: Examples and Case Studies, Academic Press, ISBN
012397271X, 9780123972712.
http://wiki.socr.umich.edu/index.php/EBook#Chapter_X:_Correlation_and_Regression.
Chapter 11
Black Box Machine-Learning Methods: Neural Networks and Support Vector Machines
$$y(x) = f\left(\sum_{i=1}^{n} w_i x_i\right).$$
A hard threshold activation function is
$$f(x) = \begin{cases} 0, & x < 0 \\ 1, & x \geq 0 \end{cases}.$$
This is the simplest form of activation function. It is rarely used in real-world situations. The most commonly used alternative is the sigmoid activation function, $f(x) = \frac{1}{1+e^{-x}}$, where e is Euler's number, the base of the natural logarithm. The output signal is no longer binary, but can be any real number ranging from 0 to 1 (Fig. 11.2).
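To make the two activation functions concrete, here is a small illustrative sketch (not from the text) comparing the hard threshold and the sigmoid:

# hard-threshold (unit step) and sigmoid activation functions
f_threshold <- function(x) ifelse(x < 0, 0, 1)
f_sigmoid   <- function(x) 1 / (1 + exp(-x))

x <- seq(-5, 5, by = 0.1)
plot(x, f_sigmoid(x), type = "l", col = "blue", ylab = "f(x)")
lines(x, f_threshold(x), type = "s", col = "red")
legend("topleft", c("sigmoid", "hard threshold"), col = c("blue", "red"), lty = 1)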
Fig. 11.1 An example of a hard threshold activation function, f(x)
11.1 Understanding Neural Networks
Number of layers: The x’s or features in the dataset are called input nodes while
the predicted values are called the output nodes. Multilayer networks include
multiple hidden layers. Figure 11.4 shows a two layer neural network.
When we have multiple layers, the information flow could be complicated.
The arrows in the last graph (with multiple layers) suggest a feed forward
network. In such a network, we can also have multiple outcomes modeled
simultaneously
(Fig. 11.5).
Alternatively, in a recurrent network (feedback network), information can also
travel backwards in loops (with delays). This is illustrated in Fig. 11.6, where the short-term memory increases the power of recurrent networks dramatically. However, in practice, recurrent networks are rarely used.
The numbers of input and output nodes are predetermined by the dataset and the predictor variables. What we can specify is the number of hidden nodes in the model. To keep the model simple, the goal is to use the smallest number of hidden nodes that still yields reasonable performance.
This algorithm determines the weights in the model using the strategy of back-propagating errors. First, we assign random (but non-trivial) initial weights, for example drawn from a normal distribution or any other random process. Then, we adjust the weights iteratively, repeating the process until a convergence or stopping criterion is met. Each iteration contains two phases.
• Forward phase: from input layer to output layer using current weights.
Outputs are produced at the end of this phase, and
• Backward phase: compare the outputs and true target values. If the difference
is significant, we change the weights and go through the forward phase,
again.
In the end, we pick a set of weights, which correspond to the least total error,
to be the final weights in our network.
11.2 Case Study 1: Google Trends and the Stock Market: Regression
In this case study, we are going to use the Google trends and stock market
dataset. A doc file with the meta-data and the CSV data are available on the
Case-Studies Canvas Site. These daily data (between 2008 and 2009) can be used
to examine the associations between Google search trends and the daily market index, the Dow Jones Industrial Average (DJI).
Variables
Here we use the RealEstate as our dependent variable. Let’s see if the Google
Real Estate Index could be predicted by other variables in the dataset.
google <- read.csv("https://umich.instructure.com/files/416274/download?download_frd=1",
                   stringsAsFactors = F)
Let’s delete the first two columns, since the only goal is to predict Google
Real Estate Index with other indexes and DJI.
google<-google[, -c(1, 2)]
str(google)
## 'data.frame': 731 obs. of 24 variables:
##  $ Unemployment      : num 1.54 1.56 1.59 1.62 1.64 1.64 1.71 1.85 1.82 1.78 ...
##  $ Rental            : num 0.88 0.9 0.92 0.92 0.94 0.96 0.99 1.02 1.02 1.01 ...
##  $ RealEstate        : num 0.79 0.81 0.82 0.82 0.83 0.84 0.86 0.89 0.89 0.89 ...
##  $ Mortgage          : num 1 1.05 1.07 1.08 1.1 1.11 1.15 1.22 1.23 1.24 ...
##  $ Jobs              : num 0.99 1.05 1.1 1.14 1.17 1.2 1.3 1.41 1.43 1.44 ...
##  $ Investing         : num 0.92 0.94 0.96 0.98 0.99 0.99 1.02 1.09 1.1 1.1 ...
##  $ DJI_Index         : num 13044 13044 13057 12800 12827 ...
##  $ StdDJI            : num 4.3 4.3 4.31 4.14 4.16 4.16 4.16 4 4.1 4.17 ...
##  $ Unemployment_30MA : num 1.37 1.37 1.38 1.38 1.39 1.4 1.4 1.42 1.43 1.44 ...
##  $ Rental_30MA       : num 0.72 0.72 0.73 0.73 0.74 0.75 0.76 0.77 0.78 0.79 ...
##  $ RealEstate_30MA   : num 0.67 0.67 0.68 0.68 0.68 0.69 0.7 0.7 0.71 0.72 ...
##  $ Mortgage_30MA     : num 0.98 0.97 0.97 0.97 0.98 0.98 0.98 0.99 0.99 1 ...
##  $ Jobs_30MA         : num 1.06 1.06 1.05 1.05 1.05 1.05 1.05 1.06 1.07 1.08 ...
##  $ Investing_30MA    : num 0.99 0.98 0.98 0.98 0.98 0.97 0.97 0.97 0.98 0.98 ...
##  $ DJI_Index_30MA    : num 13405 13396 13390 13368 13342 ...
##  $ StdDJI_30MA       : num 4.54 4.54 4.53 4.52 4.5 4.48 4.46 4.44 4.41 4.4 ...
##  $ Unemployment_180MA: num 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 1.44 ...
##  $ Rental_180MA      : num 0.87 0.87 0.87 0.87 0.87 0.87 0.86 0.86 0.86 0.86 ...
##  $ RealEstate_180MA  : num 0.89 0.89 0.88 0.88 0.88 0.88 0.88 0.88 0.88 0.87 ...
##  $ Mortgage_180MA    : num 1.18 1.18 1.18 1.18 1.17 1.17 1.17 1.17 1.17 1.17 ...
##  $ Jobs_180MA        : num 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 1.24 ...
##  $ Investing_180MA   : num 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 1.04 ...
##  $ DJI_Index_180MA   : num 13493 13492 13489 13486 13482 ...
##  $ StdDJI_180MA      : num 4.6 4.6 4.6 4.6 4.59 4.59 4.59 4.58 4.58 4.58 ...
As we can see from the structure of the data, these indices and DJI have
different ranges. We should rescale the data. In Chap. 6, we learned that
normalizing these features using our own normalize() function provides one
solution. We can use lapply() to apply the normalize() function to each column.
normalize <- function(x) {
  return((x - min(x)) / (max(x) - min(x)))
}
google_norm<-as.data.frame(lapply(google, normalize))
summary(google_norm$RealEstate)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0000 0.4615 0.6731 0.6292 0.8077 1.0000
Looks like all the vectors are normalized into the [0, 1] range.
The next step would be to split the google dataset into training and testing
subsets. This time we will use the sample() and floor() functions to separate the training and testing sets. sample() generates a set of random row indices that we can use to subset the original dataset. floor() takes a number x and returns the largest integer not exceeding x.
sample(row, size)
• row: the rows of the dataset to sample from. To select from all rows, you can use nrow(data) or 1:nrow(data) (a single number or a vector).
• size: the number of rows you want in your subset.
sub <- sample(nrow(google_norm), floor(nrow(google_norm)*0.75))
google_train <- google_norm[sub, ]
google_test <- google_norm[-sub, ]
We are good to go and can move forward to the model training phase.
m <- neuralnet(target ~ predictors, data = mydata, hidden = 1)
p <- compute(m, test)
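The simpler benchmark model referenced below ("this more elaborate ANN model") is not shown in this excerpt; a hedged sketch of a single-hidden-node fit and its evaluation, assuming the neuralnet package is loaded:

# install.packages("neuralnet"); library(neuralnet)
google_model <- neuralnet(RealEstate ~ Unemployment + Rental + Mortgage + Jobs +
                            Investing + DJI_Index + StdDJI,
                          data = google_train, hidden = 1)
google_pred <- compute(google_model, google_test[, c(1:2, 4:8)])
cor(google_pred$net.result, google_test$RealEstate)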
This time we will include four hidden nodes in the model. Let’s see what results
we can get from this more elaborate ANN model (Fig. 11.8).
google_model2 <- neuralnet(RealEstate ~ Unemployment + Rental + Mortgage + Jobs +
                             Investing + DJI_Index + StdDJI,
                           data = google_train, hidden = 4)
plot(google_model2)
We observe an even lower error when using three hidden layers with 4, 3, and 3 nodes, respectively.
google_model2 <- neuralnet(RealEstate ~ Unemployment + Rental + Mortgage + Jobs +
                             Investing + DJI_Index + StdDJI,
                           data = google_train, hidden = c(4, 3, 3))
google_pred2 <- compute(google_model2, google_test[, c(1:2, 4:8)])
pred_results2 <- google_pred2$net.result
cor(pred_results2, google_test$RealEstate)
## [,1]
## [1,] 0.9853727545
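The objects test_data, test_data_sqrt, and pred_sqrt used in the plot below come from an omitted square-root learning example; a minimal sketch of how they could be constructed (the details are assumptions):

set.seed(1234)
# training data: random inputs in [0, 1] and their square roots
rand_data <- runif(1000, 0, 1)
sqrt_df <- data.frame(rand_data, sqrt_data = sqrt(rand_data))

# small feed-forward network that learns the square-root function
net.sqrt <- neuralnet(sqrt_data ~ rand_data, data = sqrt_df, hidden = 10)

# test inputs, their true square roots, and the network predictions
test_data <- seq(0, 1, 0.01)
test_data_sqrt <- sqrt(test_data)
pred_sqrt <- compute(net.sqrt, data.frame(rand_data = test_data))$net.result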
plot(test_data, test_data_sqrt)
lines(test_data, pred_sqrt, pch = 22, col = "red", lty = 2)
legend("bottomright", c("Actual SQRT", "Predicted SQRT"),
       lty = c(1, 2), lwd = c(2, 2), col = c("black", "red"))
We observe that the NN, net.sqrt, actually learns and predicts the square root function quite accurately, Figs. 11.10 and 11.11. Of course, individual results may vary, because we randomly generate the training data (rand_data) and because of the stochastic construction of the ANN.
11.4 Case Study 2: Google Trends and the Stock Market – Classification
In practice, ANN models are also useful as classifiers. Let’s demonstrate this by
using again the Stock Market data. We will binarize the samples according to their RealEstate values: records above the 75th percentile are labeled 0, records below the 25th percentile are labeled 1, and all others are labeled 2. Even in the classification setting, the response still must be numeric.
google_class <- google_norm
id1 <- which(google_class$RealEstate > quantile(google_class$RealEstate, 0.75))
id2 <- which(google_class$RealEstate < quantile(google_class$RealEstate, 0.25))
id3 <- setdiff(1:nrow(google_class), union(id1, id2))
google_class$RealEstate[id1] <- 0
google_class$RealEstate[id2] <- 1
google_class$RealEstate[id3] <- 2
summary(as.factor(google_class$RealEstate))
## 0 1 2
## 179 178 374
Here, we divide the data into training and testing sets. We also need three additional column indicators corresponding to the three derived RealEstate labels.
set.seed(2017)
train <- sample(1:nrow(google_class), 0.7*nrow(google_class))
google_tr <- google_class[train, ]
google_ts <- google_class[-train, ]

train_x <- google_tr[, c(1:2, 4:8)]
train_y <- google_tr[, 3]
colnames(train_x)
## [1] "Unemployment" "Rental"    "Mortgage" "Jobs"
## [5] "Investing"    "DJI_Index" "StdDJI"

test_x <- google_ts[, c(1:2, 4:8)]
test_y <- google_ts[3]
train_y_ind <- model.matrix(~factor(train_y) - 1)
colnames(train_y_ind) <- c("High", "Median", "Low")
train <- cbind(train_x, train_y_ind)
We use non-linear output and display every 2,000 iterations.
nn_single <- neuralnet(High + Median + Low ~ Unemployment + Rental + Mortgage + Jobs +
                         Investing + DJI_Index + StdDJI,
                       data = train, hidden = 4, linear.output = FALSE,
                       lifesign = 'full', lifesign.step = 2000)
## hidden: 4    thresh: 0.01    rep: 1/1    steps:   2000  min thresh: 0.1370201548
##                                                   4000  min thresh: 0.08524054094
##                                                   6000  min thresh: 0.08524054094
##                                                   8000  min thresh: 0.08524054094
##                                                  10000  min thresh: 0.08524054094
## ...
##                                                  40000  min thresh: 0.02427719823
##                                                  42000  min thresh: 0.02158221449
##                                                  44000  min thresh: 0.01831644589
##                                                  46000  min thresh: 0.01682874388
##                                                  48000  min thresh: 0.01572773551
##                                                  50000  min thresh: 0.01311388938
##                                                  52000  min thresh: 0.01241004281
##                                                  54000  min thresh: 0.01131407008
##                                                  55420  error: 7.01191  time: 19.33 secs
Below is the prediction function translating this model to generate forecasting
results.
pred <- function(nn, dat) {
  # compute uses the trained neural net (nn = nn_single) and
  # new testing data (dat = google_ts) to generate predictions (y_hat)
  # compute returns a list containing:
  # (1) neurons: a list of the neurons' output for each layer of the neural network, and
  # (2) net.result: a matrix containing the overall result of the neural network.
  yhat <- compute(nn, dat)$net.result
  # (assumption) the remainder of this function was truncated in the source;
  # map the three output-node activations back to a single 0/1/2 class label
  # by picking the column with the maximal activation for each observation
  yhat <- apply(yhat, 1, which.max) - 1
  return(yhat)
}
plot(nn_single)
Similarly, we can change hidden to utilize multiple hidden layers. However, a
more complicated model won’t necessarily guarantee an improved
performance.
nn_single <- neuralnet(High + Median + Low ~ Unemployment + Rental + Mortgage + Jobs +
                         Investing + DJI_Index + StdDJI,
                       data = train, hidden = c(4, 5), linear.output = FALSE,
                       lifesign = 'full', lifesign.step = 2000)
## hidden: 4, 5    thresh: 0.01    rep: 1/1    steps:   2000  min thresh: 0.307
##                                                      4000  min thresh: 0.2875517033
##                                                      6000  min thresh: 0.1383720887
##                                                      8000  min thresh: 0.1115440575
##                                                     10000  min thresh: 0.09233958192
##                                                     12000  min thresh: 0.0766173347
##                                                     14000  min thresh: 0.05763223509
##                                                     16000  min thresh: 0.03417989426
##                                                     18000  min thresh: 0.01473872843
##                                                     20000  min thresh: 0.01101646653
##                                                     20741  error: 7.00627  time: 11.3 secs

mean(pred(nn_single, google_ts[, c(1:2, 4:8)]) != as.factor(google_ts[, 3]))
## [1] 0.03181818182
11.5 Support Vector Machines (SVM)
Recall that the lazy learning methods in Chap. 7 assigned class labels using geometric distances between features. In multidimensional spaces (multiple features), we can think of spheres with centers determined by the training dataset and assign labels to testing data according to their nearest spherical center. Let's see if we can choose other hypersurfaces that separate n-dimensional data and induce a classification scheme.
The easiest shape would be a plane. Support Vector Machine (SVM) can use
hyperplanes to separate data into several groups or classes. This is used for
datasets that are linearly separable. Assuming that we have only two features, would you use hyperplane A or B to separate the data in Fig. 11.12? Perhaps even another hyperplane, C?
To answer the above question, we need to search for the Maximum Margin
Hyperplane (MMH). That is the hyperplane that creates the greatest separation
between the two closest observations.
We define support vectors as the points from each class that are closest to
the MMH. Each class must have at least one observation as a support vector.
Using support vectors alone is not sufficient for finding the MMH. Although
tricky mathematical calculations are involved, the fundamental process is fairly
simple. Let’s look at linearly separable data and non-linearly separable data
individually.
If the dataset is linearly separable, we can find the outer boundaries of our two
groups of data points. These boundaries are called convex hull (red lines in the
following graph). The MMH (black solid line) is the line that is perpendicular to
the shortest line between the two convex hulls (Fig. 11.13).
In vector notation, a separating hyperplane can be written as
$$\vec{w} \cdot \vec{x} + b = 0.$$
With three features, this is the familiar plane equation
$$ax + by + cz + d = 0,$$
or equivalently
$$w_1 x_1 + w_2 x_2 + w_3 x_3 + b = 0.$$
The two margin hyperplanes can be expressed as
$$\vec{w} \cdot \vec{x} + b \geq +1$$
and
$$\vec{w} \cdot \vec{x} + b \leq -1.$$
We require that all observations from the first class fall above the first plane and all observations from the other class fall below the second plane. The distance between the two planes is
$$\frac{2}{\lVert \vec{w} \rVert},$$
where $\lVert \vec{w} \rVert$ is the Euclidean norm of $\vec{w}$. To maximize this distance, we minimize $\lVert \vec{w} \rVert$ subject to
$$y_i\left(\vec{w} \cdot \vec{x}_i + b\right) \geq 1, \ \forall \vec{x}_i,$$
where $\forall$ means "for all". For each nonlinear programming problem, called the primal problem, there is a related nonlinear programming problem, called the Lagrangian dual problem. Under certain assumptions for convexity and suitable constraints, the primal and dual problems have equal optimal objective values.
Primal optimization problems are typically described as:
$$\min_{x} f(x)$$
subject to
$$g_i(x) \leq 0, \quad h_j(x) = 0.$$
The associated Lagrangian dual problem is
$$\max_{u, v} \ \theta(u, v),$$
subject to $u \geq 0$, where $\theta(u, v)$ denotes the Lagrangian dual objective function.
In the linearly separable SVM setting, the primal Lagrangian is
$$L_P = \frac{1}{2}\lVert \vec{w} \rVert^2 - \sum_{i=1}^{n} \alpha_i \left[ y_i\left(w_0 + \vec{x}_i^{T} \vec{w}\right) - 1 \right], \quad \text{where } \alpha_i \geq 0.$$
To optimize that objective function, we can set the partial derivatives equal to zero:
$$\frac{\partial L_P}{\partial \vec{w}}: \ \vec{w} = \sum_{i=1}^{n} \alpha_i y_i \vec{x}_i, \qquad
\frac{\partial L_P}{\partial w_0}: \ 0 = \sum_{i=1}^{n} \alpha_i y_i.$$
Substituting these back into $L_P$ yields the Lagrangian dual
$$L_D = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{i'=1}^{n} \alpha_i \alpha_{i'} y_i y_{i'} \vec{x}_i^{T} \vec{x}_{i'}.$$
The complementarity condition $\hat{\alpha}_i\left[y_i\left(\hat{b} + \vec{x}_i^{T}\hat{\vec{w}}\right) - 1\right] = 0$ implies that if $y_i f(\vec{x}_i) > 1$, then $\hat{\alpha}_i = 0$. The support vectors are the points $\vec{x}_i$ whose multipliers are not mapped to zero ($\hat{\alpha}_i \neq 0$). In our case, the solution $\hat{\vec{w}}$ is expressed only in terms of the support vectors:
$$f(\vec{x}) = \hat{\vec{w}}^{T}\vec{x} = \sum_{i=1}^{n} \hat{\alpha}_i y_i \langle \vec{x}_i, \vec{x} \rangle.$$
That’s where the name of Support Vector Machines (SVM) comes from.
For non-linearly separable data, we need a small trick. We still use a hyperplane, but allow some of the points to be misclassified into the wrong class. To penalize for that, we add a cost term to the objective function that we need to minimize. The solution therefore changes to:
$$\min_{\vec{w}, b, \xi} \ \frac{\lVert \vec{w} \rVert^2}{2} + C\sum_{i=1}^{n} \xi_i$$
$$\text{s.t. } y_i\left(\vec{w} \cdot \vec{x}_i + b\right) \geq 1 - \xi_i, \ \forall \vec{x}_i, \quad \xi_i \geq 0,$$
where C is the cost and $\xi_i$ is the distance between the misclassified observation i and the plane.
The corresponding Lagrangian (for the Lagrange dual problem) is
$$L_P = \frac{1}{2}\lVert \vec{w} \rVert^2 + C\sum_{i=1}^{n} \xi_i - \sum_{i=1}^{n} \alpha_i\left[y_i\left(\vec{x}_i^{T}\vec{w} + b\right) - (1 - \xi_i)\right] - \sum_{i=1}^{n} \gamma_i \xi_i,$$
where $\alpha_i, \gamma_i \geq 0$.
Similar to what we did above for the separable case, we can use the
derivatives of the primal problem to solve the dual problem.
Notice the inner product in the final expression. We can replace this inner
product with a kernel function that maps the feature space into a higher
dimensional space (e.g., using a polynomial kernel) or an infinite dimensional
space (e.g., using a Gaussian kernel).
An alternative way to handle non-linearly separable data is called the kernel trick: we add dimensions (or features) so that the non-linearly separable data become separable in a higher dimensional space. How can we do that? We transform the data using kernel functions. A general form of a kernel function is
$$K(\vec{x}_i, \vec{x}_j) = \phi(\vec{x}_i) \cdot \phi(\vec{x}_j),$$
where $\phi$ denotes the mapping of the data into the higher dimensional space.
The Gaussian RBF kernel is similar to RBF neural network and is a good place
to start investigating a dataset.
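For orientation, a sketch (not from the text) of how the kernlab::ksvm() function used in the next case studies accepts several common kernels by name; the data and parameter choices here are only illustrative:

library(kernlab)
data(iris)
# linear, Gaussian RBF, and polynomial kernels share the same interface
m_lin  <- ksvm(Species ~ ., data = iris, kernel = "vanilladot")
m_rbf  <- ksvm(Species ~ ., data = iris, kernel = "rbfdot")
m_poly <- ksvm(Species ~ ., data = iris, kernel = "polydot", kpar = list(degree = 2))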
Protocol:
• Divide the image (typically optical image of handwritten notes on paper) into a
fine grid where each cell contains one glyph (symbol, letter, number).
• Match the glyph in each cell to one of the possible characters in a dictionary.
• Combine individual characters together into words to reconstitute the digital
representation of the optical image of the handwritten notes.
In this example, we use an optical document image (data) that has already
been pre-partitioned into rectangular grid cells containing one character of the
26 English letters, A through Z.
The resulting gridded dataset is distributed by the UCI Machine Learning Data
Repository. The dataset contains 20,000 examples of 26 English capital letters
printed using 20 different randomly reshaped and morphed fonts (Fig. 11.14).
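The data-import and model-training steps are not included in this excerpt; a hedged sketch of what they might look like (the file name and the 16,000/4,000 split are assumptions, not the book's exact code):

library(kernlab)
# assumption: the letter-recognition data saved locally as a CSV with a 'letter' factor column
hand_letters <- read.csv("letter-recognition.csv", stringsAsFactors = TRUE)

# assumption: first 16,000 records for training, remaining 4,000 for testing
hand_letters_train <- hand_letters[1:16000, ]
hand_letters_test  <- hand_letters[16001:20000, ]

# linear-kernel SVM classifier for the 26 capital letters
hand_letter_classifier <- ksvm(letter ~ ., data = hand_letters_train, kernel = "vanilladot")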
hand_letter_predictions <- predict(hand_letter_classifier, hand_letters_test)
head(hand_letter_predictions)
## [1] C U K U E I
## Levels: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
table(hand_letter_predictions, hand_letters_test$letter)
## hand_letter_predictions (rows = predicted letter, columns = true letter)
##       A   B   C   D   E   F   G   H   I   J   K   L   M   N   O   P   Q   R   S   T   U   V   W   X   Y   Z
##  A  191   0   1   0   0   0   0   0   0   1   0   0   1   2   2   0   5   0   2   1   1   0   1   0   0   0
##  B    0 157   0   9   2   0   1   3   0   0   1   0   3   0   0   2   4   8   5   0   0   3   0   1   0   0
##  C    0   0 142   0   5   0  14   3   2   0   2   4   0   0   2   0   0   0   0   0   0   0   0   0   0   0
##  D    1   1   0 196   0   1   4  12   5   3   4   4   0   6   5   3   1   4   0   0   0   0   0   5   3   1
##  E    0   0   8   0 164   2   1   1   0   0   3   5   0   0   0   0   6   0  10   0   0   0   0   4   0   3
##  F    0   0   0   0   0 171   4   2   8   2   0   0   0   0   0  18   0   0   5   2   0   0   0   1   3   0
##  G    1   1   4   1  10   3 150   2   0   0   1   2   1   0   0   2  11   2   5   3   0   0   0   1   0   0
##  H    0   3   0   1   0   2   2 122   0   2   4   2   2   5  23   0   2   6   0   4   1   4   0   0   3   0
##  I    0   0   0   0   0   0   0   0 175  10   0   0   0   0   0   1   0   0   3   0   0   0   0   4   1   1
##  J    2   2   0   0   0   3   0   2   7 158   0   0   0   0   1   1   4   0   1   0   0   0   0   2   0  11
##  K    2   1  11   0   0   0   4   6   0   0 148   0   0   2   0   1   1   7   0   1   3   0   0   4   0   0
##  L    0   0   0   0   1   0   1   1   0   0   0 176   0   0   0   0   1   0   4   0   0   0   0   1   0   1
##  M    0   0   1   1   0   0   1   2   0   0   0   0 177   5   1   0   0   0   0   0   4   0   8   0   0   0
##  N    0   0   0   1   0   1   0   1   0   0   0   0   0 172   0   0   0   3   0   0   1   0   2   0   0   0
##  O    0   0   1   2   0   0   2   1   0   2   0   0   0   1 132   2   4   0   0   0   3   0   0   0   0   0
##  P    0   0   0   1   0   3   1   0   0   0   0   0   0   0   3 168   1   0   0   1   0   0   0   0   1   0
##  Q    0   0   0   0   0   0   9   3   0   0   0   3   0   0   5   1 163   0   5   0   0   0   0   0   3   0
##  R    2   5   0   1   1   0   2   9   0   0  11   0   1   1   1   1   0 176   0   1   0   2   0   0   0   0
##  S    1   2   0   0   1   1   5   0   2   2   0   3   0   0   0   0  11   0 135   2   0   0   0   2   0  10
##  T    0   0   0   0   3   6   0   1   0   0   1   0   0   0   0   0   0   0   3 163   1   0   0   0   5   2
##  U    1   0   3   3   0   0   0   2   0   0   0   0   0   1   0   1   0   0   0   0 197   0   1   1   1   0
##  V    0   0   0   0   0   1   6   3   0   0   0   0   0   3   1   0   2   1   0   0   0 152   1   0   5   0
##  W    0   0   0   0   0   0   1   0   0   0   0   0   2   0   4   0   0   0   0   0   4   7 154   0   0   0
##  X    0   1   0   0   2   0   0   1   3   0   2   6   0   0   1   0   0   1   2   0   0   0   0 160   1   1
##  Y    3   0   0   0   0   0   0   1   0   0   0   0   0   0   0   6   0   0   0   3   0   0   0   0 157   0
##  Z    2   0   0   0   2   0   0   0   3   3   0   0   0   0   0   0   1   0  18   3   0   0   0   0   0 164
# look only at agreement vs. non-agreement
# construct a vector of TRUE/FALSE indicating correct/incorrect predictions
agreement <- hand_letter_predictions == hand_letters_test$letter

# check if characters agree
table(agreement)
## agreement
## FALSE  TRUE
##   780  4220

prop.table(table(agreement))
## agreement
## FALSE  TRUE
## 0.156 0.844
11.6.4 Step 4: Improving Model Performance
Replacing the vanilladot linear kernel with the rbfdot Radial Basis Function kernel, i.e., the "Gaussian" kernel, may improve the OCR prediction.

hand_letter_classifier_rbf <- ksvm(letter ~ ., data = hand_letters_train, kernel = "rbfdot")
hand_letter_predictions_rbf <- predict(hand_letter_classifier_rbf, hand_letters_test)
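The agreement table shown next comes from comparing the RBF predictions with the true letters; a minimal sketch of the omitted comparison:

# proportion of correctly classified letters with the Gaussian (RBF) kernel
agreement_rbf <- hand_letter_predictions_rbf == hand_letters_test$letter
prop.table(table(agreement_rbf))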
## agreement_rbf
## FALSE TRUE
## 0.072 0.928
Let’s have another look at the iris data that we saw in Chap. 2.
SVM requires all features to be numeric, and each feature has to be scaled into a relatively small interval. We are using Edgar Anderson's Iris Data in R for this
case study. This dataset measures the length and width of sepals and petals from
three Iris flower species.
Let’s load the data first. In this case study we want to explore the variable
Species.
data(iris)
str(iris)
## 'data.frame': 150 obs. of 5 variables:
##  $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
##  $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
##  $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
##  $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
##  $ Species     : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
table(iris$Species)
##
## setosa versicolor virginica
## 50 50 50
The data look good but we still can normalize the features either by hand or
using an R function.
Next, we can separate the training and testing datasets using 75%–25% rule.
sub <- sample(nrow(iris), floor(nrow(iris)*0.75))
iris_train <- iris[sub, ]
iris_test <- iris[-sub, ]
We can try both linear and non-linear kernels on the iris data (Figs. 11.15 and 11.16).

require(e1071)

We are going to use kernlab for this case study; other packages, like e1071 and klaR, are also available. Let's break down the ksvm() function, install the kernlab package, and play with the data now.

# install.packages("kernlab")
library(kernlab)
iris_clas <- ksvm(Species ~ ., data = iris_train, kernel = "vanilladot")
## Setting default kernel parameters
iris_clas
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 1
##
## Linear (vanilla) kernel function.
##
## Number of Support Vectors : 24
##
## Objective Function Value : -1.0066 -0.3309 -13.8658
## Training error : 0.026786
Here, we used all variables other than Species as predictors and the vanilladot kernel, which is the linear kernel. The resulting training error is about 0.027.
Again, the predict() function is used to forecast the species for the test data. Since we have a factor outcome, we use table() to show how well the predictions match the actual data.
iris.pred <- predict(iris_clas, iris_test)
table(iris.pred, iris_test$Species)
##
## iris.pred    setosa versicolor virginica
##   setosa         13          0         0
##   versicolor      0         14         0
##   virginica       0          1        10
We can see a single case of Iris virginica misclassified as Iris versicolor. The
taxa of all other flowers are correctly predicted.
To see the results more clearly, we can use the proportional table to show the
agreements of the categories.
agreement <- iris.pred == iris_test$Species
prop.table(table(agreement))
## agreement
##         FALSE          TRUE
## 0.02631578947 0.97368421053
Here, == means "is equal to". Over 97% of the predictions are correct. Nevertheless, is there any chance that we can improve the outcome? What if we try a Gaussian kernel?
11.7.5 Step 5: RBF Kernel Function
The linear kernel is the simplest but often not the best one. Let's try the RBF (Radial Basis Function, "Gaussian") kernel instead.
iris_clas1 <- ksvm(Species ~ ., data = iris_train, kernel = "rbfdot")
iris_clas1
## Support Vector Machine object of class "ksvm"
##
## SV type: C-svc (classification)
## parameter : cost C = 1
##
## Gaussian Radial Basis kernel function.
## Hyperparameter : sigma = 0.877982617394805
##
## Number of Support Vectors : 52
##
## Objective Function Value : -4.6939 -5.1534 -16.2297
## Training error : 0.017857
iris.pred1 <- predict(iris_clas1, iris_test)
table(iris.pred1, iris_test$Species)
##
## iris.pred1   setosa versicolor virginica
##   setosa         13          0         0
##   versicolor      0         14         2
##   virginica       0          1         8
agreement <- iris.pred1 == iris_test$Species
prop.table(table(agreement))
## agreement
##         FALSE          TRUE
## 0.07894736842 0.92105263158
Unfortunately, the model performance is actually worse than the previous one
(you might get slightly different results). This is because the Iris classes are, to a large extent, linearly separable. In practice, we could try several alternative kernel functions and see which one fits the dataset best.
We can tune the SVM using the tune.svm() function in the e1071 package (Fig. 11.17).
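A hedged sketch of such a tuning call (the exact parameter grid used in the text is not shown):

library(e1071)
set.seed(2017)
# 10-fold cross-validated tuning of the SVM cost parameter
tuned_svm <- tune.svm(Species ~ ., data = iris_train, cost = 2^(-2:4))
summary(tuned_svm)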
Fig. 11.17 SVM classification error on the training data, cross-validation, and testing data
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.03636363636
Further, we can draw a cross-validation (CV) plot to gauge the model performance (see cross-validation details in Chap. 21):

set.seed(2017)
require(sparsediscrim); require(reshape); require(ggplot2)
11.8 Practice
Use the Google trends data. Fit a neural network model with the same training data as in Case Study 1. This time, use Investing as the target and Unemployment, Rental, RealEstate, Mortgage, Jobs, DJI_Index, and StdDJI as predictors, with three hidden nodes. Note: remember to change the columns included in the test dataset when predicting; a sketch is given below.
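A hedged sketch of one way to set this up (the column indices are assumptions based on the str(google) output above):

# Investing is column 6; use the remaining seven indexes as predictors
practice_model <- neuralnet(Investing ~ Unemployment + Rental + RealEstate + Mortgage +
                              Jobs + DJI_Index + StdDJI,
                            data = google_train, hidden = 3)
practice_pred <- compute(practice_model, google_test[, c(1:5, 7:8)])
cor(practice_pred$net.result, google_test$Investing)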
The following number is the correlation between predicted and observed values.
## [,1]
## [1,] 0.8845711444
You might get slightly different results since the initial weights are generated randomly.
Use the same data in Chap. 8 – Quality of life and chronic disease (dataset and
metadata doc).
Let’s load the data first. In this case study, we want to use the variable
CHARLSONSCORE as our target variable.
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
str(qol)
## 'data.frame': 2356 obs. of 41 variables:
##  $ ID             : int 171 171 172 179 180 180 181 182 183 186 ...
##  $ INTERVIEWDATE  : int 0 427 0 0 0 42 0 0 0 0 ...
##  $ LANGUAGE       : int 1 1 1 1 1 1 1 1 1 2 ...
##  $ AGE            : int 49 49 62 44 64 64 52 48 49 78 ...
##  $ RACE_ETHNICITY : int 3 3 3 7 3 3 3 3 3 4 ...
##  $ SEX            : int 2 2 2 2 1 1 2 1 1 1 ...
##  $ QOL_Q_01       : int 4 4 3 6 3 3 4 2 3 5 ...
##  $ QOL_Q_02       : int 4 3 3 6 2 5 4 1 4 6 ...
##  $ QOL_Q_03       : int 4 4 4 6 3 6 4 3 4 4 ...
##  $ QOL_Q_04       : int 4 4 2 6 3 6 2 2 5 2 ...
##  $ QOL_Q_05       : int 1 5 4 6 2 6 4 3 4 3 ...
##  $ QOL_Q_06       : int 4 4 2 6 1 2 4 1 2 4 ...
##  $ QOL_Q_07       : int 1 2 5 -1 0 5 8 4 3 7 ...
##  $ QOL_Q_08       : int 6 1 3 6 6 6 3 1 2 4 ...
##  $ QOL_Q_09       : int 3 4 3 6 2 2 4 2 2 4 ...
##  $ QOL_Q_10       : int 3 1 3 6 3 6 3 2 4 3 ...
##  $ MSA_Q_01       : int 1 3 2 6 2 3 4 1 1 2 ...
## ...
##    2  1 0 0 1 0 0 0 0 0 0 0
##    3  0 0 0 0 0 0 0 0 0 0 0
##    4  0 0 0 0 0 0 0 0 0 0 0
##    5  0 0 0 0 0 0 0 0 0 0 0
##    6  0 0 0 0 0 0 0 0 0 0 0
##    7  0 0 0 0 0 0 0 0 0 0 0
##    8  0 0 0 0 0 0 0 0 0 0 0
##    9  0 0 0 0 0 0 0 0 0 0 0
##   10  0 0 0 0 0 0 0 0 0 0 0

## agreement
##        FALSE         TRUE
## 0.4914089347 0.5085910653
11.9 Appendix
Try to replicate these results with other data from the list of our Case-Studies.
11.10 Assignments: 11. Black Box Machine-Learning Methods: Neural Networks...
In Chap. 11, we learned about predicting the square-root function. It’s just one
instance of the power-function.
• Why did we observe a decrease in the accuracy of the NN prediction of the
square-root outside the interval [0,1] (note we trained inside [0,1])? How can
you improve on the prediction of the square-root network?
• Can you design a more generic NN network that can learn and predict a power-function for a given power (λ ∈ ℝ)?
Use the SOCR Normal and Schizophrenia pediatric neuroimaging study data to
complete the following tasks:
• Conduct some initial data visualization and exploration.
• Use derived neuroimaging biomarkers (e.g., Age, FS_IQ, TBV, GMV, WMV, CSF,
Background, L_superior_frontal_gyrus, R_superior_frontal_gyrus, ...,
brainstem) to train a NN model and predict DX (Normals = 1; Schizophrenia = 2).
• Try one hidden layer with a different number of nodes.
• Try multiple hidden layers and compare the results to the single layer. Which
model is better?
• Compare the type I (false-positive) and type II (false-negative) errors for the
alternative methods.
• Train separate models to predict DX (diagnosis) for the Male and Female
cohorts, respectively. Explain your findings.
• Train an SVM, using ksvm and svm in e1071, for Age, FS_IQ, TBV, GMV, WMV,
CSF, Background to predict DX. Compare the results of linear, Gaussian, and
polynomial SVM kernels.
• Add Sex to your models and see if this makes a difference.
• Expand the model by training on all derived neuroimaging biomarkers and re-
train the SVM using Age, FS_IQ, TBV, GMV, WMV, CSF, Background,
L_superior_frontal_gyrus, R_superior_frontal_gyrus, ..., brainstem. Again, try
linear, Gaussian, and polynomial kernels. Compare the results.
• Are there differences between the alternative kernels?
• For Age, FS_IQ, TBV, GMV, WMV, CSF, and Background, tune parameters for
Gaussian and polynomial kernels.
Chapter 12
Apriori Association Rules Learning
HTTP cookies are used to monitor web-traffic and track users surfing the
Internet. We often notice that promotions (ads) on websites tend to match our
needs, reveal our prior browsing history, or reflect our interests. That is not an
accident. Nowadays, recommendation systems are largely based on machine learning methods that can learn the behavior, e.g., purchasing patterns, of individual consumers. In this chapter, we will uncover some of the mystery behind recommendation systems for transactional records. Specifically, we will (1) discuss association rules and their support and confidence; (2) present the Apriori algorithm for association rule learning; and (3) cover, step by step, several case studies, including a toy example, Head and Neck Cancer Medications, and Grocery purchases.
Association rules are the result of process analytics (e.g., market analysis) that
specify patterns of relationships among items. One specific example would be:
{charcoal, lighter, chicken wings} → {barbecue sauce}
In words, charcoal, lighter and chicken wings imply barbecue sauce. Those
curly brackets indicate that we have a set. Items in a set are called elements. When
an item-set like {charcoal, lighter, chicken wings, barbecue sauce} appears in our dataset with some regularity, we can discover the above pattern.
Association rules are commonly used for unsupervised discovery of knowledge
rather than prediction of outcomes. In biomedical research, association rules are
widely used to:
Association rules are mostly applied to transactional data, like business, trade,
service or medical records. These datasets are typically very large in number of
transactions and features. This will add lots of possible orders and patterns when
we try to do analytics, which makes data mining a very hard task.
With the Apriori rule, this problem is easily solved. If we have a simple prior
(belief about the properties of frequent elements), we can efficiently reduce the
number of features or combinations that we need to look at.
The Apriori algorithm has a simple apriori belief that all subsets of a frequent
item-set must also be frequent. This is known as the Apriori property. The full set
in the last example, {charcoal, lighter, chicken wings, barbecue sauce}, can be frequent if and only if it and all of its subsets (single elements, pairs, and triples)
occur frequently. We can see that this algorithm is designed for finding patterns
in large datasets. If a pattern happens frequently, it is considered “interesting”.
Support and Confidence
Support and confidence are the two criteria to help us decide whether a pattern is
“interesting”. By setting thresholds for these two criteria, we can easily limit the
number of interesting rules or item-sets reported.
For item-sets X and Y, the support of an item-set measures how frequently it appears in the data:
$$support(X) = \frac{count(X)}{N},$$
where N is the total number of transactions in the database and count(X) is the number of observations (transactions) containing the item-set X. Of course, the union of item-sets is an item-set itself. For example, if Z = {X, Y}, then
$$support(Z) = support(X, Y).$$
For a rule X → Y, the rule's confidence measures the relative accuracy of the rule:
$$confidence(X \rightarrow Y) = \frac{support(X, Y)}{support(X)}.$$
This measures the joint occurrence of X and Y relative to the occurrence of X alone. If, whenever X appears, Y tends to be present too, we will have a high confidence(X → Y). The ranges of support and confidence are 0 ≤ support, confidence ≤ 1. Note that, in probabilistic terms, confidence(X → Y) is equivalent to the conditional probability P(Y | X).
{peanut butter} → {bread} would be an example of a strong rule, because it has high support as well as high confidence in grocery store transactions. Shoppers tend to purchase bread when they get peanut butter. These items tend to appear in the same baskets, which yields high confidence for the rule {peanut butter} → {bread}.
Assume that a large supermarket tracks sales data by stock-keeping unit (SKU) for
each item, i.e., each item, such as “butter” or “bread”, is identified by an SKU
number. The supermarket has a database of transactions where each transaction is
a set of SKUs that were bought together (Table 12.1).
Suppose the database of transactions consists of the following item-sets, each representing a purchasing order:

require(knitr)
item_table <- as.data.frame(t(c("{1,2,3,4}", "{1,2,4}", "{1,2}", "{2,3,4}",
                                "{2,3}", "{3,4}", "{2,4}")))
colnames(item_table) <- c("choice1", "choice2", "choice3", "choice4",
                          "choice5", "choice6", "choice7")
kable(item_table, caption = "Item table")
We will use Apriori to determine the frequent item-sets of this database. To do
so, we will say that an item-set is frequent if it appears in at least 3 transactions of
the database, i.e., the value 3 is the support threshold (Table 12.2).
The first step of Apriori is to count up the number of occurrences, i.e., the
support, of each member item separately. By scanning the database for the first
time, we obtain:
item_table = as.data.frame(t(c(3,6,4,5)))
colnames(item_table) <- c("item1","item2","item3","item4")
rownames(item_table) <- "support"
kable(item_table,caption = "Size 1 Support")
All the item-sets of size 1 have a support of at least 3, so they are all frequent.
The next step is to generate a list of all pairs of frequent items.
For example, regarding the pair {1,2}: the transaction table above shows
items 1 and 2 appearing together in three of the item-sets; therefore, we say that
the support of the item-set {1,2} is 3 (Tables 12.3 and 12.4).
Table 12.3 Size 2 support
          {1,2}  {1,3}  {1,4}  {2,3}  {2,4}  {3,4}
support       3      1      2      3      4      3

Table 12.4 Size 3 support
          {2,3,4}
support         2

item_table = as.data.frame(t(c(3,1,2,3,4,3)))
colnames(item_table) <- c("{1,2}","{1,3}","{1,4}","{2,3}","{2,4}","{3,4}")
rownames(item_table) <- "support"
kable(item_table, caption = "Size 2 Support")
The pairs {1,2}, {2,3}, {2,4}, and {3,4} all meet or exceed the minimum
support of 3, so they are frequent. The pairs {1,3} and {1,4} are not, and by the
Apriori property any larger set that contains {1,3} or {1,4} cannot be frequent
either. In this way, we can prune sets: we will now look for frequent triples in the
database, but we can already exclude all the triples that contain one of these two pairs:
item_table = as.data.frame(t(c(2)))
colnames(item_table) <- c("{2,3,4}")
rownames(item_table) <- "support"
kable(item_table,caption = "Size 3 Support")
In the example, there are no frequent triplets – the support of the item-set
{2,3,4} is below the minimal threshold, and the other triplets were excluded
because they were supersets of pairs that were already below the threshold. We
have thus determined the frequent sets of items in the database, and illustrated how
some item-sets were not counted because some of their subsets were already known
to be below the threshold.
12.6 Case Study 1: Head and Neck Cancer Medications
Different from our data imports in the previous chapters, transactional data need to
be ingested in R using the read.transactions() function. This function stores the
data as a sparse matrix with each row representing a transaction (example) and
each column representing an item (feature).
Let's load the dataset and delete the irrelevant index column. With the
write.csv(Rdata, "path") function we can output our R data file into a local CSV file.
To avoid generating another index column in the output CSV file, we can use the
row.names = F option.
med <- read.csv("https://umich.instructure.com/files/1678540/download?download_frd=1",
                stringsAsFactors = FALSE)
med <- med[, -1]
write.csv(med, "medication.csv", row.names=F)
Now we can use read.transactions() in the arules package to read the CSV file
we just outputted.
# install.packages("arules")
library(arules)
labels ## 1
09 nacl ## 2 09
nacl bolus ## 3 acetaminophen
multiroute uh
Here we use the option rm.duplicates = T because we may have similar
medication administration records for two different patients. The option skip = 1
means we skip the heading line in the CSV file. Now we have transactional data
with unique rows.
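The summary referenced below can be produced directly from the transactions object; a minimal sketch:

summary(med)   # reports dimensions, density, most frequent items, and transaction sizes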
The summary of a transactional dataset contains rich information. The first block
of information tells us that we have 528 rows and 88 different medicines in this
matrix. Using the density number we can calculate how many non-NA medication
records are in the data. In total, we have 528 × 88 = 46,464 positions in the matrix.
Thus, there are 46,464 × 0.0209 ≈ 971 medicines prescribed during the study period.
The second block lists the most frequent medicines and their frequencies in the
matrix. For example, fentanyl injection uh appeared 211 times; that is, 211/528 ≈ 40%
of the (treatment) transactions. Since fentanyl is frequently used to help prevent
pain after surgery or other medical procedures, we can see that many of these
patients were going through some painful medical procedures.
The last block shows statistics about the size of the transactions. 248 patients
had only one medicine in the study period, while 12 of them had 5 medication
records, one for each time point. On average, the patients have about 1.8 different
medicines.
The summary might still be fairly abstract; let’s visualize the data.
inspect(med[1:5,])
##     items
## [1] {acetaminophen uh,
##      cefazolin ivpb uh}
## [2] {docusate,
##      fioricet,
##      heparin injection,
##      ondansetron injection uh,
##      simvastatin}
## [3] {hydrocodone acetaminophen 5mg 325mg}
## [4] {fentanyl injection uh}
## [5] {cefazolin ivpb uh,
##      hydrocodone acetaminophen 5mg 325mg}
The inspect() call shows the transactional dataset. We can see that the
medication records of each patient are nicely formatted as item-sets.
We can further analyze the frequent terms using itemFrequency(). This lists the
item frequencies in alphabetical order; below are the first five items (Fig. 12.1).
itemFrequency(med[, 1:5])
##                                  09 nacl
##                              0.013257576
##                            09 nacl bolus
##                              0.003787879
##              acetaminophen multiroute uh
##                              0.001893939
## acetaminophen codeine 120 mg 12 mg 5 ml
##                              0.001893939
##       acetaminophen codeine 300mg 30 mg
##                              0.020833333

itemFrequencyPlot(med, topN=20)
The above graph shows the top 20 medicines that appear most frequently in this
dataset. Consistent with the prior summary() output, fentanyl is still the most
frequent item. You can also try to plot the items with a threshold for support:
instead of topN = 20, use the option support = 0.1, which will give you all the
items that have support greater than or equal to 0.1.
The sparse matrix will show which medications were prescribed for each patient
(Fig. 12.2).
Fig. 12.2 A characteristic plot of the prescribed medications (columns) for the first 5 patients
(rows)
image(med[1:5, ])
The image on Fig. 12.2 has 5 rows (we only requested the first 5 patients) and
88 columns (88 different medicines). Although the picture may be a little hard to
interpret, it gives a sense of what kind of medicine is prescribed for each patient in
the study.
Let's see an expanded graph including 100 randomly chosen patients (Fig. 12.3), as sketched below.
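A sketch of how such an expanded plot could be generated, assuming the arules sample() method for transactions:

# visualize 100 randomly selected patient transactions
image(sample(med, 100))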
It shows us clearly that some medications are more popular than others. Now,
let’s fit the Apriori model.
12.6.3 Step 3: Training a Model on the Data
With the data in place, we can build the association rules using the apriori()
function. Here we require each rule to have support of at least 0.01, i.e., to be
backed by at least 5 of the 528 transactions in the study. Also, the rules have to
have at least 25% accuracy (confidence = 0.25). Moreover, minlen = 2 is a very
helpful option because it removes all rules that have fewer than two items. The
result is a new rules object consisting of 29 rules.
med_rule <- apriori(med, parameter=list(support=0.01, confidence=0.25, minlen=2))
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport maxtime support minlen
##        0.25    0.1    1 none FALSE            TRUE       5    0.01      2
##  maxlen target   ext
##      10  rules FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 5
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[88 item(s), 528 transaction(s)] done [0.00s].
## sorting and recoding items ... [16 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 4 done [0.00s].
## writing ... [29 rule(s)] done [0.00s].
## creating S4 object ... done [0.00s].

med_rule
## set of 29 rules
12.6.4 Step 4: Evaluating Model Performance
We have 13 rules that contain two items, 12 rules that contain three items, and the
remaining 4 rules that contain four items.
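These rule-length counts come from the rule summary; a minimal sketch:

summary(med_rule)   # includes the rule length distribution (2-, 3-, and 4-item rules)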
The lift column shows how much more likely one medicine is to be prescribed
to a patient given that another medicine is prescribed. It is obtained by the following
formula:

lift(X → Y) = confidence(X → Y) / support(Y).

Note that lift(X → Y) is the same as lift(Y → X). The range of lift is [0, ∞), and higher
lift is better. We don't need to worry about support, since we already set a
threshold that the support will exceed.
Using the arulesViz package we can visualize the confidence and support
scatter plots for all the rules (Fig. 12.4).
# install.packages("arulesViz")
library(arulesViz)
plot(sort(med_rule))
Again, we can utilize the inspect() function to see exactly what these rules are.
inspect(med_rule[1:3])
##     lhs                               rhs                   support    confidence lift
## [1] {acetaminophen uh}             => {cefazolin ivpb uh}   0.01136364 0.4615385  2.256410
## [2] {ampicillin sulbactam ivpb uh} => {heparin injection}   0.01893939 0.3448276  1.733990
## [3] {ondansetron injection uh}     => {heparin injection}   0.01704545 0.2727273  1.371429
Here, lhs and rhs refer to the "left-hand side" and "right-hand side" of the rule,
respectively. The lhs is the given condition and the rhs is the predicted result.
Using the first row as an example: if a head-and-neck patient has been prescribed
acetaminophen (a pain reliever and fever reducer), it is likely that the patient is also
prescribed cefazolin (an antibiotic used to treat bacterial infections); bacterial
infections are associated with fevers and some cancers.
Sorting the resulting association rules corresponding to high lift values will help us
select the most useful rules.
inspect(sort(med_rule, by="lift")[1:3])
## lhs rhs
support confidence lift
## [1] {fentanyl injection uh,
## heparin injection,
## hydrocodone acetaminophen 5mg 325mg} => {cefazolin
ivpb uh}
0.01515152 0.8000000
3.911111 ## [2] {cefazolin
ivpb uh,
## fentanyl injection uh, ## hydrocodone
acetaminophen 5mg 325mg} => {heparin injection}
0.01515152 0.6153846 3.094505
## [3] {heparin injection,
## hydrocodone acetaminophen 5mg 325mg} => {cefazolin
ivpb uh} 0.03787879 0.6250000 3.055556
These rules may need to be interpreted by clinicians and experts in the
specific context of the study. For instance, the first row, {fentanyl, heparin,
hydrocodone acetaminophen} implies {cefazolin}. Fentanyl and hydrocodone
acetaminophen are both pain relievers that may be prescribed after surgery.
Heparin is usually used before surgery to reduce the risk of blood clots. This rule
may suggest that patients who have undergone surgical treatments may likely need
cefazolin to prevent postsurgical bacterial infection.
We can save these rules into a CSV file using write(). It is similar to the
write.csv() function that we mentioned at the beginning of this case study.
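A hedged sketch of the export step (the file name is an assumption); arules provides a write() method for rule sets:

write(med_rule, file = "medication_rules.csv", sep = ",", quote = TRUE)
# the saved rules can be re-loaded as a plain data frame for inspection
rules_df <- read.csv("medication_rules.csv")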
##
## element (itemset/transaction) length distribution:
## sizes
##    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
## 2159 1643 1299 1005  855  645  545  438  350  246  182  117   78   77   55
##   16   17   18   19   20   21   22   23   24   26   27   28   29   32
##   46   29   14   14    9   11    4    6    1    1    1    1    3    1
##
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   2.000   3.000   4.409   6.000  32.000
##
## includes extended item information - examples:
##        labels  level2           level1
## 1 frankfurter sausage meat and sausage
## 2     sausage sausage meat and sausage
## 3  liver loaf sausage meat and sausage
We will try to find out the top 5 frequent grocery items and plot them (Fig. 12.6).
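Neither the plotting call nor the rule-mining call appears in this excerpt; the following hedged sketch is consistent with the output that follows (the object names groceries and groceryrules are assumptions, and the parameter values are read off the output below):

itemFrequencyPlot(groceries, topN = 5)
groceryrules <- apriori(groceries,
                        parameter = list(support = 0.006, confidence = 0.6, minlen = 2))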
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport maxtime support minlen
##         0.6    0.1    1 none FALSE            TRUE       5   0.006      2
##  maxlen target   ext
##      10  rules FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 59
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[169 item(s), 9835 transaction(s)] done [0.00s].
## sorting and recoding items ... [109 item(s)] done [0.00s].
## creating transaction tree ... done [0.00s].
## checking subsets of size 1 2 3 4 done [0.02s].
## writing ... [8 rule(s)] done [0.00s].
## creating S4 object ... done [0.00s].

groceryrules
## set of 8 rules
Fig. 12.7 Live demo: association
rule mining
inspect(sort(groceryrules, by = "lift")[1:3])
##     lhs                             rhs          support     confidence lift
## [1] {butter, whipped/sour cream} => {whole milk} 0.006710727 0.6600000  2.583008
## [2] {butter, yogurt}             => {whole milk} 0.009354347 0.6388889  2.500387
## [3] {root vegetables, butter}    => {whole milk} 0.008235892 0.6377953  2.496107
We observe mainly rules between dairy products. It makes sense that customers
pick up milk when they walk down the dairy products aisle. Experiment further
with various parameter settings and try to interpret the results in the context of this
grocery case-study (Fig. 12.7).
Mining association rules demo: https://rdrr.io/cran/arules/
# copy-paste this R code into the live online demo: https://rdrr.io/snippets/
# press RUN, and examine the results.
# The Adult dataset (https://archive.ics.uci.edu/ml/datasets/adult) includes 48842 sparse
# transactions (rows) and 115 items (columns).
library(arules)
data("Adult")
rules <- apriori(Adult, parameter = list(supp = 0.5, conf = 0.9, target = "rules"))
summary(rules)
inspect(sort(rules, by = "lift")[1:3])
12.8 Summary
• The Apriori algorithm for association rule learning is only suitable for large
transactional data. For some small datasets, it might not be very helpful.
• It is useful for discovering associations, mostly in early phases of an
exploratory study.
• Some rules can be built due to chance and may need further verifications.
• See also Chap. 20 (Text Mining and NLP).
Try to replicate these results with other data from the list of our Case-Studies.
Use the SOCR Jobs Data to practice learning via Apriori Association Rules
• Load the Jobs Data. Use this guide to load HTML data.
• Focus on the Description feature. Replace all underscore characters “_” with
spaces.
• Review Chap. 8 and use the tm package to process the text data into plain text. (Hint: you
also need to apply stemDocument; we will discuss more details in Chap. 20.)
• Generate a “transaction” matrix by considering each job as one record and
description words as “transaction” items. (Hint: You need to fill missing
values since records do not have the same length of description.)
• Save the data using write.csv() and then use read.transactions() in arules
package to read the CSV data file. Visualize the item support using item
frequency plots. What terms appear as more popular?
• Fit a model: myrules <- apriori(data = jobs, parameter = list(support = 0.02,
confidence = 0.6, minlen = 2)). Try out several rule thresholds trading off
gain and accuracy.
• Evaluate the rules you obtained with lift and visualize their metrics.
• Mine medical related rules (e.g., rules include “treatment”, “patient”, “care”,
“diagnos.” Notice that these are word stems).
• Sort the set of association rules for all and medical related subsets.
• Save these rules into a CSV file.
Chapter 13
k-Means Clustering
Fig. 13.1 Hotdogs dataset – scatterplot of calories and sodium content blocked by type of meat
Fig. 13.2 Scatterplot of calories and sodium content with meat type labels
Silhouette plots are useful for interpretation and validation of consistency of all
clustering algorithms. The silhouette value, ∈ [−1, 1], measures the similarity
(cohesion) of a data point to its own cluster relative to other clusters (separation).
Silhouette plots rely on a distance metric, e.g., the Euclidean distance, Manhattan
distance, Minkowski distance, etc.
• A high silhouette value suggests that the data point matches its own cluster well.
• A clustering algorithm performs well when most silhouette values are high.
• A low (or negative) value indicates a poor match, i.e., the point may fit a
neighboring cluster better.
• Poor clustering may imply that the algorithm configuration uses too many or
too few clusters.
Suppose a clustering method groups all data points (objects), {X_i}_i, into k
clusters, and define:
• d_i as the average dissimilarity of X_i with all other data points within its cluster.
d_i captures the quality of the assignment of X_i to its current class label. Smaller
or larger d_i values suggest better or worse overall assignment of X_i to its
cluster, respectively. The average dissimilarity of X_i to a cluster C is the average
distance between X_i and all points in the cluster labeled C.
• l_i as the lowest average dissimilarity of X_i to any other cluster that X_i is not a
member of. The cluster corresponding to l_i, the lowest average dissimilarity, is
called the X_i neighboring cluster, as it is the next best fit cluster for X_i.
The silhouette value of X_i is then defined as

s_i = (l_i − d_i) / max(d_i, l_i).

Note that:
• −1 ≤ s_i ≤ 1,
• s_i → 1 when d_i ≪ l_i, i.e., the dissimilarity of X_i to its own cluster C is much
lower than its dissimilarity to other clusters, indicating a good (cluster assignment)
match. Thus, high silhouette values imply the data is appropriately clustered.
• Conversely, s_i → −1 when l_i ≪ d_i, i.e., d_i is large, implying a poor match of
X_i with its current cluster C relative to neighboring clusters. X_i may be more
appropriately assigned to its neighboring cluster.
• s_i ≈ 0 means that X_i may lie on the border between two natural clusters.
A short computational illustration follows.
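A minimal sketch (not from the text; the simulated data are illustrative) showing how the silhouette values s_i are obtained in practice with the cluster package:

library(cluster)
set.seed(1)
# two well-separated 2D groups
toy <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
             matrix(rnorm(40, mean = 4), ncol = 2))
km  <- kmeans(toy, centers = 2)
sil <- silhouette(km$cluster, dist(toy))
head(sil[, c("cluster", "neighbor", "sil_width")])
mean(sil[, "sil_width"])   # high average silhouette indicates well-separated clusters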
13.3 The k-Means Clustering Algorithm
The k-means algorithm is one of the most commonly used algorithms for
clustering.
A generalized distance measure, such as the Minkowski distance
d(p, q) = (Σ_{i=1}^{n} |p_i − q_i|^c)^{1/c}, may also be used. For c = 2, the
Minkowski distance reduces to the familiar Euclidean distance.
How can we separate clusters using this formula? The k-means protocol is as
follows:
• Initiation: First, we define k points as cluster centers. Often these points are
k random points from the dataset. For example, if k ¼ 3, we choose three
random points in the dataset as cluster centers.
• Assignment: Second, we assign each observation to the cluster whose center is
nearest, which delineates the cluster boundaries and separates the data into k
initial clusters. The assignment of each observation to a cluster is based on
computing the least within-cluster sum of squares according to the chosen
distance. Mathematically, this is equivalent to a Voronoi tessellation of the
space of the observations according to their distances to the cluster means.
• Update: Third, we update the cluster centers to the means (centroids) of the
points currently assigned to each cluster. This updating phase is the essence of
the k-means algorithm.
Although there is no guarantee that the k-means algorithm converges to a
global optimum, in practice, the algorithm tends to converge, i.e., the
assignments no longer change, to a local minimum as there are only a finite
number of such Voronoi partitionings.
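A minimal illustration of the initiation/assignment/update protocol above, using stats::kmeans on simulated data (all names and values are illustrative):

set.seed(42)
x <- rbind(cbind(rnorm(30, 0, 0.5), rnorm(30, 0, 0.5)),
           cbind(rnorm(30, 3, 0.5), rnorm(30, 3, 0.5)),
           cbind(rnorm(30, 0, 0.5), rnorm(30, 3, 0.5)))
fit <- kmeans(x, centers = 3, algorithm = "Lloyd", iter.max = 50)
fit$centers        # updated cluster centers after convergence
fit$tot.withinss   # total within-cluster sum of squares (a homogeneity measure)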
Fig. 13.4 Elbow plot of the within-group homogeneity against the number of groups parameter (k)
We don't want our number of clusters to be either too large or too small. If it is
too large, the groups are too specific to be meaningful. On the other hand, too
few groups might be too broadly general to be useful. As we mentioned in Chap. 7,
k = sqrt(n/2) is a good place to start. However, it might generate a large number of
groups. Also, the elbow method may be used to determine the relationship between
k and the homogeneity of the observations within each cluster. When we graph
within-group homogeneity against k, we can find an "elbow point" that suggests a
minimum k corresponding to relatively large within-group homogeneity (Fig. 13.4).
This graph shows that homogeneity barely increases above the “elbow point”.
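A sketch of producing an elbow plot like Fig. 13.4, using the total within-cluster sum of squares as the homogeneity measure (reusing the simulated matrix x from the sketch above, or any numeric dataset):

wss <- sapply(1:10, function(k) kmeans(x, centers = k, nstart = 10)$tot.withinss)
plot(1:10, wss, type = "b",
     xlab = "Number of clusters (k)", ylab = "Total within-cluster SS")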
There are various ways to measure homogeneity within a cluster. For detailed
explanations please read On clustering validation techniques, Journal of
Intelligent Information Systems Vol. 17, pp. 107–145, by M. Halkidi, Y. Batistakis,
and M. Vazirgiannis (2001).
13.4 Case Study 1: Divorce and Consequences on Young Adults
The dataset we will be using is the Divorce and Consequences on Young Adults
dataset. This is a longitudinal study focused on examining the consequences of
recent parental divorce for young adults (initially ages 18–23) whose parents had
divorced within 15 months of the study’s first wave (1990–91). The sample
consisted of 257 White respondents with newly divorced parents. Here we have a
subset of this dataset with 47 respondents in our case-studies folder,
CaseStudy01_Divorce_YoungAdults_Data.csv.
Variables
Let’s load the dataset and pull out a summary of all variables.
divorce <- read.csv("https://umich.instructure.com/files/399118/download?download_frd=1")
summary(divorce)
(Variable recoding was discussed in Chap. 8.) The following line of code generates a
new indicator variable for divorce year = 1990.
divorce$DIVYEAR <- ifelse(divorce$DIVYEAR==89, 0, 1)
We also need another preprocessing step to deal with livewithmom, which has
missing values coded as livewithmom = 9. We can impute these using the momint
and dadint variables for each specific participant.
table(divorce$livewithmom)
##
## 1 2 9
## 31 15 1
divorce[divorce$livewithmom==9, ]
myclusters<-kmeans(mydata, k)
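The generic call above uses placeholder names (mydata, k). For this case study, the standardized data and the fitted object referenced below (di_z and diz_clussters) were presumably constructed along these lines; the scaling approach and the seed are assumptions:

# standardize all features (z-scores), then fit k-means with k = 3
di_z <- as.data.frame(lapply(divorce, scale))
set.seed(321)
diz_clussters <- kmeans(di_z, 3)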
diz_clussters$size
## [1] 12 24 11
At first glance, it seems that k = 3 worked well as the number of clusters. We
don't have any cluster that contains only a small number of observations, and the
three clusters have relatively equal numbers of respondents.
Silhouette plots represent the most appropriate evaluation strategy to assess
the quality of the clustering. Silhouette values range between −1 and 1. In our case,
two data points correspond to negative silhouette values, suggesting these cases
may be "mis-clustered" or perhaps are ambiguous, as the silhouette value is close
to 0. We can observe that the average silhouette is reasonable, about 0.2 (Fig. 13.5).
require(cluster)
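A sketch of how the silhouette plot in Fig. 13.5 could be produced (di_z and diz_clussters are the standardized data and the fitted k-means object assumed above):

dis <- dist(di_z)
sil <- silhouette(diz_clussters$cluster, dis)
summary(sil)
plot(sil)   # silhouette plot (Fig. 13.5)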
The next step would be to interpret the clusters in the context of this social study.
diz_clussters$centers
##      DIVYEAR     momint      dadint   momclose  depression livewithmom
## 1  0.5004720  1.1698438 -0.07631029  1.2049200 -0.1112567    0.1591755
## 2 -0.2953914 -0.5016290  0.36107795 -0.5096937  0.1180883   -0.7107373
## 3  0.0985208 -0.1817299 -0.70455885 -0.2023993 -0.1362761    1.3770536
##   gethitched
## 1 -0.1390230
## 2 -0.1390230
## 3  0.4549845
Fig. 13.6 Barplot illustrating the features discriminating between the three cohorts in the
divorce consequences on young adults dataset
avoid getting married. These young adults tend not to be too emotional and do not
value family.
• Cluster 2: divyear = mostly 89, momint = not close, dadint = very close,
livewithmom = father, depression = mild, marry = do not know/not inclined.
Cluster 2 includes children that mostly live with dad and only feel close to dad.
These respondents did not feel severely depressed and are not inclined to marry.
These young adults may prefer freedom and tend to be more naive.
• Cluster 3: divyear = mix of 89 and 90, momint = not close, dadint = not at all,
livewithmom = mother, depression = sometimes, marry = tend to get married.
Cluster 3 contains children that did not feel close to either dad or mom. They
sometimes felt depressed and are willing to build their own family. These young
adults seem to be more independent.
We can see that these three different clusters do contain three alternative
types of young adults. Bar plots provide an alternative strategy to visualize the
difference between clusters (Fig. 13.6).
For each of the three clusters, the bars in the plot above represent the following
order of features: DIVYEAR, momint, dadint, momclose, depression, livewithmom,
gethitched.
Let's still use the divorce data to illustrate a model improvement using k-means++.
(Appropriate) initialization of the k-means algorithm is of paramount importance.
The k-means++ extension provides a practical strategy to obtain a good
initialization for k-means clustering using a predefined kpp_init method.
# install.packages("matrixStats")
require(matrixStats)
kpp_init = function(dat, K)
{ x = as.matrix(dat) n =
nrow(x)
# Randomly choose a first center centers
= matrix(NA, nrow=K, ncol=ncol(x))
set.seed(123) centers[1,] =
as.matrix(x[sample(1:n, 1),]) for (k in
2:K) {
# Calculate dist^2 to closest center for each
point dists = matrix(NA, nrow=n, ncol=k-1) for
(j in 1:(k-1)) { temp = sweep(x, 2, centers[j,],
'-') dists[,j] = rowSums(temp^2)
463
}
dists = rowMins(dists)
# Draw next center with probability proportional to dist^2
cumdists = cumsum(dists)
prop = runif(1, min=0, max=cumdists[n])
centers[k,] = as.matrix(x[min(which(cumdists > prop)),])
}
return(centers)
} clust_kpp = kmeans(di_z, kpp_init(di_z, 3), iter.max=100,
algorithm='Lloyd')
Fig. 13.9 Evolution of the average silhouette value with respect to the number of clusters
Similar to what we performed for KNN and SVM, we can tune the k-means
parameters, including centers initialization and k (Fig. 13.9).
n_rows <- 21
mat = matrix(0, nrow = n_rows)
for (i in 2:n_rows){
  set.seed(321)
  clust_kpp = kmeans(di_z, kpp_init(di_z, i), iter.max=100, algorithm='Lloyd')
  sil = silhouette(clust_kpp$cluster, dis)
  mat[i] = mean(as.matrix(sil)[,3])
}
colnames(mat) <- c("Avg_Silhouette_Value")
mat
##       Avg_Silhouette_Value
##  [1,]            0.0000000
##  [2,]            0.1948335
##  [3,]            0.1980686
##  [4,]            0.1789654
##  [5,]            0.1716270
##  [6,]            0.1546357
##  [7,]            0.1622488
##  [8,]            0.1767659
##  [9,]            0.1928883
## [10,]            0.2026559
## [11,]            0.2006313
## [12,]            0.1586044
## [13,]            0.1735035
## [14,]            0.1707446
## [15,]            0.1626367
## [16,]            0.1609723
## [17,]            0.1785733
## [18,]            0.1839546
## [19,]            0.1660019
## [20,]            0.1573574
## [21,]            0.1561791
library(ggplot2)
ggplot(data.frame(k=2:n_rows, sil=mat[2:n_rows]), aes(x=k, y=sil)) +
  geom_line() +
  scale_x_continuous(breaks = 2:n_rows)
This suggests that k = 3 may be an appropriate number of clusters to use in this case.
Next, let's set the maximal number of iterations of the algorithm and rerun the model
with k = 2, k = 3, or k = 10. Below, we just demonstrate the results for k = 3.
There are still 2 mis-clustered observations, which is not a significant improvement
on the prior model according to the average silhouette measure (Fig. 13.10).
k <- 3
set.seed(31)
clust_kpp = kmeans(di_z, kpp_init(di_z, k), iter.max=200, algorithm="MacQueen")
sil3 = silhouette(clust_kpp$cluster, dis)
summary(sil3)
Fig. 13.10 Silhouette plot for the optimal k = 3 and kpp_init initialization
Note that we now see 3 cases of group 1 that have negative silhouette values
(previously we had only 2), albeit the overall average silhouette remains 0.2.
13.6 Case Study 2: Pediatric Trauma
First, we need to load the dataset into R and report its summary and dimensions.
trauma <- read.csv("https://umich.instructure.com/files/399129/download?download_frd=1", sep = " ")
summary(trauma); dim(trauma)
## Max. :20.000
## [1] 1000 9
In the summary we see two factors, race and traumatype. Traumatype codes
the real classes we are interested in. If the clusters created by the model are
quite similar to the trauma types, our model may have a quite reasonable
interpretation. Let's also create a dummy variable for each racial category.
trauma$black<-ifelse(trauma$race=="black", 1, 0)
trauma$hispanic<-ifelse(trauma$race=="hispanic", 1, 0)
trauma$other<-ifelse(trauma$race=="other", 1, 0)
trauma$white<-ifelse(trauma$race=="white", 1, 0)
Then, we will remove the class variable, traumatype, from the dataset to avoid
biasing the clustering algorithm. Thus, we are simulating a real biomedical
case-study where we do not necessarily have the actual class information
available, i.e., the classes are latent features.
Similar to case-study 1, let’s standardize the dataset and fit a k-means model.
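A hedged sketch of the standardization step producing tr_z; the exact columns to drop are an assumption (non-numeric variables such as race, the class variable traumatype, and any identifier column must be excluded before scaling):

tr_features <- trauma[ , !(names(trauma) %in% c("race", "traumatype"))]
tr_z <- as.data.frame(lapply(tr_features, scale))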
set.seed(1234)
trauma_clusters <- kmeans(tr_z, 6)
Here we use k = 6 in the hope that 5 of these clusters may match the specific
5 trauma types. In this case study, we have 1000 observations, and k = 6 may be
a reasonable option.
Fig. 13.11 Key predictors discriminating between the 6 cohorts in the trauma study
To assess the clustering model results, we can examine the resulting clusters (Fig.
13.11).
trauma_clusters$centers
trauma$clusters <- trauma_clusters$cluster
table(trauma$clusters, trauma$traumatype)
##
##     dvexp neglect physabuse psychabuse sexabuse
##   1     0       0       100          0      100
##   2    10     118         0         61        0
##   3    23     133         0         79        0
##   4   100       0         0          0        0
##   5   100       0         0          0        0
##   6    17      99         0         60        0
We can see that all of the children in Cluster 4 belong to dvexp (exposure to
domestic violence or intimate partner violence). If we use the mode of each
cluster as the class for that group of children, we can classify 63 sexabuse cases,
279 neglect cases, 41 physabuse cases, 100 dvexp cases, and another 71 neglect
cases. That is, 554 out of 1,000 cases are identified with the correct class. The model
has a problem distinguishing between neglect and psychabuse, but otherwise its
accuracy is reasonable.
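A sketch of how this overall agreement could be computed programmatically, by assigning each cluster the label of its most frequent trauma type (no specific result is asserted here, since it depends on the random seed):

tab <- table(trauma$clusters, trauma$traumatype)
cluster_labels <- colnames(tab)[apply(tab, 1, which.max)]  # modal class per cluster
sum(apply(tab, 1, max)) / sum(tab)                          # proportion of correctly matched cases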
Let's review the silhouette value summary. The clustering works reasonably well,
as only a small portion of samples appear mis-clustered.
dis_tra = dist(tr_z)
sil_tra = silhouette(trauma_clusters$cluster, dis_tra)
summary(sil_tra)
## [1] 0.2245298
# The sil object colnames are ("cluster", "neighbor", "sil_width")
Fig. 13.12 Evolution of the average silhouette value with respect to the number of clusters
Next, let’s try to tune k with k-means++ and see if k ¼ 6 appears to be optimal
(Fig. 13.12).
mat = matrix(0, nrow = 11)
for (i in 2:11){
  set.seed(321)
  clust_kpp = kmeans(tr_z, kpp_init(tr_z, i), iter.max=100, algorithm='Lloyd')
  sil = silhouette(clust_kpp$cluster, dis_tra)
  mat[i] = mean(as.matrix(sil)[,3])
}
mat
##            [,1]
##  [1,] 0.0000000
##  [2,] 0.2433222
##  [3,] 0.1675486
##  [4,] 0.1997315
##  [5,] 0.2116534
##  [6,] 0.2400086
##  [7,] 0.2251367
##  [8,] 0.2199859
##  [9,] 0.2249569
## [10,] 0.2347122
## [11,] 0.2304451
ggplot(data.frame(k=2:11, sil=mat[2:11]), aes(x=k, y=sil)) + geom_line() +
  scale_x_continuous(breaks = 2:11)
Finally, let's use k-means++ with k = 6 and set the algorithm's maximal number of
iterations before rerunning the experiment:
set.seed(1234)
clust_kpp = kmeans(tr_z, kpp_init(tr_z, 6), iter.max=100, algorithm='Lloyd')
sil = silhouette(clust_kpp$cluster, dis_tra)
summary(sil)
## [1] 0.2400086
Use the Boys Town Study of Youth Development data from the second case study,
CaseStudy02_Boystown_Data.csv, which we used in Chap. 7, and find clusters
using variables like GPA, alcohol abuse, attitudes on drinking, social status, parental
closeness, and delinquency (all variables other than gender and ID).
First, we must load the data and transform sex, dadjob, and momjob into
dummy variables.
boystown <- read.csv("https://umich.instructure.com/files/399119/download?download_frd=1", sep=" ")
boystown$sex <- boystown$sex - 1
boystown$dadjob <- (-1)*(boystown$dadjob - 2)
boystown$momjob <- (-1)*(boystown$momjob - 2)
str(boystown)
Fig. 13.13 Main features discriminating between the 3 cohorts in the divorce impact on youth
study
Then, extract all the variables except the first two columns (subject ID and gender).
Next, we need to standardize the data and cluster it with k = 3. You may obtain
the following centers (numbers could be a little different) (Fig. 13.13).
##          gpa  Alcoholuse      alcatt     dadjob      momjob   dadclose
## 1 -0.5101243 -0.08555163 -0.30098866  0.1939577  0.04868109  1.1914502
## 2 -0.2753631  0.49998217  0.13804858 -0.2421906 -0.30151766 -0.4521484
## 3  0.6590193 -0.51256447  0.04599325  0.1451756  0.31107377 -0.2896562
##      momclose    larceny  vandalism
## 1  0.65647213 -0.1755012 -0.4453044
## 2 -0.33341358 -0.4017282  0.5252308
## 3 -0.06343891  0.5769583 -0.2981561
Add k-means cluster labels as a new (last) column back in the original dataset.
To investigate the gender distribution within different clusters we may use
aggregate().
##   clusters       sex
## 1        1 0.6875000
## 2        2 0.5802469
## 3        3 0.6760563
Here clusters is the new vector indicating cluster labels. The gender
distribution does not vary much between different cluster labels (Fig. 13.14).
13.7 Hierarchical Clustering
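The hierarchical model object pitch_ward used below is not constructed in this excerpt; the following hedged sketch is consistent with the silhouette summary that follows (Ward linkage on the standardized divorce data di_z, cut into 10 clusters; "ward.D" is another common choice of linkage):

pitch_ward <- hclust(dist(di_z), method = "ward.D2")
sil_ward   <- silhouette(cutree(pitch_ward, k = 10), dis)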
# install.packages("ggdendro") require(ggdendro)
ggdendrogram(as.dendrogram(pitch_ward), leaf_labels=FALSE,
labels=FALSE)
Fig. 13.17 Silhouette plot for hierarchical clustering using the Ward method
mean(sil_ward[,"sil_width"])
Generally speaking, the best result should come from Ward linkage, but you
should also try complete linkage (method = 'complete'). We can see that the
hierarchical clustering result (average silhouette value ≈ 0.24) mostly agrees with
the prior k-means (0.2) and k-means++ (0.2) results (Fig. 13.17).
summary(sil_ward)
## Silhouette of 47 units in 10 clusters from silhouette.default(x = cutree(pitch_ward, k = 10), dist = dis) :
## Cluster sizes and average silhouette widths:
##           4           5           6           3           6          12
##  0.25905454  0.29195989  0.29305926 -0.02079056  0.19263836  0.26268274
##           5           2           3           1
##  0.32594365  0.44074717  0.08760990  0.00000000
## Individual silhouette widths:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
## -0.1477  0.1231  0.2577  0.2399  0.3524  0.5176

plot(sil_ward)
13.8 Gaussian Mixture Models
More details about Gaussian mixture models (GMM) are provided in the
supporting materials online. Below is a brief introduction to GMM using the
Mclust function in the R package mclust.
For multivariate mixtures, there are 14 possible models in total:
• "EII" = spherical, equal volume
• "VII" = spherical, unequal volume
• "EEI" = diagonal, equal volume and shape
• "VEI" = diagonal, varying volume, equal shape
• "EVI" = diagonal, equal volume, varying shape
• "VVI" = diagonal, varying volume and shape
• "EEE" = ellipsoidal, equal volume, shape, and orientation
• "EVE" = ellipsoidal, equal volume and orientation (*)
• "VEE" = ellipsoidal, equal shape and orientation (*)
• "VVE" = ellipsoidal, equal orientation (*)
• "EEV" = ellipsoidal, equal volume and equal shape
• "VEV" = ellipsoidal, equal shape
• "EVV" = ellipsoidal, equal volume (*)
• "VVV" = ellipsoidal, varying volume, shape, and orientation
For more practical details, you may refer to Mclust. For more theoretical
details, see C. Fraley and A. E. Raftery (2002).
Let’s use the Divorce and Consequences on Young Adults dataset for a
demonstration.
library(mclust)
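The model-fitting call itself is not shown in this excerpt; a hedged sketch consistent with the output below (gmm_clust is the fitted object used in the subsequent plots, and di_z is the standardized divorce data assumed above):

gmm_clust <- Mclust(di_z)   # fits all candidate covariance models and selects the best by BIC
gmm_clust$modelName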
## [1] "EEE"
Thus, the optimal model here is "EEE" (Figs. 13.18, 13.19, and 13.20).
plot(gmm_clust,what = "density")
plot(gmm_clust,what = "classification")
Fig. 13.18 Bayesian information criterion plots for different GMM classification models for the
divorce youth data
13.9 Summary
Use the Amyotrophic Lateral Sclerosis (ALS) dataset. This case-study examines the
patterns, symmetries, associations and causality in a rare but devastating disease,
amyotrophic lateral sclerosis (ALS). A major clinically relevant question in this
biomedical study is: What patient phenotypes can be automatically and reliably
identified and used to predict the change of the ALSFRS slope over time?. This
problem aims to explore the data set by unsupervised learning.
• Load and prepare the data.
• Perform summary and preliminary visualization.
References
Wu, J. (2012) Advances in K-means Clustering: A Data Mining Thinking, Springer Science &
Business Media, ISBN 3642298079, 9783642298073.
Dinov, ID. (2008) Expectation Maximization and Mixture Modeling Tutorial. Statistics Online
Computational Resource. UCLA: Statistics Online Computational Resource. Retrieved from:
http://escholarship.org/uc/item/1rb70972.
Celebi, ME (ed.) (2014) Partitional Clustering Algorithms, SpringerLink: Bücher, ISBN
3319092596, 9783319092591.
Fraley, C and Raftery, AE. (2002). Model-based clustering, discriminant analysis, and density
estimation. Journal of the American Statistical Association, 97, 611–631.
Chapter 14
Model Performance Assessment
##    minor_disease severe_disease
## 10     0.1979698      0.8020302
## 12     0.1979698      0.8020302
## 26     0.3468705      0.6531295
## 37     0.1263975      0.8736025
## 41     0.7290209      0.2709791
## 43     0.3163673      0.6836327
These can be contrasted against the C5.0 classification label results:
pred_tree<-predict(qol_model, qol_test)
head(pred_tree)
## [1] severe_disease severe_disease severe_disease severe_disease
## [5] minor_disease severe_disease
## Levels: minor_disease severe_disease
More details about binary test assessment are available on the Scientific Methods
for Health Sciences (SMHS) EBook site. Table 14.2 summarizes the key measures
commonly used to evaluate the performance of binary classifiers. See also the
SMHS EBook section on Power, Sensitivity and Specificity.
We discussed confusion matrices in Chap. 9. For binary classes, these are 2 × 2
matrices. Each of the cells has a specific meaning, see Table 14.2, where
• True Positive (TP): number of observations that are correctly classified as "yes" or
"success"
• True Negative (TN): number of observations that are correctly classified as "no" or
"failure"
• False Positive (FP): number of observations that are incorrectly classified as "yes" or
"success"
• False Negative (FN): number of observations that are incorrectly classified as "no" or
"failure"
Using these cell counts, the accuracy, or proportion of correctly classified observations, is

accuracy = (TP + TN) / (TP + TN + FP + FN) = (TP + TN) / (total number of observations).

On the other hand, the error rate, or proportion of incorrectly classified
observations, is calculated using:

error rate = (FP + FN) / (TP + TN + FP + FN) = (FP + FN) / (total number of observations) = 1 − accuracy.
If we look at the numerator and denominator carefully, we can see that the error
rate and accuracy add up to 1. Therefore, 95% accuracy implies a 5% error rate.
In R, we have multiple ways to obtain confusion matrices. The simplest way
would be to use table(). For example, in Chap. 8, to report a plain 2 × 2 table we
used:
hn_test_pred<-predict(hn_classifier, hn_test)
table(hn_test_pred, hn_med_test$stage)
##
## hn_test_pred  early_stage later_stage
##   early_stage          69          23
##   later_stage           8           0
Then why did we use the CrossTable() function back in Chap. 8? Because it
reports additional useful information about the model performance.
library(gmodels)
CrossTable(hn_test_pred, hn_med_test$stage)
## Cell Contents
## |-------------------------|
## |                       N |
## | Chi-square contribution |
## |           N / Row Total |
## |           N / Col Total |
## |         N / Table Total |
## |-------------------------|
##
## Total Observations in Table:  100
##
##              | hn_med_test$stage
## hn_test_pred | early_stage | later_stage |   Row Total |
## -------------|-------------|-------------|-------------|
##  early_stage |          69 |          23 |          92 |
##              |       0.048 |       0.160 |             |
##              |       0.750 |       0.250 |       0.920 |
##              |       0.896 |       1.000 |             |
##              |       0.690 |       0.230 |             |
## -------------|-------------|-------------|-------------|
##  later_stage |           8 |           0 |           8 |
##              |       0.550 |       1.840 |             |
##              |       1.000 |       0.000 |       0.080 |
##              |       0.104 |       0.000 |             |
##              |       0.080 |       0.000 |             |
## -------------|-------------|-------------|-------------|
## Column Total |          77 |          23 |         100 |
##              |       0.770 |       0.230 |             |
## -------------|-------------|-------------|-------------|
With both tables, we can calculate accuracy and error rate by hand.
accuracy <- (69+0)/100
accuracy
## [1] 0.69
error_rate <- (23+8)/100
error_rate
## [1] 0.31
1 - accuracy
## [1] 0.31
For matrices larger than 2 × 2, all diagonal elements are observations that have
been correctly classified and off-diagonal elements are those that have been
incorrectly classified.
One argument supplies the vector of true class labels, Test_Y, and the second
argument, of the same length, represents the vector of predicted labels.
This example was presented as the first case-study in Chap. 9.
library(caret)
qol_pred<-predict(qol_model, qol_test)
confusionMatrix(table(qol_pred, qol_test$cd),
positive="severe_disease")
## Confusion Matrix and Statistics
##
##
## qol_pred         minor_disease severe_disease
##   minor_disease            149             89
##   severe_disease            74            131
##
##                Accuracy : 0.6321
##                  95% CI : (0.5853, 0.6771)
##     No Information Rate : 0.5034
##     P-Value [Acc > NIR] : 3.317e-08
##
##                   Kappa : 0.2637
##  Mcnemar's Test P-Value : 0.2728
##
##             Sensitivity : 0.5955
##             Specificity : 0.6682
##          Pos Pred Value : 0.6390
##          Neg Pred Value : 0.6261
##              Prevalence : 0.4966
##          Detection Rate : 0.2957
##    Detection Prevalence : 0.4628
##       Balanced Accuracy : 0.6318
##
##        'Positive' Class : severe_disease
14.2.4 The Kappa (κ) Statistic
The Kappa statistic was originally developed to measure the reliability between
two human raters. It can be harnessed in machine-learning applications to compare
the accuracy of a classifier, where one rater represents the ground truth (for
labeled data, these are the actual values of each instance) and the second rater
represents the results of the automated machine-learning classifier. The order of
listing the raters is irrelevant.
The Kappa statistic accounts for the possibility of a correct prediction by chance
alone and answers the question: How much better is the agreement (between the
ground truth and the machine-learning prediction) than would be expected by
chance alone? Its value is typically between 0 and 1 (it can be negative when the
agreement is worse than chance). When κ = 1, we have perfect agreement between
the computed prediction (typically the result of a model-based or model-free
technique forecasting an outcome of interest) and the actual (ground truth) labels.
The Kappa statistic is defined as:
kappa = (P(a) − P(e)) / (1 − P(e)).
Here, P(a) and P(e) denote the probabilities of actual and expected (chance)
agreement between the classifier and the true values, respectively.
table(qol_pred, qol_test$cd)
##
## qol_pred         minor_disease severe_disease
##   minor_disease            149             89
##   severe_disease            74            131

p_a <- (149+131)/(149+89+74+131)
p_a
## [1] 0.6320542
Similarly, the expected agreement P(e) is obtained from the marginal frequencies
of the predicted and true class labels. In our case:
table(qol_pred, qol_test$cd)
##
## qol_pred         minor_disease severe_disease
##   minor_disease            149             89
##   severe_disease            74            131
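A sketch of completing the calculation by hand, with the marginal totals taken from the table above (p_a was computed earlier):

# expected chance agreement: product of marginal proportions, summed over the two classes
n   <- 149 + 89 + 74 + 131                   # 443 test cases
p_e <- ((149 + 89)/n) * ((149 + 74)/n) +     # both say "minor_disease"
       ((74 + 131)/n) * ((89 + 131)/n)       # both say "severe_disease"
kappa <- (p_a - p_e) / (1 - p_e)
kappa   # approximately 0.26, matching the Kappa reported by confusionMatrix()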
Summary of the Kappa Score for Calculating Prediction Accuracy
The Kappa score is useful for comparing among a set of different classifiers because
it takes into account random chance (agreement with a random classifier). That
makes Kappa more meaningful than simply using accuracy as a metric. For instance,
the interpretation of an Observed Accuracy of 80% is relative to the Expected
Accuracy: an Observed Accuracy of 80% is more impactful for an Expected
Accuracy of 50% compared to an Expected Accuracy of 75%.
• Observed Accuracy (OA) is the simple proportion of instances that the classifier
labeled correctly:

OA = (50 + 40)/150 = 0.6.

• Expected Accuracy (EA) is the accuracy that any random classifier would be
expected to achieve based on the given confusion matrix. EA depends on the
proportion of instances of each class (True and False), along with the number of
instances that the automated classifier agreed with the ground truth label. The EA
for each class is calculated by multiplying the marginal frequencies of that class
for the true-state and for the machine-classified instances, and dividing by the
total number of instances. The marginal frequency of True for the true-state is
75 (50 + 25) and for the classifier it is 85 (150 − 65), so:

EA(True) = (75 × 85)/150 = 42.5.

We similarly compute EA(False) for the second, False, outcome, using the
marginal frequencies for the true-state, (False | true state) = 75, and for the ML
classifier, (False | classifier) = 65 (40 + 25). The expected accuracy for the False
outcome is:

EA(False) = (75 × 65)/150 = 32.5.

The total expected accuracy is EA = (42.5 + 32.5)/150 = 0.5, and the Kappa
statistic is:

κ = (OA − EA)/(1 − EA) = (0.6 − 0.5)/(1 − 0.5) = 0.2.
If we take a closer look at the confusionMatrix() output, we find there are two
important statistics “sensitivity” and “specificity”.
Sensitivity, or true positive rate, measures the proportion of “success”
observations that are correctly classified.
sensitivity = TP / (TP + FN).

Specificity, or the true negative rate, measures the proportion of "failure"
observations that are correctly classified:

specificity = TN / (TN + FP).
Let's calculate these by hand for the QoL data:
sens <- 131/(131+89)
sens
## [1] 0.5954545
spec <- 149/(149+74)
spec
## [1] 0.6681614
Sensitivity and specificity both range from 0 to 1. For either measure, a value
of 1 implies that the positive and negative predictions are very accurate. However,
simultaneously high sensitivity and specificity may not be attainable in real world
situations. There is a tradeoff between sensitivity and specificity. To compromise,
some studies loosen the demands on one and focus on achieving high values on the
other.
Precision measures the proportion of predicted "positive" cases that are truly positive:

precision = TP / (TP + FP).
Recall is the proportion of actual "positive" cases that are correctly identified;
a model with high recall captures most "interesting" cases:

recall = TP / (TP + FN).
Again, let’s calculate these by hand for the QoL data:
prec <- 131/(131+74)
prec
## [1] 0.6390244
recall <- 131/(131+89)
recall
## [1] 0.5954545
Another way to obtain the precision is posPredValue() in the caret package.
Remember to specify which class is the "success" class.
# reconstructed call; arguments assumed from the earlier confusionMatrix() usage
posPredValue(qol_pred, qol_test$cd, positive = "severe_disease")
## [1] 0.6390244
From the definitions of precision and recall, we can derive the type 1 and
type 2 errors as follows:

error1 = 1 − precision = FP / (TP + FP), and
error2 = 1 − recall = FN / (TP + FN).
Thus, we can compute the type 1 error (0.36) and type 2 error (0.40).
The F-measure or F1-score combines precision and recall using the harmonic mean
assuming equal weights. High F-score means high precision and high recall. This
is a convenient way of measuring model performances and comparing models.
F-measure = (2 × precision × recall) / (recall + precision) = 2TP / (2TP + FP + FN).
Let’s calculate the F1-score by hand using the confusion matrix derived from
the Quality of Life prediction:
F1<-(2*prec*recall)/(prec+recall); F1
## [1] 0.6164706
## [1] 0.6164706
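The object pred_prob used below holds the class probabilities; a hedged sketch of how it could be obtained from the C5.0 model used earlier (assuming the qol_model and qol_test objects):

pred_prob <- predict(qol_model, qol_test, type = "prob")
head(pred_prob)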
pred <- ROCR::prediction(predictions = pred_prob[, 2], labels = qol_test$cd)
# avoid naming collision (ROCR::prediction), as there is
# another prediction() function in the neuralnet package
pred_prob[, 2] is the probability of classifying each observation as
"severe_disease". The above code saved all the model prediction information into
object pred.
14.3 Visualizing Performance Tradeoffs (ROC Curve)
ROC (Receiver Operating Characteristic) curves are often used to examine the
tradeoff between detecting true positives and avoiding false positives (Fig. 14.1).
Fig. 14.1 Schematic of quantifying the efficacy of a classification method using the area under the ROC curve
Fig. 14.2 ROC curve of the prediction of disease severity using the quality of life (QoL) data
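A sketch of generating the ROC curve in Fig. 14.2 with ROCR (the performance() call is standard; the plot styling is illustrative):

library(ROCR)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, main = "ROC curve", col = "blue", lwd = 2)
abline(a = 0, b = 1, lty = 2)   # reference diagonal (random classifier)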
roc_auc<-performance(pred, measure="auc")
Now roc_auc is stored as an S4 object. This is quite different from data frames
and matrices. First, we can use the str() function to see its structure.
str(roc_auc)
## Formal class 'performance' [package "ROCR"] with 6 slots
## ..@ x.name : chr "None"
## ..@ y.name : chr "Area under the ROC curve"
## ..@ alpha.name : chr "none"
## ..@ x.values : list()
## ..@ y.values :List of 1
## .. ..$ : num 0.65
## ..@ alpha.values: list()
The ROC object has six members. The AUC value is stored in y.values. To
extract that we use the @ symbol according to the output of the str() function.
roc_auc@y.values
## [[1]]
## [1] 0.6496739
Thus, the obtained AUC ≈ 0.65, which suggests a fair classifier, according to
the above scoring schema.
14.4 Estimating Future Performance (Internal Statistical Validation)
The evaluation methods we have talked about so far all measure re-substitution
error. Building the model on training data and measuring the model error on
separate testing data is one way of dealing with unseen data. First, let's introduce
the basic ideas; more details will be presented in Chap. 21.
The idea is to partition the entire dataset into two separate datasets, using one of
them to create the model and the other to test the model's performance. In practice,
we usually use a fraction (e.g., 50% or 2/3) of our data for training the model, and
reserve the rest (e.g., 50% or 1/3) for testing. Note that the testing data may also be
further split into proportions for internal repeated (e.g., cross-validation) testing
and final external (independent) testing.
The partition has to be randomized. In R, the best way of doing this is to create
a parameter that randomly draws numbers and use this parameter to extract
random rows from the original dataset. In Chap. 11, we used this method to
partition the Google Trends data.
sub <- sample(nrow(google_norm), floor(nrow(google_norm)*0.75))
google_train <- google_norm[sub, ]
google_test <- google_norm[-sub, ]

sub <- createDataPartition(google_norm$RealEstate, p=0.75, list = F)
google_train <- google_norm[sub, ]
google_test <- google_norm[-sub, ]
To make sure that the model can be applied to future datasets, we can partition
the original dataset into three separate subsets. In this way, we have two subsets
for testing. The additional validation dataset can alleviate the probability that we
have a good model due to chance (non-representative subsets). A common split
among training, test, and validation subsets would be 50%, 25%, and 25%
respectively.
sub <- sample(nrow(google_norm), floor(nrow(google_norm)*0.50))
google_train <- google_norm[sub, ]
google_test <- google_norm[-sub, ]
sub1 <- sample(nrow(google_test), floor(nrow(google_test)*0.5))
google_test1 <- google_test[sub1, ]
google_test2 <- google_test[-sub1, ]
nrow(google_norm)
## [1] 731
nrow(google_train)
## [1] 365
nrow(google_test1)
## [1] 183
nrow(google_test2)
## [1] 183
However, when we only have a very small dataset, it's difficult to split off
too much data, as this reduces the sample further. There are two options for
evaluating model performance using (independent) unseen data: cross-validation
and holdout methods. These are implemented in the caret package.
14.4.2 Cross-Validation
For complete details see DSPA Cross-Validation (Chap. 21). Below, we describe
the fundamentals of cross-validation as an internal statistical validation technique.
This technique is known as k-fold cross-validation or k-fold CV, which is a
standard for estimating model performance. K-fold CV randomly divides the
original data into k separate random subsets called folds.
A common practice is to use k = 10, or 10-fold CV, to split the data into 10
different subsets. Each time, one of the subsets is used as the test set and the rest
are used to build the model. createFolds() in the caret package will help us do so.
set.seed() ensures the folds created are the same if you run the code twice.
1234 is just a random number; you can use any number for set.seed().
We use the normalized Google Trend dataset in this section.
library("caret")
set.seed(1234)
folds<-createFolds(google_norm$RealEstate, k=10)
str(folds)
## List of 10
## $ Fold01: int [1:73] 5 9 11 12 18 19 28 29 54 65 ...
## $ Fold02: int [1:73] 14 24 35 49 52 61 63 76 99 115 ...
## $ Fold03: int [1:73] 1 8 41 45 51 74 78 92 100 104 ...
# install.packages("sparsediscrim") require(sparsediscrim)
folds2 = cv_partition(1:nrow(google_norm), num_folds=10)
str(folds2)
## List of 10
## $ Fold1 :List of 2
## ..$ training: int [1:657] 4 5 6 8 9 10 11 12 16 17 ...
## ..$ test : int [1:74] 287 3 596 1 722 351 623 257 568
414 ...
## $ Fold2 :List of 2
## ..$ training: int [1:658] 1 2 3 5 6 7 8 9 10 11 ...
## ..$ test : int [1:73] 611 416 52 203 359 195 452 258 614
121 ...
## $ Fold3 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 7 8 9 10 11 ...
## ..$ test : int [1:73] 182 202 443 152 486 229 88 158 178
293 ...
## $ Fold4 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 9 10 ...
## ..$ test : int [1:73] 646 439 362 481 183 387 252 520 438
586 ...
## $ Fold5 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 9 10 ...
## ..$ test : int [1:73] 503 665 47 603 348 125 719 11 461 361
...
## $ Fold6 :List of 2
## ..$ training: int [1:658] 1 2 3 4 6 7 9 10 11 12 ...
## ..$ test : int [1:73] 666 411 159 21 565 298 537 262 131
600 ...
## $ Fold7 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 9 10 ...
## ..$ test : int [1:73] 269 572 410 488 124 447 313 255 360
473 ...
## $ Fold8 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 9 11 ...
## ..$ test : int [1:73] 446 215 256 116 592 284 294 300 402
455 ...
## $ Fold9 :List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 9 10 ...
## ..$ test : int [1:73] 25 634 717 545 76 378 53 194 70
346 ...
## $ Fold10:List of 2
## ..$ training: int [1:658] 1 2 3 4 5 6 7 8 10 11 ...
## ..$ test : int [1:73] 468 609 40 101 595 132 248 524 376
618 ...
Now, we have 10 different subsets in the folds object. We can use lapply() to
fit the model. 90% of the data will be used for training, so we use [-x, ] to select all
observations not in a specific fold. In Chap. 11 we showed how to build a neural
network model for the Google Trends data. We can do the same for each fold
manually; train, test, aggregate the results, and report the agreement (correlations
between the predicted and observed RealEstate values).
library(neuralnet)
fold_cv <- lapply(folds, function(x) {
  google_train <- google_norm[-x, ]
  google_test <- google_norm[x, ]
  google_model <- neuralnet(RealEstate ~ Unemployment + Rental + Mortgage + Jobs +
                              Investing + DJI_Index + StdDJI, data = google_train)
  google_pred <- compute(google_model, google_test[, c(1:2, 4:8)])
  pred_results <- google_pred$net.result
  pred_cor <- cor(google_test$RealEstate, pred_results)
  return(pred_cor)
})
str(fold_cv)
## List of 10
## $ Fold01: num [1, 1] 0.977
## $ Fold02: num [1, 1] 0.97
## $ Fold03: num [1, 1] 0.972
## $ Fold04: num [1, 1] 0.979
## $ Fold05: num [1, 1] 0.976
## $ Fold06: num [1, 1] 0.974
## $ Fold07: num [1, 1] 0.971
## $ Fold08: num [1, 1] 0.982
## $ Fold09: num [1, 1] -0.516
## $ Fold10: num [1, 1] 0.974
From the output, we know that in most of the folds the model predicts very
well. In a typical run, one fold may yield bad results. We can use the mean of
these 10 correlations to represent the overall model performance. But first, we
need to use unlist() function to transform fold_cv into a vector.
mean(unlist(fold_cv))
## [1] 0.8258223801
The second method is called bootstrap sampling. In k-fold CV, each observation
can only be used once. However, bootstrap sampling is a sampling process with
replacement. Before selecting a new sample, it recycles every observation so that
each observation could appear in multiple folds.
A very special setting of the bootstrap uses at each iteration 63.2% of the original
data as the training dataset and the remaining 36.8% as the test dataset. Thus,
compared to k-fold CV, a single bootstrap sample is less representative of the full
dataset. A special case of bootstrapping, the 0.632 bootstrap, addresses this issue
by changing the final performance metric using the following weighted average:

error = 0.632 × error_test + 0.368 × error_train.
This synthesizes the optimistic model performance on training data with the
pessimistic model performance on test data by weighting the corresponding errors.
This method is extremely good for small samples.
To see the rationale behind the 0.632 bootstrap, consider a standard training set T
of cardinality n, where our bootstrap sampling generates m new training sets T_i,
each of size n′. Sampling from T is uniform with replacement, which means that
some observations may be repeated in each sample T_i. Suppose the sizes of the
sub-samples are of the same order as T, i.e., n′ = n; then for large n the sample T_i
is expected to contain (1 − 1/e) ≈ 0.632 unique cases from the complete original
collection T, while the remaining proportion, 0.368, is expected to consist of
repeated duplicates. Hence the name 0.632 bootstrap sampling. In general, for
large n, the sample T_i is expected to have n(1 − e^(−n′/n)) unique cases; see
On Estimating the Size and Confidence of a Statistical Audit.
Having the bootstrap samples, the m models can be fitted (estimated) and
aggregated, e.g., by averaging the outputs (for regression) or by using voting
methods (for classification). We will discuss this more in later chapters.
14.5 Assignment: 14. Evaluation of Model Performance
Try to apply the same techniques to some of the other data from the list of
Case-Studies.
The ABIDE dataset includes imaging, clinical, genetics and phenotypic data for
over 1000 pediatric cases – Autism Brain Imaging Data Exchange (ABIDE).
• Apply C5.0 to predict on part of data (training data).
• Evaluate the model’s performance, using confusion matrices, accuracy, κ,
precision, and recall, F-measure, etc.
• Explain and compare each evaluation.
• Use the ROC to examine the tradeoff between detecting true positives and
avoiding the false positives and report AUC.
• Finally, apply cross validation on C5.0 and report the CV error.
• You may apply the same analysis workflow to evaluate the performance of
alternative methods (e.g., KNN, SVM, LDA, QDA, Neural Networks, etc.)
Chapter 15
Improving Model Performance
One of the methods for improving model performance relies on tuning, which is
the process of searching for the best parameters for a specific method. Table
15.1 summarizes the parameters for each method we covered in previous
chapters.
In Chap. 7, we used KNN and plugged in arbitrary values of the parameter k, the number of
nearest neighbors. This time, we will test multiple k values simultaneously and pick the one
yielding the highest accuracy. When using the caret package, we need to specify the class
variable, a dataset containing the class variable and the features, and the method we will be using.
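The automated tuning call itself is not included in the extract above; a minimal sketch of what it might look like (the object names m and boystown_n and the grade outcome are taken from the code that follows; the seed is an assumption):

# Sketch: let caret try several k values and keep the best-performing KNN model
library(caret)
set.seed(123)
m <- train(grade ~ ., data = boystown_n, method = "knn")
m    # prints the resampling results and the selected k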
Let’s see how accurate this “optimal model” is in terms of the re-substitution
error. Again, we will use the predict() function specifying the object m and the
dataset boystown_n. Then, we can report the contingency table showing the
agreement between the predictions and real class labels.
set.seed(1234)
p<-predict(m, boystown_n)
table(p, boystown_n$grade)
## 
## p              above_avg avg_or_below
##   above_avg          132           17
##   avg_or_below         2           49
This model has a (17 + 2)/200 = 0.095 re-substitution error (about 9.5%). This means that,
of the 200 observations used to train this model, about 90.5% were
correctly classified. Note that re-substitution error is different from accuracy.
The accuracy of this model is 0.8, which is reported by a model summary call. As
mentioned in Chap. 14, we can obtain prediction probabilities for each
observation in the original boystown_n dataset.
head(predict(m, boystown_n, type = "prob"))
##   above_avg avg_or_below
## 1 0.0000000    1.0000000
## 2 1.0000000    0.0000000
## 3 0.7142857 0.2857143
## 4 0.8571429 0.1428571
## 5 0.2857143 0.7142857
## 6 0.5714286 0.4285714
The default setting of train() might not meet the specific needs of every study.
In our case, the optimal k might be smaller than 5. The caret package allows us to
customize the settings for train(). Specifically, caret::trainControl() can help us customize
the re-sampling method. Table 15.2 lists six popular re-sampling methods that we may
want to use.
These methods help us find representative samples to train the
model. Let's use the 0.632 bootstrap, for example. Just specify method="boot632" in
the trainControl() function. The number of different samples to include can be
customized by the number= option. Another option in trainControl() controls how
model performance is evaluated when selecting the optimal model. The oneSE method
chooses the simplest model within one standard error of the best performance to be the
optimal model. Other selection functions are also available in the caret package. For
detailed information, type ?best in the R console.
We can also specify a list of k values we want to test by creating a matrix or a
grid.
ctrl <- trainControl(method="boot632", number=25, selectionFunction="oneSE")
# expand.grid() creates a data frame from all combinations of the supplied factors
grid <- expand.grid(.k=c(1, 3, 5, 7, 9))
Table 15.2 Six complementary methods for customizing the caret::trainControl() re-sampling

Resampling method                  Method name   Additional options and default values
Holdout sampling                   LGOCV         p=0.75 (training data proportion)
k-fold cross-validation            cv            number=10 (number of folds)
Repeated k-fold cross-validation   repeatedcv    number=10 (number of folds), repeats=10 (number of iterations)
Bootstrap sampling                 boot          number=25 (resampling iterations)
0.632 bootstrap                    boot632       number=25 (resampling iterations)
Leave-one-out cross-validation     LOOCV         None
Usually, to avoid ties, we prefer to choose an odd number of neighbors k. Now
the constraints are all set, and we can select models again using train().
set.seed(123)
m<-train(grade~., data=boystown_n, method="knn",
metric="Kappa",
trControl=ctrl,
tuneGrid=grid)
m
## k-Nearest Neighbors
##
## 200 samples
502 15 Improving Model Performance
## 10 predictor
## 2 classes: 'above_avg', 'avg_or_below'
##
## No pre-processing
## Resampling: Bootstrapped (25 reps)
## Summary of sample sizes: 200, 200, 200, 200, 200, 200, ...
## Resampling results across tuning parameters:
##
##   k  Accuracy   Kappa    
##   1  0.8726660  0.7081751
##   3  0.8457584  0.6460742
##   5  0.8418226  0.6288675
##   7  0.8460327  0.6336463
##   9  0.8381961  0.6094088
##
## Kappa was used to select the optimal model using the one SE rule.
## The final value used for the model was k = 1.
Here we added metric="Kappa" to include the Kappa statistic as one of the
criteria for selecting the optimal model. We can see that the output accuracies for all the
candidate models are better than under the default bootstrap sampling. The optimal
model has k=3, a high accuracy of 0.846, and a high Kappa statistic, which is much
better than the model we had in Chap. 7. As you can see from the output, the one SE
rule no longer chooses the model with the highest accuracy or Kappa statistic to be
the "optimal model". It is a more comprehensive selection method than one that only
looks at a single statistic or quality measure.
set you can’t improve the model predictive force, but just decrease the
variance, narrowly tuning the prediction to the expected outcome.
• Boosting is a two-step approach, where one first uses subsets of the original
data to produce a series of moderately performing models and then "boosts"
their performance by combining them together using a particular cost
function (e.g., accuracy). Unlike bagging, in classical boosting the subset
creation is not random and depends upon the performance of the previous
models: every new subset contains the elements that were (likely to be)
misclassified by previous models. Usually, we prefer weak classifiers in
boosting. For example, a prevalent choice is to use a stump (a one-level decision
tree) in AdaBoost (Adaptive Boosting).
15.2.3 Bagging
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
qol <- qol[!qol$CHARLSONSCORE==-9, -c(1, 2)]
qol$CHARLSONSCORE <- as.factor(qol$CHARLSONSCORE)
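The bagged-tree fit that produces the agreement summary below is not part of the extract; a minimal sketch, assuming the ipred package is used with 25 bootstrap trees (the number of trees is an assumption):

# Sketch: bag 25 decision trees and check re-substitution agreement
# install.packages("ipred")
library(ipred)
set.seed(123)
mybag <- bagging(CHARLSONSCORE ~ ., data = qol, nbagg = 25)
bt_pred <- predict(mybag, qol)                 # predicted class labels on the training data
agreement <- bt_pred == qol$CHARLSONSCORE
prop.table(table(agreement))                   # proportion of correct re-substitution predictions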
## agreement
## FALSE TRUE
## 0.001718213 0.998281787
This model works very well with its training data. It labeled 99.8% of the cases
correctly. To see its performance on future (unseen) data, we apply the caret::train()
function again with 10 repeated 10-fold CV as the re-sampling method. In caret, the bagged
trees method is called treebag.
library(caret)
set.seed(123)
ctrl<-trainControl(method="repeatedcv", number = 10, repeats = 10)
train(CHARLSONSCORE ~ ., data = as.data.frame(qol), method = "treebag", trControl = ctrl)
## Bagged CART
##
## 2328 samples
## 38 predictor
## 11 classes: '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 10 times)
## Summary of sample sizes: 2095, 2096, 2093, 2094, 2098, 2097, ...
## Resampling results:
##
##   Accuracy   Kappa    
##   0.5234615  0.2173193
We got an accuracy of 52% and a fair Kappa statistic. This result is better than
our previous prediction attempt in Chap. 11 using the ksvm() function alone
(~50%). Here, we combined the predictions of an ensemble of bagged decision trees to
reach this level of prediction accuracy.
In addition to decision tree classification, caret allows us to explore
alternative bag() functions. For instance, instead of bagging based on decision
trees, we can bag using an SVM model. caret provides a nice setting for SVM
training, making predictions and counting votes in a list object svmBag. We can
examine these objects by using the str() function.
str(svmBag)
## List of 3
##  $ fit      :function (x, y, ...)  
##  $ pred     :function (object, x)  
##  $ aggregate:function (x, type = "class")
Clearly, fit provides the training functionality, pred the prediction and
forecasting on new data, and aggregate is a way to combine many models and
achieve voting-based consensus. Using the member operator, the $ sign, we can
explore these three types of elements of the svmBag object. For instance, the
fit element may be extracted from the SVM object by:
svmBag$fit
## function (x, y, ...)
## {
## loadNamespace("kernlab")
##     out <- kernlab::ksvm(as.matrix(x), y, prob.model = is.factor(y),
##         ...)
## out
## }
## <environment: namespace:caret>
fit relies on the ksvm() function in the kernlab package, which means this
package needs to be loaded. The other two methods, pred and aggregate, may
be explored in a similar way. They just follow the SVM model building and testing
process we discussed in Chap. 11.
This svmBag object could be used as an optional setting in the train() function.
However, this option requires that all features are linearly independent with
trivial covariances, which may be rare in real world data.
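The train()/bag() call that would use svmBag is not shown in the extract; a hedged sketch of how it might be wired up with caret's bag() interface (the number of bags B and the restriction to numeric predictors are assumptions, since svmBag's fit function coerces the features with as.matrix()):

# Sketch: bagging an SVM base learner via caret::bag() and bagControl()
library(caret)
set.seed(123)
predictors <- qol[ , sapply(qol, is.numeric)]   # svmBag assumes numeric features
outcome <- qol$CHARLSONSCORE
svm_bag <- bag(predictors, outcome, B = 10,
               bagControl = bagControl(fit = svmBag$fit,
                                       predict = svmBag$pred,
                                       aggregate = svmBag$aggregate))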
15.2.4 Boosting
Bagging uses equal weights for all learners we included in the model. Boosting is
quite different in terms of weights. Suppose we have the first learner correctly
classifying 60% of the observations. This 60% of data may be less likely to be
included in the training dataset for the next learner. So, we have more learners
working on “hard-to-classify” observations.
Mathematically, we are using a weighted sum of functions to predict the
outcome class labels, i.e., we try to fit the true model by weighted additive
modeling. We start with a simple learner that can classify some of the
observations correctly, possibly with some errors:

$\hat{y}_1 = l_1.$

Here $l_1$ is our first learner and $\hat{y}_1$ denotes its predictions (this equation is in
matrix form). Then, we calculate the residuals of our first learner,

$E_1 = y - v_1 \hat{y}_1,$

where $v_1$ is the weight assigned to the first learner. Next, we fit a second learner to these residuals and obtain its predictions,

$\hat{y}_2 = l_2,$

which leaves the new residuals

$E_2 = E_1 - v_2 \hat{y}_2.$

Repeating this process k times yields the final boosted learner as the weighted sum

$L = v_1 l_1 + v_2 l_2 + \ldots + v_k l_k,$

where the weights $v_i$ reflect each learner's contribution.
One approach to train and build random forests relies on using randomForest()
under the randomForest package. It has the following components:
m<-randomForest(expression, data, ntree=500, mtry=sqrt(p))
• expression: the class variable and features we want to include in the model.
• data: training data containing class and features.
• ntree: number of voting decision trees.
• mtry: optional integer specifying the number of features to randomly select at
each split. The p stands for number of features in the data.
Let’s build a random forest using the Quality of Life dataset.
# install.packages("randomForest")
library(randomForest)
set.seed(123)
rf <- randomForest(CHARLSONSCORE ~ ., data = qol)
rf
##
## Call:
## randomForest(formula = CHARLSONSCORE ~ ., data = qol)
## Type of random forest: classification
## Number of trees: 500
## No. of variables tried at each split: 6
##
## OOB estimate of error rate: 46.13%
## Confusion matrix:
##      0   1 2 3 4 5 6 7 8 9 10 class.error
## 0  574 301 2 0 0 0 0 0 0 0  0   0.3454960
## 1  305 678 1 0 0 0 0 0 0 0  0   0.3109756
## 2   90 185 2 0 0 0 0 0 0 0  0   0.9927798
## 3   25 101 1 0 0 0 0 0 0 0  0   1.0000000
## 4    5  19 0 0 0 0 0 0 0 0  0   1.0000000
## 5    3   4 0 0 0 0 0 0 0 0  0   1.0000000
## 6    1   4 0 0 0 0 0 0 0 0  0   1.0000000
## 7    1   1 0 0 0 0 0 0 0 0  0   1.0000000
## 8    7   8 0 0 0 0 0 0 0 0  0   1.0000000
## 9    3   5 0 0 0 0 0 0 0 0  0   1.0000000
## 10   1   1 0 0 0 0 0 0 0 0  0   1.0000000
By default, the model contains 500 decision trees and tries 6 randomly selected variables at each
split. Its OOB (out-of-bag) error rate is about 46%, which corresponds to an accuracy of
about 54%. Note that the OOB error rate is not a re-substitution error. The
accompanying confusion matrix reflects the OOB error rates for the specific classes. All
of these error rates are reasonable estimates of future performance on
unseen data. We can see that this model is so far the best of all our models, although
it is still not good at predicting high CHARLSONSCORE values.
The caret package also supports random forest model building and evaluation. It
reports more detailed model performance evaluations. As usual, we need to
specify a re-sampling method and a parameter grid. As an example, we use the
10-fold CV re-sampling method. The grid for this model contains information
about the mtry parameter (the only tuning parameter for random forest).
Previously, we tried the default value mtry = $\sqrt{38} \approx 6$ (38 is the number of
features). This time we can compare multiple mtry parameters.
library(caret)
ctrl <- trainControl(method="cv", number=10)
grid_rf <- expand.grid(.mtry=c(2, 4, 8, 16))
Next, we apply the train() function with our ctrl and grid_rf settings.
set.seed(123)
m_rf <- train(CHARLSONSCORE ~ ., data = qol, method = "rf",
              metric = "Kappa", trControl = ctrl, tuneGrid = grid_rf)
m_rf
## Random Forest
##
## 2328 samples
## 38 predictor
## 11 classes: '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 2095, 2096, 2093, 2094, 2098, 2097, ...
## Resampling results across tuning parameters:
##
##   mtry  Accuracy   Kappa    
##    2    0.5223871  0.1979731
##    4    0.5403799  0.2309963
##    8    0.5382674  0.2287595
##   16    0.5421562  0.2367477
##
## Kappa was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 16.
This call may take a while to complete. The result appears to be a good model:
with mtry=16 we reached a relatively high accuracy and a good Kappa statistic.
This is a very good result for a learner with 11 classes.
The adabag package implements two flavors of the AdaBoost algorithm: one follows the
classical formulations of Breiman and Freund, and the other is Zhu's SAMME
algorithm. Let's see some examples:
set.seed(123)
qol <- read.csv("https://umich.instructure.com/files/481332/download?download_frd=1")
qol <- qol[!qol$CHARLSONSCORE==-9, -c(1, 2)]
qol$CHARLSONSCORE <- as.factor(qol$CHARLSONSCORE)
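The boosting call itself is not shown in the extract; a minimal sketch using the adabag package (the number of iterations mfinal and the coeflearn choice are assumptions; coeflearn may be set to "Breiman", "Freund", or "Zhu" to switch between the algorithm flavors mentioned above):

# Sketch: AdaBoost ensemble of classification trees on the QoL data
# install.packages("adabag")
library(adabag)
qol_boost <- boosting(CHARLSONSCORE ~ ., data = qol, mfinal = 100, coeflearn = "Breiman")
mean(predict(qol_boost, qol)$class == qol$CHARLSONSCORE)   # re-substitution accuracy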
https://rdrr.io/cran/adabag/man/adabag-package.htm
Table 15.3 Performance evaluation for several classification, prediction, and clustering methods

Model                                          Learning task   Method      Parameters
KNN                                            Classification  knn         k
Naïve Bayes                                    Classification  nb          fL, usekernel
Decision Trees                                 Classification  C5.0        model, trials, winnow
OneR Rule Learner                              Classification  OneR        None
RIPPER Rule Learner                            Classification  JRip        NumOpt
Linear Regression                              Regression      lm          None
Regression Trees                               Regression      rpart       cp
Model Trees                                    Regression      M5          pruned, smoothed, rules
Neural Networks                                Dual use        nnet        size, decay
Support Vector Machines (Linear Kernel)        Dual use        svmLinear   C
Support Vector Machines (Radial Basis Kernel)  Dual use        svmRadial   C, sigma
Random Forests                                 Dual use        rf          mtry
References
Zhu, J, Zou, H, Rosset, S, Hastie, T. (2009) Multi-class AdaBoost, Statistics and Its Interface, 2,
349–360.
Breiman, L. (1998): Arcing classifiers, The Annals of Statistics, 26(3), 801–849.
Freund, Y, Schapire, RE. (1996) Experiments with a new boosting algorithm, In Proceedings of the
Thirteenth International Conference on Machine Learning, 148–156, Morgan Kaufmann.
Chapter 16
Specialized Machine Learning Topics
This chapter presents some technical details about data formats, streaming,
optimization of computation, and distributed deployment of optimized learning
algorithms. Chapter 22 provides additional optimization details. We show format
conversion and working with XML, SQL, JSON, CSV, SAS and other data objects. In
addition, we illustrate SQL server queries, describe protocols for managing, classifying
and predicting outcomes from data streams, and demonstrate strategies for optimization,
improvement of computational performance, and parallel (MPI) and graphics (GPU)
computing.
The Internet of Things (IoT) leads to a paradigm shift of scientific inference –
from static data interrogated in a batch or distributed environment to on-demand
service-based Cloud computing. Here, we will demonstrate how to work with
specialized data, data-streams, and SQL databases, as well as develop and assess
on-the-fly data modeling, classification, prediction and forecasting methods.
Important examples to keep in mind throughout this chapter include high-frequency
data delivered in real time in hospital ICUs (e.g., microsecond
electroencephalography signals, EEGs), dynamically changing stock market data
(e.g., the Dow Jones Industrial Average Index, DJI), and weather patterns.
We will present (1) format conversion of XML, SQL, JSON, CSV, SAS and
other data objects, (2) visualization of bioinformatics and network data, (3)
protocols for managing, classifying and predicting outcomes from data streams,
(4) strategies for optimization, improvement of computational performance,
parallel (MPI) and graphics (GPU) computing, and (5) processing of very large
datasets.
Unlike the case studies we saw in the previous chapters, some real world data may
not always be nicely formatted, e.g., as CSV files. We must collect, arrange,
wrangle, and harmonize scattered information to generate computable data objects
that can be further processed by various techniques. Data wrangling and
preprocessing may take
over 80% of the time researchers spend interrogating complex multi-source data
archives. The following procedures will enhance your skills in collecting and
handling heterogeneous real world data. Multiple examples of handling long-and-
wide data, messy and tidy data, and data cleaning strategies can be found in this
JSS Tidy Data article by Hadley Wickham.
The R package rio imports and exports various types of file formats, e.g., tab-
separated (.tsv), comma-separated (.csv), JSON (.json), Stata (.dta), SPSS (.sav
and .por), Microsoft Excel (.xls and .xlsx), Weka (.arff), and SAS (.sas7bdat
and .xpt).
rio provides three important functions: import(), export() and convert().
They are intuitive, easy to understand, and efficient to execute. Take Stata (.dta)
files as an example. First, we can download 02_Nof1_Data.dta from our datasets
folder.
# install.packages("rio")
library(rio)
# Download the Stata .dta file locally first
# Local data can be loaded by:
# nof1 <- import("02_Nof1_Data.dta")
# The data can also be loaded remotely from the server:
nof1 <- read.csv("https://umich.instructure.com/files/330385/download?download_frd=1")
str(nof1)
## 'data.frame': 900 obs. of 10 variables:
##  $ ID       : int 1 1 1 1 1 1 1 1 1 1 ...
##  $ Day      : int 1 2 3 4 5 6 7 8 9 10 ...
##  $ Tx       : int 1 1 0 0 1 1 0 0 1 1 ...
##  $ SelfEff  : int 33 33 33 33 33 33 33 33 33 33 ...
##  $ SelfEff25: int 8 8 8 8 8 8 8 8 8 8 ...
##  $ WPSS     : num 0.97 -0.17 0.81 -0.41 0.59 -1.16 0.3 -0.34 -0.74 -0.38 ...
##  $ SocSuppt : num 5 3.87 4.84 3.62 4.62 2.87 4.33 3.69 3.29 3.66 ...
##  $ PMss     : num 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 ...
##  $ PMss3    : num 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 ...
##  $ PhyAct   : int 53 73 23 36 21 0 21 0 73 114 ...
The data are automatically stored as a data frame. Note that rio sets
stringsAsFactors = FALSE by default.
rio can help us export files into any other format we choose. To do this we
have to use the export() function.
#Sys.getenv("R_ZIPCMD", "zip") # Get the C Zip application
Sys.setenv(R_ZIPCMD="E:/Tools/ZIP/bin/zip.exe")
Sys.getenv("R_ZIPCMD", "zip")
## [1] "E:/Tools/ZIP/bin/zip.exe"
export(nof1, "02_Nof1.xlsx")
This line of code exports the Nof1 data in xlsx format to the R working
directory. Mac users may have a problem exporting *.xlsx files using rio because
of the lack of a zip tool, but they can still output other formats such as ".csv". An
alternative strategy to save an xlsx file is to use the xlsx package with the default
row.names=TRUE.
rio also provides a one-step process to convert and save data into alternative
formats. The following simple code allows us to convert and save the
02_Nof1_Data.dta file we just downloaded into a CSV file.
# convert("02_Nof1_Data.dta", "02_Nof1_Data.csv")
convert("02_Nof1.xlsx",
"02_Nof1_Data.csv")
You can see a new CSV file popup in the current working directory. Similar
transformations are available for other data formats and types.
Let’s use as an example the CDC Behavioral Risk Factor Surveillance System
(BRFSS) Data, 2013-2015. This file for the combined landline and cell phone
data set was exported from SAS V9.3 in the XPT transport format. This file
contains 330 variables and can be imported into SPSS or STATA. Please note:
some of the variable labels get truncated in the process of converting to the XPT
format.
Be careful – this compressed (ZIP) file is over 315MB in size!
# install.packages("Hmisc")
library(Hmisc)
memory.size(max=T)
## [1] 115.81
pathToZip <- tempfile()
download.file("http://www.socr.umich.edu/data/DSPA/BRFSS_2013_2014_2015.zip", pathToZip)
# let's just pull two of the 3 years of data (2013 and 2015)
brfss_2013 <- sasxport.get(unzip(pathToZip)[1])
## Processing SAS dataset LLCP2013   ..
dim(brfss_2013); object.size(brfss_2013)
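The regression call producing the summary below is not included in the extract; a sketch of how the model could be fit (the variable names has_plan and x.race are taken from the glm call shown in the output; deriving has_plan from the raw BRFSS insurance-coverage item is an assumed preprocessing step):

# Assumed fit (names taken from the summary output below):
# has_plan - binary indicator of having any health care coverage
# x.race   - 9-level race/ethnicity variable in brfss_2013
gml1 <- glm(has_plan ~ as.factor(x.race), data = brfss_2013, family = binomial)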
summary(gml1)
## 
## Call:
## glm(formula = has_plan ~ as.factor(x.race), family = binomial, 
##     data = brfss_2013)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1862   0.4385   0.4385   0.4385   0.8047  
## 
## Coefficients:
##                     Estimate Std. Error  z value Pr(>|z|)    
## (Intercept)         2.293549   0.005649  406.044   <2e-16 ***
## as.factor(x.race)2 -0.721676   0.014536  -49.647   <2e-16 ***
## as.factor(x.race)3 -0.511776   0.032974  -15.520   <2e-16 ***
## as.factor(x.race)4 -0.329489   0.031726  -10.386   <2e-16 ***
## as.factor(x.race)5 -1.119329   0.060153  -18.608   <2e-16 ***
## as.factor(x.race)6 -0.544458   0.054535   -9.984   <2e-16 ***
## as.factor(x.race)7 -0.510452   0.030346  -16.821   <2e-16 ***
## as.factor(x.race)8 -1.332005   0.012915 -103.138   <2e-16 ***
## as.factor(x.race)9 -0.582204   0.030604  -19.024   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
Next, we'll examine the odds (or rather the log odds ratio, LOR) of having a
health care plan (HCP) by race (R). The LORs are calculated for two array
dimensions, separately for each race level (the presence of a health care plan (HCP) is
binary, whereas race (R) has 9 levels, R1, R2, ..., R9). For example, the odds
ratio of having a HCP for R1 : R2 is:

$OR_{R_1:R_2} = \frac{P(HCP \mid R_1)/(1 - P(HCP \mid R_1))}{P(HCP \mid R_2)/(1 - P(HCP \mid R_2))}.$
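The calls that open the database connections used below are not part of the extract; a minimal sketch under stated assumptions (DBI with RSQLite for a local in-memory database myConnection, and RMySQL for the UCSC Genome Browser's public read-only server, which accepts the user name "genome"):

# Assumed setup for the SQL examples that follow
library(DBI)
library(RSQLite)    # local, in-memory SQL database
library(RMySQL)     # client for the remote UCSC MySQL server
myConnection <- dbConnect(RSQLite::SQLite(), dbname = ":memory:")
ucscGenomeConn <- dbConnect(MySQL(), user = "genome",
                            host = "genome-mysql.cse.ucsc.edu")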
result <- dbGetQuery(ucscGenomeConn, "show databases;")
dbListTables(myConnection)
## character(0)
# Add tables to the local SQL DB
data(USArrests)
dbWriteTable(myConnection, "USArrests", USArrests)
dbWriteTable(myConnection, "brfss_2013", brfss_2013)
dbWriteTable(myConnection, "brfss_2015", brfss_2015)
## [1] TRUE
## avg(Assault)
## 1 48.00
## 2 81.00
## 3 152.00
## 4 211.50
## 5 271.00
## 6 190.00
## 7 83.00
## 8 109.00
## 9 109.00
## 10 120.00
## 11 57.00
## 12 56.00
## 13 236.00
## 14 188.00
## 15 186.00
## 16 102.00
## 17 156.00
## 18 113.00
## 19 122.25
## 20 229.50
## 21 151.00
## 22 231.50
## 23 172.00
## 24 145.00
## 25 255.00
## 26 120.00
## 27 110.00
## 28 204.00
## 29 237.50
## 30 252.00
## 31 147.50
16.1 Working with Specialized Data and Databases 521
## 32 149.00
## 33 254.00
## 34 174.00
## 35 159.00
## 36 276.00
## 1 0.4992652
## 2 -1.4952515
## 3 -2.5037326
## 4 -1.3536797
# reset the DB query
# dbClearResult(myQuery)
# clean up
dbDisconnect(myConnection)
## [1] TRUE
We are already familiar with (pseudo) random number generation (e.g., rnorm(100, 10, 4)
or runif(100, 10, 20)), which algorithmically generates computer values following
specified distributions. There are also web services, e.g., random.org,
that can provide true random numbers based on atmospheric noise, rather than
using a pseudo random number generation protocol. Below is one example of
generating a total of 300 numbers arranged in 3 columns, each of 100 rows of
random integers (in decimal format) between 100 and 200.
# https://www.random.org/integers/?num=300&min=100&max=200&col=3&base=10&format=plain&rnd=new
siteURL <- "http://random.org/integers/"   # base URL
shortQuery <- "num=300&min=100&max=200&col=3&base=10&format=plain&rnd=new"
completeQuery <- paste(siteURL, shortQuery, sep="?")   # concatenate the URL and the query string
rngNumbers <- read.table(file=completeQuery)           # and read the data
rngNumbers
## V1 V2 V3
## 1 144 179 131
## 2 127 160 150
## 3 142 169 109
…
## 98 178 103 134
## 99 173 178 156
## 100 117 118 110
The RCurl package provides an amazing tool for extracting and scraping information
from websites. Let's install it and extract information from a SOCR website.
# install.packages("RCurl")
library(RCurl)
## Loading required package: bitops
web <- getURL("http://wiki.socr.umich.edu/index.php/SOCR_Data", followlocation = TRUE)
str(web, nchar.max = 200)
## chr "<!DOCTYPE html>\n<html lang=\"en\" dir=\"ltr\"
class=\"client-nojs\ ">\n<head>\n<meta charset=\"UTF-8\"
/>\n<title>SOCR Data - SOCR</title>\n<me ta http-equiv=\"X-UA-
Compatible\" content=\"IE=EDGE\" />"| __truncated__
The web object looks incomprehensible. This is because most websites are
wrapped in XML/HTML hypertext or include JSON formatted metadata. RCurl
deals with special HTML tags and website metadata.
To work with web pages directly, the httr package may be a better choice than
RCurl, as it returns a list that is much easier to interpret.
#install.packages("httr")
library(httr)
web<-GET("http://wiki.socr.umich.edu/index.php/SOCR_Data")
str(web[1:3])
## List of 3
##  $ url        : chr "http://wiki.socr.umich.edu/index.php/SOCR_Data"
##  $ status_code: int 200
## $ headers :List of 12
## ..$ date : chr "Mon, 03 Jul 2017 19:09:56 GMT"
## ..$ server : chr "Apache/2.2.15 (Red Hat)"
## ..$ x-powered-by : chr "PHP/5.3.3"
## ..$ x-content-type-options: chr "nosniff"
## ..$ content-language : chr "en"
## ..$ vary : chr "Accept-Encoding,Cookie"
## ..$ expires : chr "Thu, 01 Jan 1970 00:00:00 GMT"
## ..$ cache-control : chr "private, must-revalidate, max-
age=0"
## ..$ last-modified : chr "Sat, 22 Oct 2016 21:46:21 GMT"
## ..$ connection : chr "close"
## ..$ transfer-encoding : chr "chunked"
## ..$ content-type : chr "text/html; charset=UTF-8"
## ..- attr(*, "class")= chr [1:2] "insensitive" "list"
A combination of the RCurl and the XML packages could help us extract only the
plain text in our desired webpages. This would be very helpful to get information
from heavy text-based websites.
web <- getURL("http://wiki.socr.umich.edu/index.php/SOCR_Data", followlocation = TRUE)
#install.packages("XML")
library(XML)
web.parsed<-htmlParse(web, asText = T)
plain.text<-xpathSApply(web.parsed, "//p", xmlValue)
cat(paste(plain.text, collapse = "\n"))
## The links below contain a number of datasets that may be used for demonstration purposes in probability and statistics education. There are two types of data - simulated (computer-generated using random sampling) and observed (research, observationally or experimentally acquired).
## 
## The SOCR resources provide a number of mechanisms to simulate data using computer random-number generators. Here are some of the most commonly used SOCR generators of simulated data:
## 
## The following collections include a number of real observed
The process of extracting data from complete web pages and storing it in a
structured data format is called scraping. However, before starting to scrape data
from a website, we need to understand the underlying HTML structure of that
specific website. Also, we have to check the terms of use of that website to make sure
that scraping from it is allowed.
The R package rvest is a very good place to start “harvesting” data from
websites.
To start with, we use read_html() to store the SOCR data website into a
xmlnode object.
library(rvest)
SOCR <- read_html("http://wiki.socr.umich.edu/index.php/SOCR_Data")
SOCR
## {xml_document}
## <html lang="en" dir="ltr" class="client-nojs">
## [1] <head>\n<meta http-equiv="Content-Type" content="text/html;
charset= ...
## [2] <body class="mediawiki ltr sitedir-ltr ns-0 ns-subject page-
SOCR_Dat ...
From the summary structure of SOCR, we can discover that there are two important
hypertext section markups, <head> and <body>. Also, notice that the SOCR data
website uses <title> and </title> tags to separate the title in the <head> section. Let's
use html_node() to extract the title information based on this knowledge.
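The extraction call itself is not shown above; a minimal sketch (the CSS selector is an assumption; the printed title matches the <title> tag visible in the earlier str(web) output):

# Sketch: extract the page title and convert it to plain text
SOCR %>% html_node("head title") %>% html_text()
## [1] "SOCR Data - SOCR"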
Here we used %>% operator, or pipe, to connect two functions. The above line
of code creates a chain of functions to operate on the SOCR object. The first
function in the chain html_node() extracts the title from head section. Then,
html_text() translates HTML formatted hypertext into English. More on R piping
can be found in the magrittr package.
Another function, rvest::html_nodes(), can be very helpful in scraping. Similar
to html_node(), html_nodes() can help us extract multiple nodes from an XML node
object. Assume that we want to obtain the meta elements (usually the page
description, keywords, author of the document, last modified date, and other
metadata) from the SOCR data website. We apply html_nodes() to the SOCR
object to extract the hypertext data, e.g., lines starting with <meta> in the <head>
section of the HTML page source. Optionally, we can use html_attrs(), which extracts
attributes, text, and tag names from HTML, to obtain the main text attributes.
meta <- SOCR %>% html_nodes("head meta") %>% html_attrs()
meta
## [[1]]
##     http-equiv                     content 
## "Content-Type"  "text/html; charset=UTF-8" 
## 
## [[2]]
## charset 
## "UTF-8" 
## 
## [[3]]
##        http-equiv    content 
## "X-UA-Compatible"  "IE=EDGE" 
## 
## [[4]]
##        name            content 
## "generator" "MediaWiki 1.23.1" 
## 
## [[5]]
##                          name  content 
## "ResourceLoaderDynamicStyles"       ""
16.1.7 Parsing JSON from Web APIs
nof1 <- GET("https://umich.instructure.com/files/1760327/download?download_frd=1")
nof1
## Response [https://instructure-
uploads.s3.amazonaws.com/account_1770000000
0000001/attachments/1760327/02_Nof1_Data.json?response-content-
disposition=a ttachment%3B%20filename%3D%2202_Nof1_Data.json%22%3B
%20filename%2A%3DUTF-8%2 7%2702%255FNof1%255FData.json&X-Amz-
Algorithm=AWS4-HMAC-SHA256&X-Amz-Credent ial=AKIAJFNFXH2V2O7RPCAA
%2F20170703%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Da
te=20170703T190959Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-
Amz-Signa
ture=ceb3be3e71d9c370239bab558fcb0191bc829b98a7ba61ac86e27a2fc3c1e8c
e]
## Date: 2017-07-03 19:10
## Status: 200
## Content-Type: application/json
## Size: 109 kB
## [{"ID":1,"Day":1,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"...
## {"ID":1,"Day":2,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":3,"Tx":0,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":4,"Tx":0,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":5,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":6,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":7,"Tx":0,"SelfEff":33,"SelfEff25":8,"WPSS"...
## {"ID":1,"Day":8,"Tx":0,"SelfEff":33,"SelfEff25":8,"WPSS"... ##
{"ID":1,"Day":9,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"...
## {"ID":1,"Day":10,"Tx":1,"SelfEff":33,"SelfEff25":8,"WPSS"...
## ...
We can see that JSON objects are very simple. The data structure is organized
using hierarchies marked by square brackets. Each piece of information is
formatted as a {key:value} pair.
The jsonlite package is a very useful tool for importing online JSON-formatted
datasets directly into a data frame. Its syntax is very straightforward.
#install.packages("jsonlit
e") library(jsonlite)
nof1_lite<-
fromJSON("https://umich.instructure.com/files/1760327/download?
download_frd=1") class(nof1_lite)
## [1] "data.frame"
16.1.8 Reading and Writing Microsoft Excel Spreadsheets Using XLSX
We can convert an xlsx dataset into CSV and use read.csv() to load this kind of
dataset. However, R provides an alternative, the read.xlsx() function in the xlsx package,
to simplify this process. Take our 02_Nof1_Data.xls data in the class files as an
example. We need to download the file first.
# install.packages("xlsx")
library(xlsx)
nof1<-read.xlsx("C:/Users/Folder/02_Nof1.xlsx", 1)
str(nof1)
## 'data.frame': 900 obs. of 10 variables:
##  $ ID       : num 1 1 1 1 1 1 1 1 1 1 ...
##  $ Day      : num 1 2 3 4 5 6 7 8 9 10 ...
##  $ Tx       : num 1 1 0 0 1 1 0 0 1 1 ...
##  $ SelfEff  : num 33 33 33 33 33 33 33 33 33 33 ...
##  $ SelfEff25: num 8 8 8 8 8 8 8 8 8 8 ...
##  $ WPSS     : num 0.97 -0.17 0.81 -0.41 0.59 -1.16 0.3 -0.34 -0.74 -0.38 ...
##  $ SocSuppt : num 5 3.87 4.84 3.62 4.62 2.87 4.33 3.69 3.29 3.66 ...
##  $ PMss     : num 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 4.03 ...
##  $ PMss3    : num 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 1.03 ...
##  $ PhyAct   : num 53 73 23 36 21 0 21 0 73 114 ...
The last argument, 1, stands for the first Excel sheet, as any Excel file may
include a large number of sheets. Also, we can download the xls or xlsx file
into our R working directory so that it is easier to specify the file path.
Sometimes more complex protocols may be necessary to ingest data from
XLSX documents. For instance, if the XLSX doc is large, includes many tables
and is only accessible via HTTP protocol from a web-server. Below is an example
of downloading the second table, ABIDE_Aggregated_Data, from the multitable
Autism/ABIDE XLSX dataset:
# install.packages("openxlsx");
library(openxlsx) tmp = tempfile(fileext =
".xlsx")
download.file(url =
"https://umich.instructure.com/files/3225493/download?do
wnload_frd=1",
destfile = tmp, mode="wb") df_Autism <-
openxlsx::read.xlsx(xlsxFile = tmp, sheet =
"ABIDE_Aggregated_Data", skipEmptyRows = TRUE) dim(df_Autism) ##
[1] 1098 2145
16.2 Working with Domain-Specific Data
Genetic data are stored in widely varying formats and usually have more feature
variables than observations. They could have 1,000 columns and only 200 rows.
One of the commonly used pre-processing steps for such datasets is variable
selection. We will talk about this in Chap. 17.
The Bioconductor project created powerful R functionality (packages and tools)
for analyzing genomic data, see Bioconductor for more detailed information.
Social network data and graph datasets describe the relations between nodes
(vertices) using connections (links or edges) joining the node objects. Assuming we
have N objects, we can have N*(N-1) directed links establishing paired
associations between the nodes. Let's use an example with N=4 to demonstrate a
simple graph potentially modeling node linkage (Table 16.1).
If we change the a → b entries to indicator variables (0 or 1) capturing whether we
have an edge connecting a pair of nodes, then we get the graph adjacency matrix.
Edge lists provide an alternative way to represent network connections. Every
line in the list contains a connection between two nodes (objects) (Table 16.2).
The edge list in Table 16.2 lists three network connections: object 1 is linked to
object 2; object 1 is linked to object 3; and object 2 is linked to object 3. Note that
edge lists can represent both directed and undirected networks or graphs.
We can imagine that if N is very large, e.g., social networks, the data
representation and analysis may be resource intense (memory or computation). In
R, we have multiple packages that can deal with social network data. One user-
friendly example is provided using the igraph package. First, let’s build a toy
example and visualize it using this package (Fig. 16.1).
#install.packages("igraph")
library(igraph)
g<-graph(c(1, 2, 1, 3, 2, 3, 3, 4),
n=10) plot(g)
Here c(1, 2, 1, 3, 2, 3, 3, 4) is an edge list defining 4 edges, and n=10 indicates that we
have 10 nodes (objects) in total. The small arrows in the graph show the directed
network connections. We might notice that nodes 5-10 are scattered around in the
graph. This is because they are not included in the edge list, so there are no
network connections between them and the rest of the network.
Fig. 16.1 Plot of the toy directed graph g (nodes 5 through 10 appear as isolated vertices)
Now let’s examine the co-appearance network of Facebook circles. The data
contains anonymized circles (friends lists) from Facebook collected from survey
participants using a Facebook app. The dataset only includes edges (circles,
88,234) connecting pairs of nodes (users, 4,039) in the member social networks.
The values on the connections represent the number of links/edges within a
circle. We have a huge edge-list made of scrambled Facebook user IDs. Let’s load
this dataset into R first. The data is stored in a text file. Unlike CSV files, text
files in table format need to be imported using read.table(). We are using the
header=F option to let R know that we don't have a header in the text file, which
contains only tab-separated node pairs (indicating the social connections, edges,
between Facebook users).
soc.net.data<-
read.table("https://umich.instructure.com/files/2854431/downlo ad?
download_frd=1", sep=" ", header=F) head(soc.net.data)
## V1 V2
## 1 0 1
## 2 0 2
## 3 0 3
## 4 0 4
## 5 0 5
## 6 0 6
Now the data is stored in a data frame. To make this dataset ready for igraph
processing and visualization, we need to convert soc.net.data into a matrix object.
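The conversion step itself is not shown in the extract; a minimal sketch:

# Assumed conversion: coerce the two-column edge-list data frame into a numeric matrix
soc.net.data.mat <- as.matrix(soc.net.data)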
# remove the first 347 edges (to wipe out the degenerate "0" node)
graph_m<-graph.edgelist(soc.net.data.mat[-c(0:347), ], directed =
F)
Before we display the social network graph we may want to examine our model
first.
summary(graph_m)
## IGRAPH U--- 4038 87887 --
This is an extremely brief yet informative summary. The first line, U--- 4038 87887,
includes potentially four letters and two numbers. The first letter could be
U or D, indicating undirected or directed edges. A second letter N would mean that
the vertex set has a "name" attribute. A third letter W indicates a weighted graph;
since we didn't add weights in our analysis, the third position is empty ("-"). A fourth
character is an indicator for bipartite graphs, whose vertices can be divided into
two disjoint sets where each vertex from one set connects to one vertex in the other
set. The two numbers following the four letters represent the number of nodes and the
number of edges, respectively. Now let's render the graph (Fig. 16.2).
plot(graph_m)
This graph is very complicated. We can still see that some nodes are connected to
many more nodes than others. To obtain such information we can use the
degree() function, which lists the number of edges for each node.
degree(graph_m)
Skimming the table, we can find that the 107-th user has as many as 1,044
connections, which makes this user a highly-connected hub. Likely, this node may
have higher social relevance.
Some edges might be more important than other edges because they serve as
bridges linking clouds of nodes. To compare their importance, we can use the
betweenness centrality measurement. Betweenness centrality measures centrality
in a network; high centrality for a specific node indicates influence.
betweenness() can help us calculate this measurement.
http://socr.umich.edu/html/Navigators.htm
http://socr.ucla.edu/SOCR_HyperTree.json
Fig. 16.3 Live demo: a dynamic graph representation of the SOCR resources
We can try another example using SOCR hierarchical data, which is also
available for dynamic exploration as a tree graph. Let’s read its JSON data source
using the jsonlite package (Fig. 16.3).
tree.json<-fromJSON("http://socr.ucla.edu/SOCR_HyperTree.json",
simplifyDataFrame = FALSE)
# install.packages("data.tree")
library(data.tree) tree.graph<-
as.Node(tree.json, mode = "explicit")
In this graph, "AboutSOCR", which is located at the center, represents the root
node of the tree graph.
16.3 Data Streaming

The proliferation of Cloud services and the emergence of modern technology in all
aspects of human experience lead to a tsunami of data, much of which is
streamed in real time. The interrogation of such voluminous data is an increasingly
important area of research. Data streams are ordered, often unbounded, sequences
of data points created continuously by a data generator. All of the data mining,
interrogation and forecasting methods we discuss here are also applicable to data
streams.
16.3.1 Definition
Mathematically, a data stream can be represented as an ordered sequence

$Y = \{y_1, y_2, y_3, \ldots, y_t, \ldots\},$

where the (time) index, t, reflects the order of the observation/record; the individual
items may be single numbers, simple vectors in multidimensional space, or objects,
e.g., the structured Ann Arbor Weather (JSON) record and its corresponding structured form.
Some streaming data is streamed because it’s too large to be downloaded shotgun
style and some is streamed because it’s continually generated and serviced. This
presents the potential problem of dealing with data streams that may be unlimited.
Notes:
• Data sources: Real or synthetic stream data can be used. Random simulation
streams may be created by rstream. Real stream data may be piped from
financial data providers, the WHO, World Bank, NCAR and other sources.
• Inference Techniques: Many of the data interrogation techniques we have seen
can be employed for dynamic stream data, e.g., factas for PCA, and rEMM and
birch for clustering. Clustering and classification methods capable of
processing data streams have also been developed, e.g., Very Fast Decision Trees
(VFDT), among others.
The stream package provides data stream mining algorithms using the fpc, clue,
cluster, clusterGeneration, MASS, and proxy packages. In addition, the package
streamMOA provides an rJava interface to the Java-based data stream clustering
algorithms available in the Massive Online Analysis (MOA) framework for stream
classification, regression and clustering.
If you need a deeper exposure to data streaming in R, we recommend you go
over the stream vignettes.
We will now try the k-means and the density-based (D-Stream) data stream clustering
algorithms. In D-Stream, micro-clusters are formed by grid cells of size gridsize whose
density (Cm) is at least 1.2 times the average cell density. The model is updated
with the next 500 data points from the stream.
First, let's run the k-means clustering with k = 5 clusters and plot the resulting
micro- and macro-clusters (Fig. 16.5).
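The stream-construction and clustering calls are not included in the extract; a minimal sketch following the stream package API (the object names stream_5G and kmc are used later in the text; the Gaussian-stream parameters and the seed are assumptions):

# Sketch: simulated 2D Gaussian stream clustered with k-means
library(stream)
set.seed(12345)
stream_5G <- DSD_Gaussians(k = 5, d = 2)   # data stream with 5 Gaussian clusters in 2D
kmc <- DSC_Kmeans(k = 5)                   # k-means clusterer
update(kmc, stream_5G, n = 500)            # cluster the first 500 points from the stream
plot(kmc, stream_5G)                       # micro- and macro-clusters, cf. Fig. 16.5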
Fig. 16.5 Micro and macro clusters of a 5-means clustering of the first 500 points of the
streamed simulated 2D Gaussian kernels
We can re-cluster the data using k-means with 5 clusters and plot the resulting
micro- and macro-clusters (Fig. 16.6).
Note the subtle changes in the clustering results between kmc and km_G5.
Fig. 16.6 Micro- and macro- clusters of a 5-means clustering of the next 1,000 points of the
streamed simulated 2D Gaussian kernels
For DSD objects, some basic stream functions include print(), plot(), and
write_stream(); the latter can save part of a data stream to disk. DSD_Memory and
DSD_ReadCSV objects also include member functions like reset_stream(), which resets
the position in the stream to its beginning.
To request a new batch of data points from the stream, we use get_points(). This
chooses a random cluster (based on the probability weights in p_weight) and draws a
point from the multivariate Gaussian distribution (mean = mu, covariance matrix = Σ)
of that cluster. Below, we pull n = 10 new data points from the stream (Fig. 16.7).
Fig. 16.7 Scatterplot of the next batch of 700 random Gaussian points in 2D
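The call pulling the 10 points is not shown (only the tail of its printout survives below); a minimal sketch:

new_p <- get_points(stream_5G, n = 10)   # draw 10 new points from the stream
new_p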
##           X1        X2
## 9  0.5030676 0.7560124
## 10 0.7930719 0.0937701

new_p <- get_points(stream_5G, n = 100, class = TRUE)
head(new_p, n = 20)
##           X1         X2 class
## 1  0.7915730 0.09533001     4
## 2  0.4305147 0.36953997     2
## 3  0.4914093 0.82120395     3
## 4  0.7837102 0.06771246     4
## 5  0.9233074 0.48164544     5
## 6  0.8606862 0.49399269     5
## 7  0.3191884 0.27607324     2
## 8  0.2528981 0.27596700     2
## 9  0.6627604 0.68988585     3
## 10 0.7902887 0.09402659     4
## 11 0.7926677 0.09030248     4
## 12 0.9393515 0.50259344     5
## 13 0.9333770 0.62817482     5
## 14 0.7906710 0.10125432     4
## 15 0.1798662 0.24967850     2
## 16 0.7985790 0.08324688     4
## 17 0.5247573 0.57527380     3
## 18 0.2358468 0.23087585     2
## 19 0.8818853 0.49668824     5
## 20 0.4255094 0.81789418     3

plot(stream_5G, n = 700, method = "pc")
Note that if you add noise to your stream, e.g., stream_Noise <-
DSD_Gaussians(k = 5, d = 4, noise = .1, p = c(0.1, 0.5, 0.3, 0.9, 0.1)),
then the noise points that are not classified as part of any cluster will
have an NA class label.
set.seed(12345)
stream_Bench <- DSD_Benchmark(1)
stream_Bench
## Benchmark 1: Two clusters moving diagonally from left to right,
meeting in
## the center (5% noise).
## Class: DSD_MG, DSD_R, DSD_data.frame,
DSD ## With 2 clusters in 2 dimensions.
## Time is 1

library("animation")
reset_stream(stream_Bench)
animate_data(stream_Bench, n=10000, horizon=100, xlim=c(0,1), ylim=c(0,1))
This benchmark generator creates two 2D clusters moving in 2D. One moves
from top-left to bottom-right, the other from bottom-left to top-right. Then they
meet at the center of the domain, the 2 clusters overlap and then split again.
Concept drift in the stream can be depicted by requesting (10) times 300 data
points from the stream and animating the plot. Fast-forwarding the stream can be
accomplished by requesting, but ignoring, (2000) points in between the (10) plots.
The output of the animation below is suppressed to save space.
These data represent the X and Y spatial knee-pain locations for over 8,000
patients, along with labels indicating the knee location: Front, Back, Left and Right.
Let's try to read the SOCR Knee Pain Dataset as a stream.
library("XML"); library("xml2"); library("rvest")
wiki_url <-
read_html("http://wiki.socr.umich.edu/index.php/SOCR_Data_KneePa
inData_041409")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body-primary" role="main">\n\t<a
id="top ...
##             x         y
## 9  0.32329635 0.4942197
## 10 0.30744849 0.5086705
streamKnee
## Memory Stream Interface
## Class: DSD_Memory, DSD_R, DSD_data.frame, DSD
## With NA clusters in 2 dimensions
## Contains 8666 data points - currently at position 11 - loop is
TRUE
# Stream pointer is in position 11 now
##             x         y
## 208 0.1870048 0.6329480
## 209 0.1220285 0.4132948
streamKnee
## Memory Stream Interface
## Class: DSD_Memory, DSD_R, DSD_data.frame, DSD
## With NA clusters in 2 dimensions
## Contains 8666 data points - currently at position 210 - loop is
TRUE
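The D-Stream clustering call that produces the micro/macro-cluster counts below is not in the extract; a minimal sketch (the gridsize and Cm values are assumptions):

# Sketch: density-grid (D-Stream) clustering of the knee-pain stream
dsc_streamKnee <- DSC_DStream(gridsize = 0.1, Cm = 0.4)
update(dsc_streamKnee, streamKnee, n = 500)   # cluster 500 points, cf. Fig. 16.9
dsc_streamKnee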
## Number of micro-clusters: 16
## Number of macro-clusters: 11
Fig. 16.9 Data stream clustering and classification of the SOCR knee-pain dataset (n=500)
Fig. 16.10 5-Means stream clustering of the SOCR knee pain data
head(get_centers(dsc_streamKnee))
## [,1] [,2]
## [1,] 0.05 0.45
## [2,] 0.05 0.55
## [3,] 0.15 0.35
## [4,] 0.15 0.45
## [5,] 0.15 0.55
## [6,] 0.15 0.65

plot(dsc_streamKnee, streamKnee, xlim=c(0,1), ylim=c(0,1))
The quality of the stream clustering can be assessed using the purity index,

$Purity = \frac{1}{N} \sum_{i=1}^{k} \max_{j} |c_i \cap t_j|, \quad 0 \leq Purity \leq 1,$

where N is the number of points, $c_i$ are the identified clusters, and $t_j$ are the true class labels.
Fig. 16.11 Animated continuous 5-means stream clustering of the knee pain data
Fig. 16.12 Continuous stream clustering and purity index across iterations
animate_data(streamKnee, n=1000, horizon=100,xlim=c(0,1), ylim =
c(0,1))
## points purity
## 1 1 0.9600000
## 2 101 0.9043478
## 3 201 0.9500000
…
## 49 4801 0.9047619
## 50 4901 0.8850000
Figure 16.13 shows the average clustering purity as we evaluate the stream
clustering across the streaming points.
# Synthetic Gaussian example
# stream <- DSD_Gaussians(k = 3, d = 2, noise = .05)
# dstream <- DSC_DStream(gridsize = .1)
# update(dstream, stream, n = 2000)
# evaluate(dstream, stream, n = 100)
## 49 4801 0.9772727
## 50 4901 0.9777778
16.4 Optimization and Improving the Computational Performance

Here and in previous chapters, e.g., Chap. 15, we notice that R may sometimes be
slow and memory-inefficient. These problems may be severe, especially for
datasets with millions of records or when using complex functions. There are
packages for processing large datasets and memory optimization – bigmemory,
biganalytics, bigtabulate, etc.
We have also seen long execution times when running processes that ingest, store
or manipulate huge data.frame objects. The dplyr package, created by Hadley
Wickham and Romain François, provides a faster route to manage such large
datasets in R. It creates an object called tbl, similar to data.frame, which has an
in-memory column-like structure. R reads these objects much faster than data frames.
To make a tbl object we can either convert an existing data frame to tbl or
connect to an external database. Converting from data frame to tbl is quite easy.
All we need to do is call the function as.tbl().
#install.packages("dplyr")
library(dplyr) nof1_tbl<-
as.tbl(nof1); nof1_tbl
## # A tibble: 900 × 10
##       ID   Day    Tx SelfEff SelfEff25  WPSS SocSuppt  PMss PMss3 PhyAct
##    <dbl> <dbl> <dbl>   <dbl>     <dbl> <dbl>    <dbl> <dbl> <dbl>  <dbl>
##  1     1     1     1      33         8  0.97     5.00  4.03  1.03     53
##  2     1     2     1      33         8 -0.17     3.87  4.03  1.03     73
##  3     1     3     0      33         8  0.81     4.84  4.03  1.03     23
…
##  8     1     8     0      33         8 -0.34     3.69  4.03  1.03      0
##  9     1     9     1      33         8 -0.74     3.29  4.03  1.03     73
## 10     1    10     1      33         8 -0.38     3.66  4.03  1.03    114
## # ... with 890 more rows
This looks like a normal data frame. If you are using R Studio, displaying the
nof1_tbl will show the same output as nof1.
16.4.2 Making Data Frames Faster with Data.Table
Similar to tbl, the data.table package provides another alternative to the data frame
object representation. data.table objects are processed by R much faster than
standard data frames. Also, all of the functions that accept a data frame can be
applied to data.table objects as well. The function fread() is able to read a local
CSV file directly into a data.table.
#install.packages("data.table") library(data.table)
nof1<-fread("C:/Users/Dinov/Desktop/02_Nof1_Data.csv")
nof1[ID==1, mean(PhyAct)]
## [1] 52.66667
This useful functionality can also help us run complex operations with only a
few lines of code. One of the drawbacks of using data.table objects is that they are
still limited by the available system memory.
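For instance, the same bracket syntax can compute grouped summaries in a single line; a small illustrative example (the column names follow the Nof1 data loaded above):

# Sketch: data.table's by= argument computes a grouped summary in one call
nof1[, .(avgPhyAct = mean(PhyAct)), by = ID]   # mean physical activity per subject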
# install.packages("ff")
library(ff)
# vitalsigns<-read.csv.ffdf(file="UQ_VitalSignsData_Case04.csv",
header=T)
vitalsigns<read.csv.ffdf(file="https://umich.instructure.com/files/
366335/download? download_frd=1", header=T)
As mentioned earlier, we cannot apply functions directly on this object.
mean(vitalsigns$Pulse)
## Warning in mean.default(vitalsigns$Pulse): argument is not
numeric or
## logical: returning NA
## [1] NA
For basic calculations on such large datasets, we can use another package,
ffbase. It allows operations on ffdf objects using simple tasks like: mathematical
operations, query functions, summary statistics and bigger regression models
using packages like biglm, which will be mentioned later in this chapter.
# install.packages("ffbase") library(ffbase)
mean(vitalsigns$Pulse)
## [1] 108.7185
To measure how much time can be saved with different methods, we can use the
system.time() function.
system.time(mean(vitalsigns$Pulse))
## user system elapsed
## 0 0 0
This means calculating the mean of Pulse column in the vitalsigns dataset takes
less than 0.001 seconds. These values will vary between computers, operating
systems, and states of operations.
16.5 Parallel Computing

We will introduce two packages for parallel computing, multicore and snow (their
core components are included in the base package parallel). They take different
approaches to multitasking. However, to use these packages, you need to have a
relatively modern multicore computer. Let's check how many cores your
computer has; the parallel::detectCores() function provides this information.
parallel is a base package, so there is no need to install it prior to using it.
library(parallel); detectCores()
## [1] 8
So, there are eight (8) cores in my computer. I will be able to run up to 6-8
parallel jobs on this computer.
The multicore package simply uses the multitasking capabilities of the kernel,
the computer's operating system, to "fork" additional R sessions that share the
same memory. Imagine that we open several R sessions in parallel and let each of
them do part of the work. Now, let's examine how this can save time when
running complex protocols or dealing with large datasets. To start with, we can
use the mclapply() function, which is similar to lapply(); it applies a function
over a vector and returns a vector of lists. Instead of applying the function
sequentially, mclapply() divides the complete computational task and delegates
portions of it to each available core. To demonstrate this procedure, we will
construct a simple, yet time-consuming, task of generating random numbers. Also,
we can use the system.time() function to track execution time.
set.seed(123)
system.time(c1<-rnorm(10000000))
The unlist() is used at the end to combine results from different cores into a
single vector. Each line of code creates 10,000,000 random numbers. The c1 call
took the longest time to complete. The c2 call used two cores to finish the task
(each core handled 5,000,000 numbers) and used less time than c1. Finally, c4
used all four cores to finish the task and successfully reduced the overall time.
We can see that when we use more cores the overall time is significantly
reduced.
The snow package allows parallel computing on multicore multiprocessor
machines or a network of multiple machines. It might be more difficult to use but
it’s also certainly more flexible. First we can set how many cores we want to use
via makeCluster() function.
# install.packages("snow")
library(snow) cl<-
makeCluster(2)
This call might cause your computer to pop up a message warning about access
through the firewall. To perform the same task we can use the parLapply() function
in the snow package. Note that we have to pass the cluster object we created with
the previous makeCluster() call.
system.time(c2<-unlist(parLapply(cl, c(5000000, 5000000),
function(x) { rnorm(x)})))
## user system elapsed
## 0.11 0.11 0.64
When using parLapply(), we have to specify the cluster, the list (or vector) of inputs,
and the function that will be applied to each element. Remember to stop the cluster
after completing the task, in order to release the system resources.
stopCluster(cl)
# install.packages("doParallel")
library(doParallel)
cl<-makeCluster(4)
registerDoParallel(cl)
unregister<-registerDoSEQ()
16.5.4 GPU Computing
Modern computers have graphics cards, GPUs (Graphical Processing Units), that
consist of thousands of cores; however, these cores are very specialized, unlike the
standard CPU chip. If we can use this feature for parallel computing, we may
reach amazing performance improvements, at the cost of complicating the
processing algorithms and increasing the constraints on the data format. Specific
disadvantages of GPU computing include reliance on proprietary manufacturer
(e.g., NVidia) frameworks and the Compute Unified Device Architecture (CUDA)
programming language. CUDA allows programming of GPU instructions in a
common computing language. This paper provides one example of using GPU
computation to significantly improve the performance of advanced neuroimaging
and brain mapping processing of multidimensional data.
The R package gputools is created for parallel computing using NVidia CUDA.
Detailed GPU computing in R information is available online.
16.6 Deploying Optimized Learning Algorithms

As we mentioned earlier, some tasks can be parallelized more easily than others. In real
world situations, we can pick the algorithms that lend themselves well to
parallelization. Some of the R packages that allow parallel computing using ML
algorithms are listed below.
biglm allows training regression models with data from SQL databases or large
data chunks obtained from the ff package. The output is similar to the standard
lm() function that builds linear models. However, biglm operates efficiently on
massive datasets.
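As a small illustration of the interface (a sketch, not from the original text; the variable names come from the Nof1 data used earlier in this chapter):

# Sketch: fit a linear model in chunks with biglm
# install.packages("biglm")
library(biglm)
fit <- biglm(PhyAct ~ SelfEff + SocSuppt, data = nof1[1:450, ])   # fit on a first chunk
fit <- update(fit, nof1[451:900, ])                               # fold in the remaining rows
summary(fit)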
Random Forests with bigrf
The bigrf package can be used to train random forests combining the foreach and
doParallel packages. In Chap. 15, we presented random forests as machine
learners ensembling multiple tree learners. With parallel computing, we can split
the task of creating thousands of trees into smaller tasks that can be outsourced to
each available compute core. We only need to combine the results at the end to
obtain the exact same output in a relatively shorter amount of time.
Parallel Training with caret
Combining the caret package with foreach, we can obtain a powerful method to
deal with time-consuming tasks like building a random forest learner. Utilizing the
same example we presented in Chap. 15, we can see the time difference of
utilizing the foreach package.
# library(caret)
system.time(m_rf <- train(CHARLSONSCORE ~ ., data = qol, method = "rf",
                          metric = "Kappa", trControl = ctrl, tuneGrid = grid_rf))
It took more than a minute to finish this task in the standard (serial) execution model, relying purely on the regular caret function. Below, the same model training completes much faster using parallelization (less than half the time) compared to the standard call above.
set.seed(123)
cl <- makeCluster(4)
registerDoParallel(cl)
getDoParWorkers()
## [1] 4

system.time(m_rf <- train(CHARLSONSCORE ~ ., data = qol, method = "rf",
                          metric = "Kappa", trControl = ctrl, tuneGrid = grid_rf))
##    user  system elapsed
##    4.61    0.02   47.70

unregister <- registerDoSEQ()
16.7 Practice Problem
Try to analyze the co-appearance network in the novel “Les Miserables”. The data
contains the weighted network of co-appearances of characters in Victor Hugo’s
novel “Les Miserables”. Nodes represent characters as indicated by the labels and
edges connect any pair of characters that appear in the same chapter of the book.
The values on the edges are the number of such co-appearances.
miserables <- read.table("https://umich.instructure.com/files/330389/download?download_frd=1",
                         sep="", header=F)
head(miserables)

16.8 Assignment: 16. Specialized Machine Learning Topics
Also, try to interrogate some of the larger datasets we have by using alternative
parallel computing and big data analytics.
• Download the Main SOCR Wiki Page and compare RCurl and httr.
• Read and write XML code for the SOCR Main Page.
• Scrape the data from the SOCR Main Page.
• Download 03_les_miserablese_GraphData.txt
• Visualize this undirected network.
• Summarize the graph and explain the output.
• Calculate the degree and centrality of this graph.
• Identify some important characters.
• Will the result change or not if we assume the graph is directed?
References
Data Streams in R: https://cran.r-project.org/web/packages/stream/vignettes/stream.pdf
Dplyr: https://cran.rstudio.com/web/packages/dplyr/vignettes/introduction.html
doParallel: https://cran.r-project.org/web/packages/doParallel/vignettes/gettingstartedParallel.pdf
Mailund, T. (2017) Beginning Data Science in R: Data Analysis, Visualization, and Modelling for the Data Scientist, Apress, ISBN 1484226712, 9781484226711
Chapter 17
Variable/Feature Selection
The different types of feature selection methods have their own pros and cons. In
this chapter, we are going to introduce the randomized wrapper method using the
Boruta package, which utilizes the random forest classification method to output
variable importance measures (VIMs). Then, we will compare its results with
Recursive Feature Elimination, a classical deterministic wrapper method.
First things first, let’s explore the dataset we will be using. Case Study 15,
Amyotrophic Lateral Sclerosis (ALS), examines the patterns, symmetries,
associations and causality in a rare but devastating disease, amyotrophic lateral
sclerosis (ALS), also known as Lou Gehrig disease. This ALS case-study reflects
a large clinical trial including big, multi-source and heterogeneous datasets. It
would be interesting to interrogate the data and attempt to derive potential
biomarkers that can be used for detecting, prognosticating, and forecasting the
progression of this neurodegenerative disorder. Overcoming many scientific,
technical and infrastructure barriers is required to establish complete, efficient,
and reproducible protocols for such complex data. These pipeline workflows
start with ingesting the raw data, preprocessing, aggregating, harmonizing,
analyzing, visualizing and interpreting the findings.
In this case-study, we use the training dataset that contains 2223 observations and 131 numeric variables. We select ALSFRS_slope as our outcome variable, as it captures the patients' clinical decline over a year. Although we have more observations than features, this is one of the examples where multiple features are highly correlated. Therefore, we need to preprocess the variables, e.g., apply feature selection, before commencing with predictive analytics.
The dataset is located in our case-studies archive. We can use read.csv() to directly
import the CSV dataset into R using the URL reference.
ALS.train <- read.csv("https://umich.instructure.com/files/1789624/download?download_frd=1")
summary(ALS.train)
##        ID            Age_mean      Albumin_max    Albumin_median
##  Min.   :   1.0   Min.   :18.00   Min.   :37.00   Min.   :34.50
##  1st Qu.: 614.5   1st Qu.:47.00   1st Qu.:45.00   1st Qu.:42.00
##  Median :1213.0   Median :55.00   Median :47.00   Median :44.00
##  Mean   :1214.9   Mean   :54.55   Mean   :47.01   Mean   :43.95
##  3rd Qu.:1815.5   3rd Qu.:63.00   3rd Qu.:49.00   3rd Qu.:46.00
##  Max.   :2424.0   Max.   :81.00   Max.   :70.30   Max.   :51.10
…
##  Urine.Ph_median  Urine.Ph_min
##  Min.   :5.000   Min.   :5.000
##  1st Qu.:5.000   1st Qu.:5.000
##  Median :6.000   Median :5.000
##  Mean   :5.711   Mean   :5.183
##  3rd Qu.:6.000   3rd Qu.:5.000
##  Max.   :9.000   Max.   :8.000
There are 131 features, and some of the variables represent summary statistics, such as the max, min, and median values of the same clinical measurements.
Now let's explore the Boruta() function in the Boruta package to perform variable selection based on random forest classification. The Boruta() call and its output include the following components:
# install.packages("Boruta")
library(Boruta)
set.seed(123)
als <- Boruta(ALSFRS_slope ~ . - ID, data = ALS.train, doTrace = 0)
print(als)
## Boruta performed 99 iterations in 4.683657 mins.
##  28 attributes confirmed important: ALSFRS_Total_max,
## ALSFRS_Total_median, ALSFRS_Total_min, ALSFRS_Total_range,
## Creatinine_median and 23 more;
##  59 attributes confirmed unimportant: Albumin_max, Albumin_median,
## Albumin_min, ALT.SGPT._max, ALT.SGPT._median and 54 more;
##  12 tentative attributes left: Age_mean, Albumin_range,
## Creatinine_max, Hematocrit_median, Hematocrit_range and 7 more;
Fig. 17.1 Ranked variable importance using box-and-whisker plots for each feature
We can see that plotting the graph is easy, but extracting the matched feature names may require more work. The basic plot is produced by the call plot(als, xlab="", xaxt="n"), where xaxt="n" suppresses plotting of the x-axis. The following lines in the script reconstruct the x-axis labels. lz is a list created by the lapply() function; each element of lz contains all the importance scores for a single feature in the original dataset, with rejected features of infinite (non-finite) importance excluded. Then, we sort the remaining features according to their median importance and print them on the x-axis using axis().
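A minimal sketch consistent with this description is shown below; it assumes the als Boruta object created above and uses its ImpHistory component.
plot(als, xlab = "", xaxt = "n")                       # suppress the default x-axis
lz <- lapply(1:ncol(als$ImpHistory), function(i)
  als$ImpHistory[is.finite(als$ImpHistory[, i]), i])   # drop infinite importance values
names(lz) <- colnames(als$ImpHistory)
lb <- sort(sapply(lz, median))                         # order features by median importance
axis(side = 1, las = 2, labels = names(lb),
     at = 1:ncol(als$ImpHistory), cex.axis = 0.5)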
We have already seen similar groups of boxplots back in Chaps. 3 and 4. In this
graph, variables with green boxes are more important than the ones represented
with red boxes, and we can see the range of importance scores within a single
variable in the graph.
It may be desirable to get rid of the tentative features using TentativeRoughFix(). Notice that this function should be used only when a strict decision is required, because this test is much weaker than the full Boruta procedure and can lower the confidence of the final result.
final.als<-TentativeRoughFix(als)
print(final.als)
## Boruta performed 99 iterations in 4.683657 mins.
## Tentatives roughfixed over the last 99 iterations.
## 32 attributes confirmed important: ALSFRS_Total_max,
## ALSFRS_Total_median, ALSFRS_Total_min, ALSFRS_Total_range,
## Creatinine_median and 27 more;
##  67 attributes confirmed unimportant: Age_mean, Albumin_max,
## Albumin_median, Albumin_min, Albumin_range and 62 more;

final.als$finalDecision
##            Age_mean         Albumin_max
##            Rejected            Rejected
##      Albumin_median         Albumin_min
##            Rejected            Rejected
##       Albumin_range    ALSFRS_Total_max
##            Rejected           Confirmed
## ALSFRS_Total_median    ALSFRS_Total_min
##           Confirmed           Confirmed
…
##        Urine.Ph_max     Urine.Ph_median
##            Rejected            Rejected
##        Urine.Ph_min
##            Rejected
## Levels: Tentative Confirmed Rejected
getConfirmedFormula(final.als)
## ALSFRS_slope ~ ALSFRS_Total_max + ALSFRS_Total_median + ALSFRS_Total_min +
##     ALSFRS_Total_range + Creatinine_median + Creatinine_min +
##     hands_max + hands_median + hands_min + hands_range + Hematocrit_max +
##     Hematocrit_min + Hematocrit_range + Hemoglobin_median + Hemoglobin_range +
##     leg_max + leg_median + leg_min + leg_range + mouth_max +
##     mouth_median + mouth_min + mouth_range + onset_delta_mean +
##     pulse_max + respiratory_median + respiratory_min + respiratory_range +
##     trunk_max + trunk_median + trunk_min + trunk_range
## <environment: 0x000000000989d6f8>
# report the Boruta "Confirmed" & "Tentative" features, removing the "Rejected" ones
print(final.als$finalDecision[final.als$finalDecision %in% c("Confirmed", "Tentative")])
##    ALSFRS_Total_max  ALSFRS_Total_median     ALSFRS_Total_min
##           Confirmed            Confirmed            Confirmed
##  ALSFRS_Total_range    Creatinine_median       Creatinine_min
##           Confirmed            Confirmed            Confirmed
##           hands_max         hands_median            hands_min
##           Confirmed            Confirmed            Confirmed
…
Let’s compare the Boruta results against a classical variable selection method—
recursive feature elimination (RFE). First, we need to load two packages: caret and
randomForest. Then, as we did in Chap. 15, we must specify a resampling method.
Here we use 10-fold CV to do the resampling.
library(caret)
library(randomForest)
set.seed(123)
control<-rfeControl(functions = rfFuncs, method = "cv", number=10)
Now, all preparations are complete and we are ready to do the RFE variable
selection.
rf.train <- rfe(ALS.train[, -c(1, 7)], ALS.train[, 7],
                sizes = c(10, 20, 30, 40), rfeControl = control)
rf.train
## Recursive feature selection
##
## Outer resampling method: Cross-Validated (10 fold)
##
## Resampling performance over subset size:
##
##  Variables   RMSE Rsquared  RMSESD RsquaredSD Selected
##         10 0.3500   0.6837 0.03451    0.03837
##         20 0.3471   0.6894 0.03230    0.03374
…
Fig. 17.2 Root-mean square cross-validation error rate for random forest classification of the
ALS study against the number of features
Using the functions predictors() and getSelectedAttributes(), we can compare the final results of the two alternative feature selection methods, as sketched below.
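A minimal sketch defining these two vectors is shown below; it assumes the final.als and rf.train objects created above, and the names predBoruta and predRFE match the comparison call that follows.
predBoruta <- getSelectedAttributes(final.als, withTentative = FALSE)  # Boruta-selected features
predRFE <- predictors(rf.train)                                        # RFE-selected features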
intersect(predBoruta, predRFE)
There are 26 common variables chosen by the two techniques, which suggests that both the Boruta and RFE methods are robust. Also, notice that the Boruta method can give similar results without utilizing the sizes option. If we want to consider ten or more different sizes, the RFE procedure becomes quite time consuming; thus, the Boruta method is effective when dealing with complex real-world problems. Next, we can contrast the Boruta feature selection results against another classical variable selection method, stepwise model selection. Let's start by fitting a bidirectional stepwise linear model-based feature selection.
data2 <- ALS.train[, -1]
# Define a base model - intercept only
base.mod <- lm(ALSFRS_slope ~ 1, data = data2)
# Define the full model - including all predictors
all.mod <- lm(ALSFRS_slope ~ ., data = data2)
# ols_step <- lm(ALSFRS_slope ~ ., data = data2)
ols_step <- step(base.mod, scope = list(lower = base.mod, upper = all.mod),
                 direction = 'both', k = 2, trace = F)
summary(ols_step); ols_step
## Call:
## lm(formula = ALSFRS_slope ~ ALSFRS_Total_range + ALSFRS_Total_median +
##     ALSFRS_Total_min + Calcium_range + Calcium_max + bp_diastolic_min +
##     onset_delta_mean + Calcium_min + Albumin_range + Glucose_range +
##     ALT.SGPT._median + AST.SGOT._median + Glucose_max + Glucose_min +
##     Creatinine_range + Potassium_range + Chloride_range + Chloride_min +
##     Sodium_median + respiratory_min + respiratory_range + respiratory_max +
##     trunk_range + pulse_range + Bicarbonate_max + Bicarbonate_range +
##     Chloride_max + onset_site_mean + trunk_max + Gender_mean +
##     Creatinine_min, data = data2)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.22558 -0.17875 -0.02024  0.17098  1.95100
##
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)
## (Intercept)          4.176e-01  6.064e-01   0.689 0.491091
## ALSFRS_Total_range  -2.260e+01  1.359e+00 -16.631  < 2e-16 ***
## ALSFRS_Total_median -3.388e-02  2.868e-03 -11.812  < 2e-16 ***
## ALSFRS_Total_min     2.821e-02  3.310e-03   8.524  < 2e-16 ***
…
## trunk_max            2.288e-02  8.453e-03   2.706 0.006854 **
## Gender_mean         -3.360e-02  1.751e-02  -1.919 0.055066 .
## Creatinine_min       7.643e-04  4.977e-04   1.536 0.124771
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.3355 on 2191 degrees of freedom
## Multiple R-squared:  0.7135, Adjusted R-squared:  0.7094
## F-statistic:   176 on 31 and 2191 DF,  p-value: < 2.2e-16
##
## Call:
## lm(formula = ALSFRS_slope ~ ALSFRS_Total_range + ALSFRS_Total_median +
##     ALSFRS_Total_min + Calcium_range + Calcium_max + bp_diastolic_min +
##     onset_delta_mean + Calcium_min + Albumin_range + Glucose_range +
##     ALT.SGPT._median + AST.SGOT._median + Glucose_max + Glucose_min +
##     Creatinine_range + Potassium_range + Chloride_range + Chloride_min +
##     Sodium_median + respiratory_min + respiratory_range + respiratory_max +
##     trunk_range + pulse_range + Bicarbonate_max + Bicarbonate_range +
##     Chloride_max + onset_site_mean + trunk_max + Gender_mean +
##     Creatinine_min, data = data2)
##
## Coefficients:
##         (Intercept)   ALSFRS_Total_range  ALSFRS_Total_median
##           4.176e-01           -2.260e+01           -3.388e-02
##    ALSFRS_Total_min        Calcium_range          Calcium_max
##           2.821e-02            2.410e+02           -4.258e-01
##    bp_diastolic_min     onset_delta_mean          Calcium_min
##          -2.249e-03           -5.461e-05            3.579e-01
##       Albumin_range        Glucose_range     ALT.SGPT._median
##          -2.305e+00           -1.510e+01           -2.300e-03
##    AST.SGOT._median          Glucose_max          Glucose_min
##           3.369e-03            3.279e-02           -3.507e-02
##    Creatinine_range      Potassium_range       Chloride_range
##           5.076e-01           -4.535e+00            5.318e+00
##        Chloride_min        Sodium_median      respiratory_min
##           1.672e-02           -9.830e-03           -1.453e-01
##   respiratory_range      respiratory_max          trunk_range
##          -5.834e+01            1.712e-01           -8.705e+00
##         pulse_range      Bicarbonate_max    Bicarbonate_range
##          -5.117e-01            7.526e-03           -2.204e+00
##        Chloride_max      onset_site_mean            trunk_max
…
…
## respiratory_range    5.756735
## respiratory_max      5.041816
## trunk_range          2.819029
## pulse_range          1.696811
## Bicarbonate_max      2.568068
## Bicarbonate_range    2.303757
## Chloride_max         1.750666
## onset_site_mean      1.663481
## trunk_max            2.706410
## Gender_mean          1.919380
## Creatinine_min       1.535642
# plot predStepwise
# plot(predStepwise)
# Boruta vs. Stepwise feature selection
intersect(predBoruta, stepwiseConfirmedVars)
## [1] "ALSFRS_Total_median" "ALSFRS_Total_min"
"ALSFRS_Total_range" ## [4] "Creatinine_min"
"onset_delta_mean" "respiratory_min"
## [7] "respiratory_range" "trunk_max" "trunk_range"
There are nine common variables chosen by the Boruta and stepwise feature selection methods. There is another, more elaborate, stepwise feature selection technique implemented in the function MASS::stepAIC(), which is useful for a wider range of object classes.
17.3 Practice Problem

In this practice problem, we apply feature selection to the Alzheimer's disease case-study data (alzh). The data summary shows that we have several factor variables. After converting their type to numeric, we find some missing data. We can manage this issue by selecting only the complete observations of the original dataset or by using multivariate imputation, see Chap. 3.
alzh <- as.data.frame(lapply(alzh, as.numeric))
alzh <- alzh[complete.cases(alzh), ]
For simplicity, here we eliminated the missing data and are left with 408
complete observations. Now, we can apply the Boruta method for feature
selection.
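A minimal sketch is shown below; the outcome column name diagnosis is hypothetical, as the actual variable name in the Alzheimer's dataset is not shown in the text.
set.seed(123)
alzh.boruta <- Boruta(diagnosis ~ ., data = alzh, doTrace = 0)  # 'diagnosis' is an assumed name
print(alzh.boruta)
plot(alzh.boruta, xlab = "", xaxt = "n")                        # variable importance boxplots
final.alzh <- TentativeRoughFix(alzh.boruta)                    # resolve tentative features
getSelectedAttributes(final.alzh)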
Your results may differ slightly due to the stochastic nature of the random forest. We can plot the variable importance graph using the approach shown earlier (Fig. 17.3). The final step is to resolve the tentative features.
Can you reproduce these results? Also try to apply some of these techniques to
other data from the list of our Case-Studies.
Fig. 17.3 Variable importance plot of predicting diagnosis for the Alzheimer's disease case-study

17.4 Assignment: 17. Variable/Feature Selection
References
Guyon, I, Gunn, S, Nikravesh, M, Zadeh, LA (eds.) (2008) Feature Extraction: Foundations and Applications, Springer, ISBN 3540354883, 9783540354888
Liu, H and Motoda, H (eds.) (2007) Computational Methods of Feature Selection, Chapman &
Hall/CRC, ISBN 1584888792, 9781584888796
Pacheco, ER (2015) Unsupervised Learning with R, Packt Publishing, ISBN 1785885812,
9781785885815
Chapter 18
Regularized Linear Modeling and Controlled
Variable Selection
Many biomedical and biosocial studies involve large amounts of complex data,
including cases where the number of features (k) is large and may exceed the
number of cases (n). In such situations, parameter estimates are difficult to
compute or may be unreliable as the system is underdetermined. Regularization
provides one approach to improve model reliability, prediction accuracy, and
result interpretability. It is based on augmenting the primary fidelity term of the
objective function used in the model-fitting process with a dual regularization
term that provides restrictions on the parameter space.
Classical techniques for choosing important covariates to include in a model of
complex multivariate data rely on various types of stepwise variable selection
processes, see Chap. 17. These tend to improve prediction accuracy in certain
situations, e.g., when a small number of features are strongly predictive, or
associated, with the clinical outcome or biosocial trait. However, the prediction
error may be large when the model relies purely on a fidelity term. Including a
regularization term in the optimization of the cost function improves the
prediction accuracy. For example, below we show that by shrinking large
regression coefficients, ridge regularization reduces overfitting and decreases
the prediction error. Similarly, the Least Absolute Shrinkage and Selection
Operator (LASSO) employs regularization to perform simultaneous parameter
estimation and variable selection. LASSO enhances the prediction accuracy and
provides a natural interpretation of the resulting model. Regularization refers to
forcing certain characteristics of model-based scientific inference, e.g.,
discouraging complex models or extreme explanations, even if they fit the data
well, by enforcing model generalizability to prospective data, or restricting model
overfitting of accidental samples.
In this chapter, we extend the mathematical foundation we presented in
Chap. 5 and (1) discuss computational protocols for handling complex high-
dimensional data, (2) illustrate model estimation by controlling the false-positive
rate of selection of salient features, and (3) derive effective forecasting models.
We should review the basics of matrix notation, linear algebra, and matrix
computing we covered in Chap. 5. At the core of matrix manipulations are
scalars, vectors and matrices.
• $y_i$: output or response variable, i = 1, ..., n (cases/subjects).

$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad \text{and} \quad X = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,k} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,k} \end{pmatrix}.$$
If we assume that the covariates are orthonormal, i.e., we have a special kind of design matrix satisfying $X^T X = I$, then the ordinary least squares (OLS) estimates minimize

$$\min_{\beta \in \mathbb{R}^k} \left\{ \frac{1}{2N} \| y - X\beta \|_2^2 \right\},$$

and the LASSO estimates are obtained by soft thresholding the OLS solution,

$$\hat{\beta}_j = S_{N\lambda}\left(\hat{\beta}_j^{OLS}\right) = \hat{\beta}_j^{OLS} \max\left(0,\ 1 - \frac{N\lambda}{|\hat{\beta}_j^{OLS}|}\right),$$

where $S_{N\lambda}$ is a soft thresholding operator that translates values towards zero, instead of setting smaller values to zero and leaving larger ones untouched, as the hard thresholding operator would.
• Ridge regression estimates minimize the following objective function:

$$\min_{\beta \in \mathbb{R}^k} \left\{ \frac{1}{2N} \| y - X\beta \|_2^2 + \lambda \| \beta \|_2^2 \right\},$$

which yields the estimates $\hat{\beta}_j = (1 + N\lambda)^{-1} \hat{\beta}_j^{OLS}$. Thus, ridge regression shrinks all coefficients by a uniform factor, $(1 + N\lambda)^{-1}$, and does not set any coefficients to zero.
Fig. 18.1 Plot of the MSE rate of the ridge-regularized linear model of MLB player's weight against the regularization weight parameter λ (log scale on the x-axis)
Fig. 18.2 Plot of the effect-size coefficients (Age and Height) of the ridge-regularized linear
model of MLB player’s weight against the regularization weight parameter λ
Fig. 18.3 Effect of the regularization weight parameter λ on the model coefficients (Age and
Height) of the ridge-regularized linear model of MLB player’s weight
# Data: https://umich.instructure.com/courses/38100/files/folder/data (01a_data.txt)
data <- read.table('https://umich.instructure.com/files/330381/download?download_frd=1',
                   as.is = T, header = T)
attach(data); str(data)

# Training Data
# Full Model: x <- model.matrix(Weight ~ ., data = data[1:900, ])
# Reduced Model: model.matrix() creates a design (or model) matrix and adds a
# column of 1's for the intercept according to the formula
x <- model.matrix(Weight ~ Age + Height, data = data[1:900, ])
y <- data[1:900, ]$Weight

# Testing Data
x.test <- model.matrix(Weight ~ Age + Height, data = data[901:1034, ])
y.test <- data[901:1034, ]$Weight
# install.packages("glmnet")
library("glmnet")
coef(cv.ridge)
## 4 x 1 sparse Matrix of class "dgCMatrix"
##                       1
## (Intercept) -55.7491733
## (Intercept)   .
## Age           0.6264096
## Height        3.2485564

sqrt(cv.ridge$cvm[cv.ridge$lambda == cv.ridge$lambda.1se])
## [1] 17.94358
# plot variable feature coefficients against the shrinkage parameter lambda
glmmod <- glmnet(x, y, alpha = 0)
plot(glmmod, xvar = "lambda")
grid()
## [1] 264.083
As λ increases, the penalty dominates and the coefficients are shrunk towards zero. At the other extreme, as λ → 0, the resulting model solution tends towards the ordinary least squares (OLS) estimates and the coefficients exhibit little shrinkage.
Fig. 18.4 Comparison of the coefficients of determination (R2) for three alternative models
Table 18.1 Results of the test dataset errors (MSE) for the three methods
LM LASSO Ridge
305.1995 261.8194 264.083
lm.pred <- predict(lm.fit, newx = x.test)
LM.MSE <- mean((y.test - lm.pred)^2); LM.MSE
mean((y.test - mean(y.test))^2)

# convert to markdown
kable(MSE_Table, format="pandoc", caption="Results of test dataset errors",
      align=c("c", "c", "c"))
As both the inputs (features or predictors) and the output (response) are observed for the testing data, we can learn the relationship between the two types of features (controlled covariates and observable responses). Most often, …
Prior to fitting regularized linear models and estimating the effects, covariates may be standardized. This can be accomplished by using the classic "z-score" formula. This puts each predictor on the same scale (a unitless quantity) with mean 0 and variance 1. We use $\hat{\beta}_0 = \bar{y}$ for the mean intercept parameter and estimate the coefficients of the remaining predictors. To facilitate interpretation of the model or results in the context of the specific case-study, we can transform the results back to the original scale/units after the model is estimated.
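As a brief hedged illustration (using the MLB design matrix x defined earlier), the classic z-score standardization can be computed with scale(); note that glmnet performs this standardization internally by default.
x.std <- scale(x[, -1])            # drop the intercept column; center and scale predictors
round(apply(x.std, 2, mean), 10)   # column means are (numerically) 0
apply(x.std, 2, sd)                # column standard deviations are 1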
The basic setting here is: given a set of predictors X, find a function, f(X), to
model or predict the outcome Y.
Let's denote the objective (loss or cost) function by L(y, f(X)). It determines the adequacy of the fit, for example via the squared error loss:

$$L(y, f(X)) = (y - f(X))^2.$$
Minimizing the expected squared error loss determines the optimal predictor:

$$E\left[(Y - f(X))^2\right] \Rightarrow f = E[Y \mid X = x].$$

• The expectation of the observed outcome given the data, E[Y | X = x], is assumed to be a linear function, which in certain situations can be expressed as $E[Y \mid X = x] = \sum_{j=1}^{k} x_j \beta_j = X\beta$.
To solve for the effect-sizes β, we can multiply both sides of the equation by the inverse of its (right-hand side) multiplier:

$$\left(X^T X\right)^{-1} X^T Y = \left(X^T X\right)^{-1} X^T X \beta = \beta.$$
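As a quick hedged check (not part of the original script), the normal-equations solution can be computed directly and compared with lm() on the MLB training data defined above.
beta.hat <- solve(t(x) %*% x) %*% t(x) %*% y                   # (X^T X)^{-1} X^T y
cbind(normal.equations = beta.hat, lm = coef(lm(y ~ x - 1)))   # the two estimates agree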
Despite its wide use and elegant theory, linear regression has some shortcomings.
• Prediction accuracy – Often can be improved upon;
• Model interpretability – Linear model does not automatically do variable
selection.
Given a new input, $x_0$, how do we assess our prediction $\hat{f}(x_0)$? The expected prediction error is:

$$EPE(x_0) = E\left[\left(Y_0 - \hat{f}(x_0)\right)^2\right] = \text{Var}(\varepsilon) + MSE\left(\hat{f}(x_0)\right),$$

where
• Var(ε): irreducible error variance,
• Var($\hat{f}(x_0)$): sample-to-sample variability of $\hat{f}(x_0)$, and
• Bias($\hat{f}(x_0)$): average difference of $\hat{f}(x_0)$ and $f(x_0)$.

The empirical mean squared error of $\hat{f}$ over m cases is

$$MSE(\hat{f}) = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{f}(x_i)\right)^2.$$

If f(x) is linear, $\hat{f}$ will have low bias but possibly high variance, e.g., in high-dimensional settings due to correlated predictors, overparameterization (the number of features k comparable to the number of cases n), or underdetermination (k > n). The goal is to minimize the total error by trading off bias and precision:

$$Err(x) = E\left[\left(Y - \hat{f}(x)\right)^2\right] = \underbrace{\left(E[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2} + \underbrace{E\left[\left(\hat{f}(x) - E[\hat{f}(x)]\right)^2\right]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{noise}}.$$
When the true Y vs. X relation is not known, infinite data may be necessary to calibrate the model $\hat{f}$, and it may be impractical to jointly reduce both the model bias and variance. In general, minimizing the bias at the same time as minimizing the variance may not be possible.
Figure 18.5 illustrates diagrammatically the dichotomy between bias
(accuracy) and precision (variability). Additional information is available in the
SOCR SMHS EBook.
18.4 Linear Regression
Fig. 18.5 Graphical representation of the four extreme scenarios for bias and precision
As before, we start with a given X and look for a (linear) function, f(X), to model or predict y subject to a certain objective cost function, e.g., squared error loss. Adding a second (regularization) term to the cost function minimization process yields (model parameter) estimates expressed as:

$$\hat{\beta} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left(y_i - \sum_{j=1}^{k} x_{ij}\beta_j\right)^2 + \lambda J(\beta) \right\}.$$

Consider $J(\beta) = \sum_{j=1}^{k} \beta_j^2 = \|\beta\|_2^2$ (Ridge Regression, RR). Then, the penalized formulation is:

$$\hat{\beta}^{RR} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left(y_i - \sum_{j=1}^{k} x_{ij}\beta_j\right)^2 + \lambda \sum_{j=1}^{k} \beta_j^2 \right\}.$$

Or, alternatively, in constrained form:

$$\hat{\beta}^{RR}(t) = \arg\min_{\beta} \sum_{i=1}^{n} \left(y_i - \sum_{j=1}^{k} x_{ij}\beta_j\right)^2, \quad \text{subject to} \quad \sum_{j=1}^{k} \beta_j^2 \le t.$$
18.5.2 Role of the Regularization Parameter
Increasing the regularization parameter λ results in more shrinkage of the coefficients, i.e., we introduce bias at the expense of reducing the variance.
18.5.3 LASSO
The LASSO (Least Absolute Shrinkage and Selection Operator) regularization relies on:

$$J(\beta) = \sum_{j=1}^{k} |\beta_j| = \|\beta\|_1,$$

which leads to the penalized objective

$$\hat{\beta}^{LASSO} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left(y_i - \sum_{j=1}^{k} x_{ij}\beta_j\right)^2 + \lambda \|\beta\|_1 \right\}.$$
More information about this specific study and the included derived
neuroimaging biomarkers is available online. A link to the data and a brief
summary of the features are included below:
• 05_PPMI_top_UPDRS_Integrated_LongFormat1.csv.
• Data elements include: FID_IID, L_insular_cortex_ComputeArea,
L_insular_cortex_Volume, R_insular_cortex_ComputeArea, R_insular_cortex_
Volume, L_cingulate_gyrus_ComputeArea, L_cingulate_gyrus_Volume,
R_cingulate_gyrus_ComputeArea, R_cingulate_gyrus_Volume, L_caudate_
ComputeArea, L_caudate_Volume, R_caudate_ComputeArea, R_caudate_
Volume, L_putamen_ComputeArea, L_putamen_Volume, R_putamen_
ComputeArea, R_putamen_Volume, Sex, Weight, ResearchGroup, Age,
chr12_rs34637584_GT, chr17_rs11868035_GT, chr17_rs11012_GT, chr17_
rs393152_GT, chr17_rs12185268_GT, chr17_rs199533_GT, UPDRS_part_I,
UPDRS_part_II, UPDRS_part_III, time_visit.
Note that the dataset includes missing values and repeated measures. The goal of this demonstration is to use OLS, ridge regression, and the LASSO to find the best predictive model for the clinical outcomes, the UPDRS score (vector) and Research Group (factor variable), in terms of demographic, genetics, and neuroimaging biomarkers.
We can utilize the glmnet package in R for most calculations.
#### Initial Stuff ####
# clean up
rm(list=ls())

# load required packages
# install.packages("arm")
library(glmnet)
library(arm)
library(knitr)  # kable function to convert tabular R results into Rmd tables

# pick a random seed, but set.seed(seed) only affects the next block of code!
seed = 1234
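The ingestion of the PPMI dataset (data1) is not shown in this excerpt; a hedged sketch of the step that produces the complete-case indicator used below is:
# flag the rows of data1 (the PPMI dataset) that have no missing values
data1.completeRowIndexes <- complete.cases(data1)
table(data1.completeRowIndexes)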
## data1.completeRowIndexes
## FALSE TRUE
## 609 1155
prop.table(table(data1.completeRowIndexes))
## data1.completeRowIndexes
## FALSE TRUE
## 0.3452381 0.6547619
attach(data1)
# View(data1[data1.completeRowIndexes, ])

# define response and predictors
y <- data1$UPDRS_part_I + data1$UPDRS_part_II + data1$UPDRS_part_III
table(y)  # Show clinically relevant classification
## y
##  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
## 54 20 25 12  8  7 11 16 16  9 21 16 13 13 22 25 21 31 25 29 29 28 20 25 28
## 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
## 26 35 41 23 34 32 31 37 34 28 36 29 27 22 19 17 18 18 19 16  9 10 12  9 11
## 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 66 68 69 71 75 80 81 82
##  7 10 11  5  7  4  1  5  9  4  3  2  1  6  1  2  1  2  1  1  2  3  1

y <- y[data1.completeRowIndexes]
## time_visit
## Min. : 0.00
## 1st Qu.: 9.00
## Median :24.00
## Mean :23.83
## 3rd Qu.:36.00
## Max. :54.00
# randomly split data into training (80%) and test (20%) sets
set.seed(seed)
train = sample(1:nrow(X), round((4/5) * nrow(X)))
test = -train

# subset training data
yTrain = y[train]
XTrain = X[train, ]
XTrainOLS = cbind(rep(1, nrow(XTrain)), XTrain)
Figures 18.6 and 18.7 show plots of the LASSO and Ridge results, obtained using the R script below. Note that the top horizontal axis labels indicate the number of non-trivial parameters in the resulting model corresponding to each value of log(λ), which is labeled on the bottom horizontal axis.
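The fitLASSO and fitRidge objects plotted below are glmnet fits over the entire regularization path; a minimal sketch (assuming they are fit on the training data) would be:
fitLASSO <- glmnet(XTrain, yTrain, alpha = 1)   # LASSO: alpha = 1
fitRidge <- glmnet(XTrain, yTrain, alpha = 0)   # Ridge: alpha = 0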
Fig. 18.6 Relations between LASSO-regularized model coefficient sizes (y-axis), magnitude of the regularization parameter (bottom axis), and the efficacy of the model selection, i.e., number of non-trivial coefficients (top axis)
Fig. 18.7 Relations between Ridge-regularized model coefficient sizes (y-axis), magnitude of the regularization parameter (bottom axis), and the efficacy of the model selection, i.e., number of non-trivial coefficients (top axis)
### Plot Solution Path ###
# LASSO
plot(fitLASSO, xvar = "lambda", label = "TRUE")
# add label to upper x-axis
mtext("LASSO regularizer: Number of Nonzero (Active) Coefficients", side=3, line=2.5)
Similarly, the plot for the Ridge regularization can be obtained by:
### Plot Solution Path ###
# Ridge
plot(fitRidge, xvar = "lambda", label = "TRUE")
# add label to upper x-axis
mtext("Ridge regularizer: Number of Nonzero (Active) Coefficients", side=3, line=2.5)
Let’s try to compare the paths of the LASSO and Ridge regression solutions.
Below, you will see that the curves of LASSO are steeper and non-differentiable
at some points, which is the result of using the L1 norm. On the other hand, the
Ridge path is smoother and asymptotically tends to 0 as λ increases.
Let's start by examining the joint objective function (including both the LASSO and Ridge penalty terms):

$$\hat{\beta} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n}\left(y_i - \sum_{j=1}^{k} x_{ij}\beta_j\right)^2 + \lambda\left(\frac{1-\alpha}{2}\|\beta\|_2^2 + \alpha\|\beta\|_1\right)\right\},$$

where α = 0 and α = 1 correspond to Ridge and LASSO regularization, respectively. Two natural questions arise: how does the penalty (feasibility) region change as α moves between 0 and 1, and how does this affect the resulting solution? The mixing weight does not alter the fidelity term (OLS solution); thus, the effect of 0 ≤ α ≤ 1 is limited to the size and shape of the penalty region. Let's try to visualize the feasible region, which is:
• centrosymmetric (circular), when α = 0, and
• a diamond-shaped polygon, when α = 1.
Y = scale(mlb$Height)
X = scale(mlb[, c(5, 6)])
beta1 = seq(-0.556, 1.556, length.out = 100)
beta2 = seq(-0.661, 0.3386, length.out = 100)
df <- expand.grid(beta1 = beta1, beta2 = beta2)
b = as.matrix(df)
df$sse <- rep(t(Y) %*% Y, 100*100) - 2*b %*% t(X) %*% Y + diag(b %*% t(X) %*% X %*% t(b))
base <- ggplot(df) +
  stat_contour(aes(beta1, beta2, z = sse),
               breaks = round(quantile(df$sse, seq(0, 0.2, 0.03)), 0),
               size = 0.5, color = "darkorchid2", alpha = 0.8) +
  scale_x_continuous(limits = c(-0.4, 1)) +
  scale_y_continuous(limits = c(-0.55, 0.4)) +
  coord_fixed(ratio = 1) +
  geom_point(data = points, aes(x, y)) +
  geom_text(data = points, aes(x, y, label = z), vjust = 2, size = 3.5) +
  geom_segment(aes(x = -0.4, y = 0, xend = 1, yend = 0), colour = "grey46",
               arrow = arrow(length = unit(0.30, "cm")), size = 0.5, alpha = 0.8) +
  geom_segment(aes(x = 0, y = -0.55, xend = 0, yend = 0.4), colour = "grey46",
               arrow = arrow(length = unit(0.30, "cm")), size = 0.5, alpha = 0.8)
plot_alpha = function(alpha = 0, restrict = 0.2, beta1_range = 0.2,
                      annot = c(0.15, -0.25, 0.205, -0.05)) {
  a = alpha; t = restrict; k = beta1_range
  pos = data.frame(V1 = annot[1:4])
  tex = paste("(", as.character(annot[3]), ",", as.character(annot[4]), ")", sep = "")
  K = seq(0, k, length.out = 50)
  y = unlist(lapply((1-a)*K^2/2 + a*K - t, result, a = (1-a)/2, b = a))[seq(1, 99, by = 2)]
  fills = data.frame(x = c(rev(-K), K), y1 = c(rev(y), y), y2 = c(-rev(y), -y))
  p <- base + geom_line(data = fills, aes(x = x, y = y1), colour = "salmon1",
…
# $\alpha=1$ - LASSO
t = 0.22
K = seq(0, t, length.out = 50)
fills = data.frame(x = c(-rev(K), K), y1 = c(rev(t-K), c(t-K)), y2 = c(-rev(t-K), -c(t-K)))
p6 <- base +
  geom_segment(aes(x = 0, y = t, xend = t, yend = 0), colour = "salmon1",
               alpha = 0.1, size = 0.2) +
  geom_segment(aes(x = 0, y = t, xend = -t, yend = 0), colour = "salmon1",
               alpha = 0.1, size = 0.2) +
  geom_segment(aes(x = 0, y = -t, xend = t, yend = 0), colour = "salmon1",
               alpha = 0.1, size = 0.2) +
  geom_segment(aes(x = 0, y = -t, xend = -t, yend = 0), colour = "salmon1",
               alpha = 0.1, size = 0.2) +
  geom_polygon(data = fills, aes(x, y1), fill = "red", alpha = 0.2) +
  geom_polygon(data = fills, aes(x, y2), fill = "red", alpha = 0.2) +
  geom_segment(aes(x = 0.12, y = -0.25, xend = 0.22, yend = 0), colour = "magenta",
               arrow = arrow(length = unit(0.30, "cm")), alpha = 0.8) +
  ggplot2::annotate("text", x = 0.11, y = -0.36,
                    label = "(0.22,0)\n Point of Contact \n i.e Coef of LASSO", size = 3) +
  xlab(expression(beta[1])) +
  ylab(expression(beta[2])) +
  theme(legend.position = "none") +
  ggtitle(expression(paste(alpha, "=1 (LASSO)")))
Then, let's add the six feasible regions corresponding to α = 0 (Ridge), four intermediate values of α, and α = 1 (LASSO).
Figures 18.8, 18.9 and 18.10 provide some intuition into the continuum from Ridge to LASSO regularization. The elliptical SSE contours and the feasible (penalty) regions are drawn in the figures; the curves bounding the feasible regions represent the constraint $\frac{1-\alpha}{2}\|\beta\|_2^2 + \alpha\|\beta\|_1 \le t$. In this example, β₂ shrinks to 0 for the larger values of α, including α = 1.
We observe that it is almost impossible for the contours of Ridge regression to
touch the circle at any of the coordinate axes. This is also true in higher
dimensions (nD), where the L1 and L2 metrics are unchanged and the 2D ellipse
representations of the feasibility regions become hyper-ellipsoidal shapes.
Generally, as α goes from 0 to 1, more of the feature coefficients are shrunk all the way to 0. This property makes LASSO useful for variable selection.
Let’s compare the feasibility regions corresponding to Ridge (top, p1) and
LASSO (bottom, p6) regularization.
plot(p1)
Fig. 18.10 SSE contour and penalty region for six continuous values of the alpha parameter illustrating the smooth transition from Ridge (α = 0) to LASSO (α = 1) regularization
plot(p6)
Then, we can plot the progression from Ridge to LASSO. This composite plot is intense and may take several minutes to render (Fig. 18.10)! Finally, Fig. 18.11 depicts the MSE of the cross-validated LASSO-regularized model against the magnitude of the regularization parameter (bottom axis) and the number of non-trivial coefficients (top axis). The dashed vertical lines suggest an optimal range [3, 9] for the number of features to include in the model.
library("gridExtra")
grid.arrange(p1,p2,p3,p4,p5,p6,nrow=3)
Fig. 18.11 MSE of the cross-validated LASSO-regularized model against the magnitude of the
regularization parameter (bottom axis), and the efficacy of the model selection, i.e., number
of non-trivial coefficients (top axis). The dashed vertical lines suggest an optimal range for the
penalty term and the number of features
Efficiently obtaining the entire solution path is nice, but we still have to choose a specific λ regularization parameter. This is critical, as λ controls the bias-variance tradeoff. Traditional model selection methods rely on various metrics like Mallows' Cp, AIC, BIC, and adjusted R².
Internal statistical validation (Cross validation) is a popular modern
alternative, which offers some of these benefits:
• Choice is based on predictive performance,
• Makes fewer model assumptions,
• More widely applicable.
18.6.5 Cross Validation Motivation
Ideally, we would like a separate validation set for choosing λ for a given method. Reusing training sets may encourage overfitting and using testing data to pick λ may underestimate the true error rate. Often, when we do not have enough data for a separate validation set, cross-validation provides an alternative strategy.
We have already seen examples of using cross-validation, e.g., Chap. 14, and
Chap. 21 provides additional details about this internal statistical assessment
strategy.
We can use either automated or manual cross-validation. In either case, the
protocol involves the following iterative steps:
1. Randomly split the training data into n parts (“folds”).
2. Fit a model using the data in n − 1 folds for multiple values of λ.
3. Calculate some prediction quality metrics (e.g., MSE, accuracy) on the last remaining fold, see Chap. 14.
4. Repeat the process and average the prediction metrics across iterations.
Common choices for the number of folds are 5, 10, and n (the latter corresponding to leave-one-out CV). The one-standard-error rule is to choose the λ corresponding to the smallest (most regularized) model whose MSE is within one standard error of the minimum MSE.
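The corresponding LASSO cross-validation step is not shown in this excerpt; a hedged sketch consistent with the cvLASSO object used later (the testMSE_LASSO name is an assumption) is:
set.seed(seed)
cvLASSO = cv.glmnet(XTrain, yTrain, alpha = 1)   # (10-fold) cross validation for LASSO
plot(cvLASSO)
# Report the LASSO test MSE
predLASSO <- predict(cvLASSO, s = cvLASSO$lambda.1se, newx = XTest)
testMSE_LASSO <- mean((predLASSO - yTest)^2); testMSE_LASSO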
## [1] 200.5609
#### 10-fold cross validation ####
# Ridge Regression
set.seed(seed)  # set seed
# (10-fold) cross validation for Ridge Regression
cvRidge = cv.glmnet(XTrain, yTrain, alpha = 0)
plot(cvRidge)
mtext("CV Ridge: Number of Nonzero (Active) Coefficients", side=3, line=2.5)

# Report MSE Ridge
predRidge <- predict(cvRidge, s = cvRidge$lambda.1se, newx = XTest)
testMSE_Ridge <- mean((predRidge - yTest)^2); testMSE_Ridge
## [1] 195.7406
… "poisson" or "cox" models; for "gaussian" models it gives the fitted values.
• type = "nonzero" returns a list of the indices of the nonzero coefficients for each value of s.
For a fair comparison, let’s also obtain an OLS stepwise model selection, see
Chap. 17.
dt = as.data.frame(cbind(yTrain,XTrain))
ols_step <- lm(yTrain ~., data = dt)
ols_step <- step(ols_step, direction = 'both', k=2, trace = F)
summary(ols_step)
##
## Call:
## lm(formula = yTrain ~ L_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume +
##     L_caudate_Volume + L_putamen_ComputeArea + L_putamen_Volume +
##     R_putamen_ComputeArea + Weight + Age + chr17_rs11012_GT +
##     chr17_rs393152_GT + chr17_rs12185268_GT + UPDRS_part_I, data = dt)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -29.990  -9.098  -0.310   8.373  49.027
##
## Coefficients:
##                                 Estimate Std. Error t value Pr(>|t|)
## (Intercept)                   -2.8179771  4.5458868  -0.620  0.53548
## L_cingulate_gyrus_ComputeArea  0.0045203  0.0013422   3.368  0.00079 ***
## R_cingulate_gyrus_Volume      -0.0010036  0.0003461  -2.900  0.00382 **
## L_caudate_Volume              -0.0021999  0.0011054  -1.990  0.04686 *
## L_putamen_ComputeArea         -0.0087295  0.0045925  -1.901  0.05764 .
## L_putamen_Volume               0.0035419  0.0017969   1.971  0.04902 *
## R_putamen_ComputeArea          0.0029862  0.0019036   1.569  0.11706
## Weight                         0.0424646  0.0268088   1.584  0.11355
## Age                            0.2198283  0.0522490   4.207 2.84e-05 ***
## chr17_rs11012_GT              -4.2408237  1.8122682  -2.340  0.01950 *
## chr17_rs393152_GT             -3.5818432  2.2619779  -1.584  0.11365
## chr17_rs12185268_GT            8.2990131  2.7356037   3.034  0.00248 **
## UPDRS_part_I                   3.8780897  0.2541024  15.262  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 13.41 on 911 degrees of freedom
## Multiple R-squared:  0.2556, Adjusted R-squared:  0.2457
… the optimal one. Setting k = 2 specifies the AIC criterion, and you can choose k = log(n) for the BIC criterion.
Then, we use the ols_step model to predict the outcome Y for some new test
data.
betaHatOLS_step = ols_step$coefficients
var_step <- colnames(ols_step$model)[-1]
XTestOLS_step = cbind(rep(1, nrow(XTest)), XTest[, var_step])
predOLS_step = XTestOLS_step %*% betaHatOLS_step
testMSEOLS_step = mean((predOLS_step - yTest)^2)

# Report MSE OLS Stepwise feature selection
testMSEOLS_step
## [1] 186.3043
Alternatively, we can predict the outcomes directly using the predict()
function, and the results should be identical:
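A hedged sketch of this equivalence check (the exact original call is not shown) is:
predOLS_step2 <- predict(ols_step, newdata = as.data.frame(XTest))
all.equal(as.numeric(predOLS_step), as.numeric(predOLS_step2))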
## [1] TRUE
Let’s identify the most important (predictive) features, which can then be
interpreted in the context of the specific data.
# Determine final models
# Extract Coefficients
# OLS coefficient estimates
betaHatOLS = fitOLS$coefficients
# LASSO coefficient estimates
betaHatLASSO = as.double(coef(fitLASSO, s = cvLASSO$lambda.1se))  # s is lambda
# Ridge coefficient estimates
betaHatRidge = as.double(coef(fitRidge, s = cvRidge$lambda.1se))
Figure 18.13 shows a rank-ordered list of the key predictors of the clinical outcome variable (total UPDRS, y <- data1$UPDRS_part_I + data1$UPDRS_part_II + data1$UPDRS_part_III).
Fig. 18.13 Variables importance plot for the three alternative models
varNames; length(varNames)
var_lasso <- varNames[which(coef(fitLASSO, s = cvLASSO$lambda.min) != 0) - 1]
intersect(var_step, var_lasso)
## Age                  0.2097678904
## chr12_rs34637584_GT  .
## chr17_rs11868035_GT -0.0094055047
## chr17_rs11012_GT     .
## chr17_rs393152_GT    .
## chr17_rs12185268_GT  0.2688574886
## chr17_rs199533_GT    0.3730813890
## UPDRS_part_I         3.7697168303
## time_visit           .
Stepwise variable selection for OLS selects 12 variables, whereas LASSO selects 9
variables with the best λ. There are 6 common variables identified as salient
features by both OLS and LASSO.
18.6.12 Summary
Traditional linear models are useful but also have their shortcomings:
• Prediction accuracy may be sub-optimal.
• Model interpretability may be challenging (especially when a large number of
features are used as regressors).
• Stepwise model selection may improve the model performance and add some
interpretations, but still may not be optimal.
Regularization adds a penalty term to the estimation:
• Enables exploitation of the bias-variance tradeoff.
• Provides flexibility on specifying penalties to allow for continuous variable
selection.
• Allows incorporation of prior knowledge.
18.7 Knock-off Filtering: Simulated Example

Variable selection that controls the false discovery rate (FDR) of salient features can be accomplished in different ways. Knockoff filtering represents one strategy for controlled variable selection. To show the usage of knockoff.filter, we start with a synthetic dataset constructed so that the true coefficient vector β has only a few nonzero entries.
The essence of knockoff filtering is based on the following three-step
process:
• Construct the decoy features (knockoff variables), one for each real observed
feature. These act as controls for assessing the importance of the real
variables.
• For each feature, Xj, compute the knockoff statistic, Wj, which measures the importance of the variable relative to its decoy counterpart, X̃j.
• Determine the overall knockoff threshold. This is computed by rank-ordering
the Wj statistics (from large to small), walking down the list of Wj’s, selecting
variables Xj corresponding to positive Wj’s, and terminating this search the
last time the ratio of negative to positive Wj’s is below the default FDR q
value, e.g., q ¼ 0.10.
Mathematically, we consider Xj to be unimportant (i.e., peripheral or extraneous) if the conditional distribution of Y given X1, ..., Xp does not depend on Xj. Formally, Xj is unimportant if it is conditionally independent of Y given all other features, X−j:

$$Y \perp X_j \mid X_{-j}.$$
The knockoff filter selects a set of features $\hat{S}$ while controlling the false discovery rate:

$$FDR(\hat{S}) = E\left[\frac{\#\{j \in \hat{S} : x_j \text{ unimportant}\}}{\#\{j \in \hat{S}\}}\right] \le q \ (\text{e.g., } 10\%).$$
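The simulation parameters n, p, k, and amplitude used below are defined in a portion of the script not shown here; the values in the following sketch are hypothetical, chosen only to be consistent with the reported output.
# hypothetical simulation settings (assumed, not from the original text)
n = 1000          # number of observations
p = 300           # number of candidate variables
k = 30            # number of variables with nonzero coefficients
amplitude = 4.5   # signal amplitude of the nonzero coefficients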
# Problem data
X = matrix(rnorm(n*p), nrow=n, ncol=p)
nonzero = sample(p, k)
beta = amplitude * (1:p %in% nonzero)
y.sample <- function() X %*% beta + rnorm(n)
To begin with, we will invoke the knockoff.filter using the default settings.
# install.packages("knockoff")
library(knockoff)
y = y.sample()
result = knockoff.filter(X, y)
print(result)
## Call:
## knockoff.filter(X = X, y = y)
##
## Selected variables:
##  [1]   6  29  30  42  52  54  63  68  70  83  88  96 102 113 115 135 138
## [18] 139 167 176 179 194 212 220 225 228 241 248 265 273 287 288 295
The false discovery proportion (fdp) is:
fdp <- function(selected)
  sum(beta[selected] == 0) / max(1, length(selected))
fdp(result$selected)
## [1] 0.09090909
The default settings of the knockoff filter use a LASSO-based test statistic, knockoff.stat.lasso_signed_max, which computes the Wj statistics that quantify the discrepancy between a real feature (Xj) and its decoy knockoff counterpart (X̃j):

$$W_j = \max\left(X_j, \tilde{X}_j\right)\cdot \text{sgn}\left(X_j - \tilde{X}_j\right).$$

Effectively, the Wj statistics measure how much more important the variable Xj is relative to its decoy counterpart X̃j; the strength of this importance is measured by the magnitude of Wj.
The knockoff package includes several other test statistics, with appropriate names prefixed by knockoff.stat. For instance, we can use a statistic based on forward selection (fs) and a lower target FDR of 0.10.
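The corresponding call is not shown in this excerpt; a hedged sketch following the older knockoff.stat.* naming referenced in the text (the exact statistic name is an assumption) is:
result.fs = knockoff.filter(X, y, statistic = knockoff.stat.fs, fdr = 0.10)
fdp(result.fs$selected)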
## [1] 0.1428571
One can also define additional test statistics, complementing the ones already included in the package. For instance, suppose we want to implement the following test statistic:

$$W_j = \left\| X_j^t \cdot y \right\| - \left\| \tilde{X}_j^t \cdot y \right\|.$$
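A hedged sketch of such a user-defined statistic is shown below; Xk denotes a (hypothetical) matrix of knockoff copies of the columns of X.
my_knockoff_stat <- function(X, Xk, y) {
  # W_j = |X_j^t y| - |X~_j^t y| for each feature j
  abs(t(X) %*% y) - abs(t(Xk) %*% y)
}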
Let’s illustrate controlled variable selection via knockoff filtering using the real
PD dataset.
The goal is to determine which imaging, genetics and phenotypic covariates
are associated with the clinical diagnosis of PD. The dataset is publicly available
online.
# table(data1.completeRowIndexes)
prop.table(table(data1.completeRowIndexes))
## data1.completeRowIndexes
## FALSE TRUE
## 0.3452381 0.6547619
# attach(data1)
# View(data1[data1.completeRowIndexes, ])
data2 <- data1[data1.completeRowIndexes, ]
Dx_label <- data2$ResearchGroup;
table(Dx_label)
## Dx_label
## Control      PD   SWEDD
##     121     897     137
We now construct the design matrix X and the response vector Y. The
features (columns of X) represent covariates that will be used to explain the
response Y.
summary(X)
## L_insular_cortex_ComputeArea L_insular_cortex_Volume
##  Min.   :  50.03             Min.   :   22.63
##  1st Qu.:2174.57             1st Qu.: 5867.23
##  Median :2522.52             Median : 7362.90
##  Mean   :2306.89             Mean   : 6710.18
##  3rd Qu.:2752.17             3rd Qu.: 8483.80
##  Max.   :3650.81             Max.   :13499.92
…
## chr17_rs393152_GT chr17_rs12185268_GT chr17_rs199533_GT   time_visit
##  Min.   :0.0000    Min.   :0.0000      Min.   :0.0000    Min.   : 0.00
##  1st Qu.:0.0000    1st Qu.:0.0000      1st Qu.:0.0000    1st Qu.: 9.00
##  Median :0.0000    Median :0.0000      Median :0.0000    Median :24.00
##  Mean   :0.4468    Mean   :0.4268      Mean   :0.4052    Mean   :23.83
##  3rd Qu.:1.0000    3rd Qu.:1.0000      3rd Qu.:1.0000    3rd Qu.:36.00
##  Max.   :2.0000    Max.   :2.0000      Max.   :2.0000    Max.   :54.00
mode(X) <- 'numeric'
The knockoff filter is designed to control the FDR under Gaussian noise. A quick inspection of the response vector shows that it is highly non-Gaussian (Figs. 18.14 and 18.15).
Fig. 18.14 Histogram of the outcome clinical diagnostic variable (Y) for the Parkinson’s disease
case-study
Fig. 18.15 Log-transformed histogram of the outcome clinical diagnostic variable (Y)
hist(Y, breaks='FD')
hist(log(Y), breaks='FD')
Fig. 18.16 Logistic curve transforming a continuous variable into a probability value
The logistic function and its inverse, the logit transform, are

$$y = \frac{1}{1 + e^{-x}}, \qquad x = \ln\left(\frac{y}{1 - y}\right),$$

and the logistic regression model expresses the outcome probability as

$$p = \frac{1}{1 + e^{-\left(a_o + \sum_{k=1}^{l} a_k x_k\right)}},$$

where the coefficients $a_o$ (intercept) and effects $a_k$, k = 1, 2, ..., l, are estimated using GLM according to a maximum likelihood approach. Using this model allows us to estimate the probability of the (binary) clinical outcome as a function of the observed covariates.
Fig. 18.17 Estimate of the logistic function for the clinical outcome (CO) probability based on
the surgeon’s experience (SE)
# library(ggplot2)
ggplot(mydata, aes(x = SE, y = CO)) + geom_point() +
  stat_smooth(method = "glm", method.args = list(family = "binomial"), se = FALSE)
Figure 18.17 shows the graph of the logistic regression curve: the probability of the clinical outcome, survival (Y-axis), versus the surgeon's experience (X-axis), with the fitted logistic regression curve overlaid on the observed data.
mylogit <- glm(CO ~ SE, data = mydata, family = "binomial")
summary(mylogit)
## Call:
## glm(formula = CO ~ SE, family = "binomial", data = mydata)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -1.7131  -0.5719  -0.0085   0.4493   1.8220
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -4.1030     1.7629  -2.327   0.0199 *
## SE            0.7583     0.3139   2.416   0.0157 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 27.726  on 19  degrees of freedom
## Residual deviance: 16.092  on 18  degrees of freedom
## AIC: 20.092
##
## Number of Fisher Scoring iterations: 5
The output indicates that the surgeon's experience (SE) is significantly associated with the probability of surviving the surgery (p = 0.0157, Wald test). The output also provides the coefficient estimates:
• Intercept = −4.1030, and
• SE = 0.7583.
These coefficients can then be used in the logistic regression model to estimate the probability of surviving the heart surgery:

$$CO = P(\text{surviving heart surgery}) = \frac{1}{1 + \exp\left(-\left(-4.1030 + 0.7583\,SE\right)\right)}.$$
For example, for a patient treated by a surgeon with 200 operating hours of experience (SE = 2), the estimated probability of survival is p ≈ 0.07:
SE = 2
CO = 1/(1 + exp(-(-4.1030 + 0.7583*SE))); CO
## [1] 0.07001884
Similarly, for a patient undergoing heart surgery with a doctor who has 400 operating hours of experience (SE = 4), the estimated probability of survival is p ≈ 0.26:
SE = 4
CO = 1/(1 + exp(-(-4.1030 + 0.7583*SE))); CO
## [1] 0.2554411
for (SE in c(1:5)) {
  CO <- 1/(1 + exp(-(-4.1030 + 0.7583*SE)))
  print(c(SE, CO))
}
## [1] 1.00000000 0.03406915
## [1] 2.00000000 0.07001884
## [1] 3.0000000 0.1384648
## [1] 4.0000000 0.2554411
## [1] 5.0000000 0.4227486
The table below shows the probability of surviving surgery for several values of the surgeon's experience (Table 18.4).
The output from the logistic regression analysis gives a p-value of p = 0.0157, which is based on the Wald z-score. In addition to the Wald method, we can calculate the p-value for logistic regression using the Likelihood Ratio Test (LRT), which for these data yields 0.0006476922 (Table 18.5).
Table 18.4 Estimates of the likelihood of transplant surgery patient survival based on SE
Surgeon's experience (SE)   Probability of patient survival (Clinical outcome)
1                           0.034
2                           0.07
3                           0.14
4                           0.26
5                           0.423
Table 18.5 Estimates of the effect-size, standard error and p-value quantifying the significance of SE on CO
     Estimate   Std. error   z value   Pr(>|z|)   Wald
SE   0.7583     0.3139       2.416     0.0157     *
The logit of a number 0 ≤ p ≤ 1 is given by the formula $\text{logit}(p) = \log\left(\frac{p}{1-p}\right)$, and represents the log-odds ratio (of survival in this case) (Table 18.6).
confint(mylogit)
exp(coef(mylogit))  # exponentiated logit model coefficients (odds ratios)
## (Intercept)          SE
##  0.01652254  2.13474149
Note that the odds-ratio estimate for SE, 2.13474149, equals exp(0.7583456), the exponentiated logistic regression coefficient.
Table 18.6 Point and interval estimates of the odds ratio of survival
              OR           2.5%           97.5%
(Intercept)   0.01652254   0.0001825743   0.277290
SE            2.13474149   1.3083794719   4.839986

exp(cbind(OR = coef(mylogit), confint(mylogit)))
##                     OR          2.5 %     97.5 %
## (Intercept) 0.01652254   0.0001825743   0.277290
## SE          2.13474149   1.3083794719   4.839986
We can compute the LRT and report its p-value (0.0006476922) by using the with() function:
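A hedged sketch of this computation, comparing the fitted model against its null model via the chi-squared distribution of the deviance difference, is:
with(mylogit, pchisq(null.deviance - deviance, df.null - df.residual, lower.tail = FALSE))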
## [1] 0.0006476922
The LRT p-value < 0.001 tells us that our model as a whole fits significantly better than an empty (intercept-only) model. The residual deviance equals −2 × the model log-likelihood, and we can report the model's log-likelihood by:
logLik(mylogit)
The LRT compares the data fit of two models. For instance, removing
predictor variables from a model may reduce the model quality (i.e., a model will
have a lower log likelihood). To statistically assess whether the observed
difference in model fit is significant, the LRT compares the difference of the
log likelihoods of the two models. When this difference is statistically
significant, the full model (the one with more variables) represents a better fit
to the data, compared to the reduced model. LRT is computed using the log
likelihoods (ll) of the two models:
$$LRT = -2\ln\left(\frac{L(m_1)}{L(m_2)}\right) = 2\left(ll(m_2) - ll(m_1)\right),$$
where:
• m1 and m2 are the reduced and the full models, respectively,
• L(m1) and L(m2) denote the likelihoods of the 2 models, and
• ll(m1) and ll(m2) represent the log likelihood (natural log of the model
likelihood function).
As n → ∞, the distribution of the LRT statistic is asymptotically chi-squared, with degrees of freedom equal to the number of parameters that are reduced (i.e., the number of variables removed from the model). In our case, the LRT is compared against a $\chi^2_{df=1}$ distribution, since the full model has an intercept and one predictor (SE), whereas the reduced (null) model contains only the intercept.
18.8.3 False Discovery Rate (FDR)
$$FDR = \underbrace{E}_{\text{expectation}}\Bigg[\underbrace{\frac{\text{False Positives}}{\#\text{ total number of selected features}}}_{\text{False Discovery Proportion}}\Bigg].$$
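The vector of 14 p-values used in this illustration is defined in a portion of the script not shown here; based on the printed output below, it could be reconstructed as:
# hypothetical reconstruction of the sorted p-values and the FDR level used below
alpha.star <- 0.05
pvals <- sort(c(0.010, 0.010, 0.013, 0.014, 0.051, 0.190, 0.350, 0.350,
                0.500, 0.630, 0.670, 0.750, 0.810, 0.900))
pvals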
##  [1] 0.010 0.010 0.013 0.014 0.051 0.190 0.350 0.350 0.500 0.630 0.670
## [12] 0.750 0.810 0.900
# calculate the BH threshold for each p-value
threshold <- alpha.star*(1:length(pvals))/length(pvals)
##       pvals   threshold
## [1,]  0.010 0.003571429 0
## [2,]  0.010 0.007142857 0
## [3,]  0.013 0.010714286 0
…
## [12,] 0.750 0.042857143 0
## [13,] 0.810 0.046428571 0
## [14,] 0.900 0.050000000 0
Starting with the smallest p-value and moving up, we find the largest k for which the p-value p(k) is less than its threshold; here, k̂ = 4.
Next, the algorithm rejects the null hypotheses for the tests that correspond
to the p-values p(1), p(2), p(3), p(4).
Note that since we controlled the FDR at α* = 0.05, we are guaranteed that, on average, only 5% of the tests that we rejected are spurious. Since 5% of our 4 rejections is well below 1, we expect that none of our rejections are spurious.
The Bonferroni-corrected α for these data is 0.05/14 = 0.0036. If we had used this family-wise error rate in our individual hypothesis tests, then we would have concluded that none of our 14 results were significant!
Fig. 18.18 Graphical representation of the naïve, conservative Bonferroni, and FDR critical p-
values
# generate the values to be plotted on the x-axis
x.values <- (1:length(pvals))/length(pvals)
# widen right margin to make room for labels
par(mar=c(4.1, 4.1, 1.1, 4.1))
# label lines
mtext(c('naive', 'Bonferroni'), side=4, at=c(.05, .05/length(pvals)), las=1, line=0.2)

# select observations that are less than the threshold
for.test <- cbind(1:length(pvals), pvals)
pass.test <- for.test[pvals <= 0.05*x.values, ]
pass.test
##       pvals
## 4.000 0.014

# use the largest k to color points that meet the Benjamini-Hochberg FDR test
last <- ifelse(is.vector(pass.test), pass.test[1], pass.test[nrow(pass.test), 1])
points(x.values[1:last], pvals[1:last], pch=19, cex=1.5)
FDR Adjusting the p-Values
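FDR-adjusted (Benjamini-Hochberg) p-values can also be obtained with R's built-in p.adjust() function; a minimal sketch using the 14 example p-values above is:
round(p.adjust(pvals, method = "BH"), 4)   # BH-adjusted p-values; values <= 0.05 are rejected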
We now run the knockoff filter along with the Benjamini-Hochberg (BH)
procedure for controlling the false-positive rate of feature selection. More details
about the knock-off filtering methods are available online.
Before running either selection procedure, remove rows with missing values,
reduce the design matrix by removing predictor columns that do not appear
frequently (e.g., at least three times in the sample), and remove any columns
that are duplicates.
library(knockoff)
… 'equicorrelated')
names(result$selected)

# Run BH (Benjamini-Hochberg)
k = ncol(X)
lm.fit = lm(Y ~ X - 1)  # no intercept
# Alternatively:
# dat = as.data.frame(cbind(Y, X))
# lm.fit = lm(Y ~ . - 1, data = dat)  # no intercept
intersect(BH_selected, knockoff_selected)

18.9 Assignment: 18. Regularized Linear Modeling and Knockoff Filtering

Chapter 19
Big Longitudinal Data Analysis
beijing.pm25[beijing.pm25$Value == -999, 9] <- NA
beijing.pm25[is.na(beijing.pm25$Value), 9] <- floor(mean(beijing.pm25$Value, na.rm = T))
Here we first recode the missing values (labeled -999) as NA. Then we replace all NA labels with the mean computed from the non-missing observations. Note that the floor() function casts the arithmetic average to an integer, which is needed as AQI values are expected to be whole numbers.
Now, let's observe the trend of the hourly average PM2.5 across one day. You can see a significant pattern: the PM2.5 level peaks in the afternoons and is lowest in the early mornings. It exhibits approximate periodic boundary conditions (these patterns oscillate daily) (Fig. 19.1).
Fig. 19.1 Time course of the mean, top-20%, and bottom-20% air quality in Beijing (PPM2.5)
require(ggplot2)
id  = 1:nrow(beijing.pm25)
mat = matrix(0, nrow=24, ncol=3)
# mean and 20%/80% quantiles of the PM2.5 values for a given set of hourly indices
stat = function(iid){
  c(mean(beijing.pm25[iid, "Value"]),
    quantile(beijing.pm25[iid, "Value"], c(0.2, 0.8)))
}
for (i in 1:24){
  iid = which(id %% 24 == i-1)
  mat[i, ] = stat(iid)
}
To begin with, we can visualize the overall trend by plotting PM2.5 values against
time. This can be achieved using the plyr package.
library(plyr)
ts<-ts(beijing.pm25$Value, start=1, end=69335, frequency=1)
ts.plot(ts)
The dataset is recorded hourly, and the 8-year time interval includes about 69,335 hours of records. Therefore, we start at the first hour and end at the 69,335th hour. Each hour has a univariate PM2.5 AQI value measurement, so frequency = 1.
From this time series plot, Fig. 19.2, we observe that the data has some peaks
but most of the AQIs stay under 300 (which is considered hazardous).
The original plot seems to have no trend at all. Remember that our measurements are hourly. Will there be any difference if we use monthly averages instead of hourly reported values? In this case, we can use the Simple Moving Average (SMA) technique to smooth the original graph.
Fig. 19.2 Raw time-series plot of the Beijing air quality measures (2008–2016)
Fig. 19.3 Simple moving monthly average PM2.5 air quality index values
To accomplish this, we need to install the TTR package and utilize the SMA()
method (Fig. 19.3).
#install.packages("TTR")
library(TTR)
bj.month<-SMA(ts, n=720)
plot.ts(bj.month, main="Monthly PM2.5 Level SMA", ylab="PM2.5 AQI")
The pattern seems less obvious in this graph, Fig. 19.4. Here we use an exponential smoothing ratio of 2/(n + 1).
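A minimal sketch of this exponentially-weighted smoothing using TTR::EMA, which by default uses the smoothing ratio 2/(n + 1); the window n = 720 mirrors the monthly SMA above:

bj.month.ema <- EMA(ts, n=720)
plot.ts(bj.month.ema, main="Monthly PM2.5 Level EMA", ylab="PM2.5 AQI")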
ARIMA models have two components: an autoregressive (AR) part and a moving average (MA) part. An ARIMA(p,d,q) model has p AR terms, q MA terms, and d is the order of differencing. Differencing is used to make the original dataset approximately stationary. ARIMA(p,d,q) has the following analytical form:
(1 − ∑_{i=1}^{p} φ_i L^i) (1 − L)^d X_t = (1 + ∑_{i=1}^{q} θ_i L^i) ε_t,

where L denotes the lag (backshift) operator.
First, let’s try to determine the parameter d. To make the data stationary on the
mean (remove any trend), we can use first differencing or second order
differencing. Mathematically, first differencing is taking the difference between
two adjacent data points:
y_t' = y_t − y_{t−1}.

Second-order differencing takes the difference of the first differences:

y_t* = y_t' − y_{t−1}' = y_t − 2y_{t−1} + y_{t−2}.
Let’s see which differencing method is proper for the Beijing PM2.5 dataset.
Function diff() in R base can be used to calculate differencing. We can plot the
differences by plot.ts() (Fig. 19.5).
Neither of them appears quite stationary. In this case, we can consider using some smoothing techniques on the data, as we did above (bj.month <- SMA(ts, n=720)). Let's see if smoothing by an exponentially-weighted mean (EMA) can help make the data approximately stationary (Fig. 19.6).
Fig. 19.6 Monthly-smoothed first- and second-order differencing of the AQI data
par(mfrow=c(2, 1))
bj.diff2 <- diff(bj.month, differences=2)
plot.ts(bj.diff2, main="2nd differencing")
bj.diff <- diff(bj.month, differences=1)
plot.ts(bj.diff, main="1st differencing")
Both of these EMA-filtered graphs have tempered variance and appear pretty
stationary with respect to the first two moments, mean and variance.
To decide the autoregressive (AR) and moving average (MA) parameters in the model, we need to create autocorrelation factor (ACF) and partial autocorrelation factor (PACF) plots. The PACF may suggest a value for the AR-term order p, and the ACF may help us determine the MA-term order q. We plot the ACF and PACF using the approximately stationary time series, the bj.diff object (Fig. 19.7).
par(mfrow=c(1, 2))
acf(ts(bj.diff), lag.max = 20, main="ACF")
pacf(ts(bj.diff), lag.max = 20, main="PACF")
• A pure AR model (q = 0) will have a cut-off at lag p in the PACF.
• A pure MA model (p = 0) will have a cut-off at lag q in the ACF.
• An ARMA(p, q) model will (eventually) have a decay in both.
Fig. 19.7 Autocorrelation factor (ACF) and partial autocorrelation factor (PACF) plots of bj.diff
In the ACF plot, all spikes fall outside of the (approximately normal) insignificance band, while two of the spikes are significant in the PACF plot. In this case, the best ARIMA model is likely to have both AR and MA parts.
We can examine for seasonal effects in the data using stats::stl(), a flexible
function for decomposing and forecasting the series, which uses averaging to
calculate the seasonal component of the series and then subtracts the seasonality.
Decomposing the series and removing the seasonality can be done by subtracting
the seasonal component from the original series using forecast::seasadj(). The
frequency parameter in the ts() object specifies the periodicity of the data or the
number of observations per period, e.g., 30, for monthly smoothed daily data (Fig.
19.8).
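A minimal sketch of this seasonal decomposition, assuming the smoothed series bj.month from above and a period of 30 observations; the deseasonal_count object produced here is the one used for the ARIMA(1,1,24) fit below:

count_ma <- ts(na.omit(bj.month), frequency=30)    # periodicity: 30 observations per cycle
decomp <- stl(count_ma, s.window="periodic")       # LOESS-based seasonal decomposition
deseasonal_count <- forecast::seasadj(decomp)      # subtract the seasonal component
plot(decomp)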
# Augmented Dickey-Fuller test for stationarity (tseries package)
tseries::adf.test(bj.diff, alternative = "stationary")

## Augmented Dickey-Fuller Test
## data:  bj.diff
## Dickey-Fuller = -29.188, Lag order = 41, p-value = 0.01
## alternative hypothesis: stationary
We see that we can reject the null hypothesis; therefore, there is no statistically significant non-stationarity in the bj.diff time series.
# install.packages("forecast")
library(forecast)
fit <- auto.arima(bj.month, approx=F, trace=F)
fit

## Series: bj.month
## ARIMA(1,1,4)
##
## Coefficients:
##          ar1     ma1     ma2     ma3     ma4
##       0.9426  0.0813  0.0323  0.0156  0.0074
## s.e.  0.0016  0.0041  0.0041  0.0041  0.0041
##
There is a clear pattern present in the ACF/PACF plots, Fig. 19.10, suggesting that the model residuals repeat with an approximate lag of 12 or 24 months. We may try a modified model with different parameters, e.g., p = 24 or q = 24; such a model is fit below.
Fig. 19.10 ARIMA(1,1,4) model plot, ACF and PACF plots of the residuals for bj.month

Fig. 19.11 An improved ARIMA(1,1,24) model plot, ACF and PACF plots of the residuals for bj.month
Fig. 19.12 Diagnostic plot of the residuals of the ARIMA(1,1,24) time-series model for bj.month
fit24 <- arima(deseasonal_count, order=c(1,1,24)); fit24

## Call:
## arima(x = deseasonal_count, order = c(1, 1, 24))
##
## Coefficients:
##          ar1     ma1     ma2     ma3      ma4      ma5      ma6      ma7
##       0.9496  0.0711  0.0214  0.0054  -0.0025  -0.0070  -0.0161  -0.0149
## s.e.  0.0032  0.0049  0.0049  0.0048   0.0047   0.0046   0.0045   0.0044
##          ma8      ma9     ma10     ma11     ma12     ma13     ma14
##      -0.0162  -0.0118  -0.0100  -0.0136  -0.0045  -0.0055  -0.0075
## s.e.  0.0044   0.0043   0.0042   0.0042   0.0042   0.0041   0.0041
##         ma15     ma16     ma17    ma18    ma19    ma20    ma21    ma22
##      -0.0060  -0.0005  -0.0019  0.0066  0.0088  0.0156  0.0247  0.0117
## s.e.  0.0041   0.0041   0.0041  0.0041  0.0041  0.0040  0.0040  0.0040
##        ma23    ma24
##      0.0319  0.0156
## s.e. 0.0040  0.0039
##
## sigma^2 estimated as 0.004585:  log likelihood = 88295.88,  aic = -176539.8

tsdisplay(residuals(fit24), lag.max=36, main='Seasonal Model Residuals')
displayForecastErrors <- function(forecastErrors)
{
  # Generate a histogram of the forecast errors
  binsize <- IQR(forecastErrors)/4
  sd  <- sd(forecastErrors)
  min <- min(forecastErrors) - sd
  max <- max(forecastErrors) + sd
  # Simulate a normal reference sample with the same SD
  # (these lines were truncated in the source and are reconstructed here)
  norm <- rnorm(5000, mean=0, sd=sd)
  min2 <- min(norm)
  max2 <- max(norm)
  if (min2 < min) { min <- min2 }
  if (max2 > max) { max <- max2 }
  # Histogram of the errors with an overlaid normal reference curve (reconstructed)
  bins <- seq(min, max + binsize, by=binsize)   # extend by one bin to cover the full range
  hist(forecastErrors, col="red", freq=FALSE, breaks=bins)
  myHist <- hist(norm, plot=FALSE, breaks=bins)
  points(myHist$mids, myHist$density, type="l", col="blue", lwd=2)
}
Now, we can use our models to make predictions of future PM2.5 AQI. We will use the function forecast() to make predictions, specifying the number of periods we want to forecast. Using the smoothed data, we can make predictions for the next month, July 2016. As each month has about 24 × 30 = 720 hours, we specify a horizon h = 720 (Fig. 19.13).
par(mfrow=c(1, 1))
ts.forecasts <- forecast(fit, h=720)
plot(ts.forecasts, include = 2880)
When plotting the forecasted values with the original smoothed data, we include only the last 3 months of the original smoothed data so that the predicted values are seen more clearly. The shaded regions indicate ranges of expected errors: the darker (inner) region represents the 80% confidence range, and the lighter (outer) region bounds the 95% confidence range.
Fig. 19.13 Prospective out-of-range prediction intervals of the ARIMA(1,1,4) time-series model
http://www.seasonal.website
The most general kind of SEM is a structural regression path model with latent variables, which account for measurement errors of the observed variables. Model identification determines whether the model allows for unique parameter estimates and may be based on the model degrees of freedom (df_M ≥ 0) or a known scale for every latent feature. If ν represents the number of observed variables, then the total degrees of freedom for a SEM, ν(1 + ν)/2, corresponds to the number of variances and unique covariances in a variance-covariance matrix for all the features, and the model degrees of freedom are df_M = ν(1 + ν)/2 − l, where l is the number of estimated parameters.
Examples include:
• A just-identified model (df_M = 0) with unique parameter estimates,
• An over-identified model (df_M > 0), desirable for model testing and assessment,
• An under-identified model (df_M < 0), which does not guarantee unique solutions for all parameters. In practice, such models occur when the effective degrees of freedom are reduced due to two or more highly-correlated features, which presents problems with parameter estimation. In these situations, we can exclude or combine some of the features, thereby boosting the degrees of freedom.
The latent variables’ scale property reflects their unobservable, not
measurable, characteristics. The latent scale, or unit, may be inferred from one of
its observed constituent variables, e.g., by imposing a unit loading identification
constraint fixing at 1.0 the factor loading of one observed variable.
An SEM model with appropriate scale and degrees-of-freedom conditions may be identifiable subject to Bollen's two-step identification rule. When both the CFA and path components of the SEM model are identifiable, then the whole SR model is identified, and model fitting can be initiated.
• For the confirmatory factor analysis (CFA) part of the SEM, identification
requires (1) a minimum of two observed variables for each latent feature, (2)
independence between measurement errors and the latent variables, and (3)
independence between measurement errors.
• For the path component of the SEM, ignoring any observed variables used to
measure latent variables, model identification requires: (1) errors associated
with endogenous latent variables to be uncorrelated, and (2) all causal effects to
be unidirectional.
The LISREL representation can be summarized by the following matrix equations. The measurement model component is

x = Λ_x ξ + δ,
y = Λ_y η + ε,

where x (p × 1) and y (q × 1) are the observed exogenous and endogenous indicator vectors, ξ and η are the latent exogenous and endogenous variables, Λ_x and Λ_y are the corresponding factor-loading matrices, and δ and ε are the measurement errors. Let's also denote the two variance-covariance matrices, Θ_δ (p × p) and Θ_ε (q × q), representing the variance-covariance matrices among the measurement errors δ and ε, respectively. The third equation, describing the LISREL path model component as relationships among the latent variables, is

η = Bη + Γξ + ζ,

where B and Γ are matrices of structural (path) coefficients and ζ is the vector of structural disturbances.
The model-implied variance-covariance matrix Σ can then be written in terms of these parameter matrices, where A = (I − B)^{−1}. This representation of Σ does not involve the observed and latent exogenous and endogenous variables x, y, ξ, η. Maximum likelihood estimation (MLE) may be used to obtain the Σ parameters via iterative searches for a set of optimal parameters minimizing the element-wise deviations between Σ and the sample variance-covariance matrix S.
The process of optimizing the objective function f(Σ,S) can be achieved by
computing the log likelihood ratio, i.e., comparing the likelihood of a given fitted
model to the likelihood of a perfectly fit model. MLE estimation requires
multivariate normal distribution for the endogenous variables and Wishart
distribution for the observed variance-covariance matrix, S.
Using MLE estimation simplifies the objective function to:
f(Σ, S) = ln|Σ| + tr(S Σ^{−1}) − ln|S| − tr(S S^{−1}).
The R lavaan package uses the SEM syntax shown in Table 19.1 to represent relationships between variables. We can use this table to specify lavaan models.
For example, in R we can write the following model:

model <- '
  # regressions
  y1 + y2 ~ f1 + f2 + x1 + x2
  f1 ~ f2 + f3
  f2 ~ f3 + x1 + x2
  # latent variable definitions
  f1 =~ y1 + y2 + y3
  f2 =~ y4 + y5 + y6
  f3 =~ y7 + y8 + y9 + y10
  # variances and covariances
  y1 ~~ y1
  y1 ~~ y2
  f1 ~~ f2
  # intercepts
  y1 ~ 1
  f1 ~ 1
'
Note that the two single-quote symbols (') at the beginning and end of the model description are very important in the R syntax.
Table 19.1 Lavaan syntax for specifying the relations between variables and their variance-covariance structure

Formula type                 Operator   Explanation
Latent variable definition   =~         Is measured by
Regression                   ~          Is regressed on
(Residual) (co)variance      ~~         Is correlated with
Intercept                    ~1         Intercept
Let’s use the PPMI dataset in our class file as an example to illustrate SEM
model fitting.
Now, we can import the dataset into R and recode the ResearchGroup variable into
a binary variable.
par(mfrow=c(1, 1))
PPMI <- read.csv("https://umich.instructure.com/files/330397/download?download_frd=1")
summary(PPMI)
##     FID_IID     L_insular_cortex_ComputeArea  L_insular_cortex_Volume
##  Min.   :3001   Min.   :  50.03
Fig. 19.15 Pair-wise correlation structure of the Parkinson’s disease (PPMI) data.
This large dataset has 1,764 observations and 31 variables, with missing data in some of them. Many of the variables are highly correlated. You can inspect the correlation structure using heat maps, which reorder the covariates according to their correlations to illustrate clusters of high correlations (Fig. 19.15).
library(reshape2)   # provides melt()
pp_heat <- PPMI[complete.cases(PPMI), -20]
corr_mat = cor(pp_heat)

# Remove upper triangle
corr_mat_lower = corr_mat
corr_mat_lower[upper.tri(corr_mat_lower)] = NA

# Melt correlation matrix and make sure order of factor variables is correct
corr_mat_melted = melt(corr_mat_lower)
colnames(corr_mat_melted) <- c("Var1", "Var2", "value")
corr_mat_melted$Var1 = factor(corr_mat_melted$Var1, levels=colnames(corr_mat))
corr_mat_melted$Var2 = factor(corr_mat_melted$Var2, levels=colnames(corr_mat))

# Plot
corr_plot = ggplot(corr_mat_melted, aes(x=Var1, y=Var2, fill=value)) +
  geom_tile(color='white') +
  scale_fill_distiller(limits=c(-1, 1), palette='RdBu', na.value='white',
                       name='Correlation') +
  ggtitle('Correlations') +
  coord_fixed(ratio=1) +
  theme_minimal() +
  scale_y_discrete(position="right") +
  theme(axis.text.x=element_text(angle=45, vjust=1, hjust=1),
        axis.title.x=element_blank(),
        axis.title.y=element_blank(),
        panel.grid.major=element_blank(),
        legend.position=c(0.1, 0.9),
        legend.justification=c(0, 1))
corr_plot
And here are some specific correlations:

cor(PPMI$L_insular_cortex_ComputeArea, PPMI$L_insular_cortex_Volume)
## [1] 0.9837297

cor(PPMI$UPDRS_part_I, ...)   # second argument elided in the source
## [1] 0.5326681
One way to solve this substantial multivariate correlation issue is to create some
latent variables. We can consider the following model.
model1 <- '
  # latent variable definitions
  Imaging  =~ L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume +
              R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume +
              R_insular_cortex_ComputeArea + R_insular_cortex_Volume
  UPDRS    =~ UPDRS_part_I + UPDRS_part_II + UPDRS_part_III
  DemoGeno =~ Weight + Sex + Age
  # regression (assumed form; the remainder of the model string was truncated in the source)
  ResearchGroup ~ Imaging + DemoGeno + UPDRS
'
mydata <- scale(PPMI[, -20])
mydata <- data.frame(mydata, PPMI$ResearchGroup)
colnames(mydata)[31] <- "ResearchGroup"
Step 3 – Fitting a Model on the Data
Now, we can start to build the model. The cfa() function we will use is part of the
lavaan package.
# install.packages("lavaan")
library(lavaan)
fit <- cfa(model1, data=mydata, missing='FIML')
Here we see some warning messages. Both our covariance and error-term matrices are not positive definite. Non-positive definite matrices can cause the estimates of our model to be biased. There are many factors that can lead to this problem. In this case, we may have created latent variables that are not a good fit for our data. Let's try to delete the DemoGeno latent variable and instead add Weight, Sex, and Age directly to the regression model.
model2 <- '
  # (1) Measurement Model
  Imaging =~ L_cingulate_gyrus_ComputeArea + L_cingulate_gyrus_Volume +
             R_cingulate_gyrus_ComputeArea + R_cingulate_gyrus_Volume +
             R_insular_cortex_ComputeArea + R_insular_cortex_Volume
  UPDRS   =~ UPDRS_part_I + UPDRS_part_II + UPDRS_part_III
  # (2) Regressions
  ResearchGroup ~ Imaging + UPDRS + Age + Sex + Weight
'
When fitting model2, the warning messages are gone. We can see that inappropriately adding a latent variable can cause these matrices to become non-positive definite. Currently, the lavaan functions sem() and cfa() are the same.
fit<-cfa(model2, data=mydata, missing = 'FIML')
summary(fit, fit.measures=TRUE)
## lavaan (0.5-23.1097) converged normally after 107 iterations
##
##   Number of observations                          1764
##
##   Number of missing patterns                         4
##
##   Estimator                                         ML
##   Minimum Function Test Statistic             7714.119
##   Degrees of freedom                                60
##   P-value (Chi-square)                           0.000
##
## Model test baseline model:
##
##   Minimum Function Test Statistic            30237.866
##   Degrees of freedom                                75
##   P-value                                        0.000
##
## User model versus baseline model:
##
##   Comparative Fit Index (CFI)                    0.746
##   Tucker-Lewis Index (TLI)                       0.683
##
## Loglikelihood and Information Criteria:
##
##   Loglikelihood user model (H0)                     NA
##   Loglikelihood unrestricted model (H1)             NA
##
##   Number of free parameters                         35
##   Akaike (AIC)                                      NA
##   Bayesian (BIC)                                    NA
##
## Root Mean Square Error of Approximation:
##
##   RMSEA                                          0.269
##   90 Percent Confidence Interval          0.264  0.274
##   P-value RMSEA <= 0.05                          0.000
##
## Standardized Root Mean Square Residual:
##
##   SRMR                                           0.052
##
## Parameter Estimates:
##
##   Information                                 Observed
##   Standard Errors                             Standard
##
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)
##   Imaging =~
##     L_cnglt_gyr_CA    1.000
##     L_cnglt_gyrs_V    0.994    0.004  260.366
##   ...
##                                        27.917    0.000
##    .R_cnglt_gyrs_V    0.093    0.003   27.508    0.000
##    .R_nslr_crtx_CA    0.141    0.005   28.750    0.000
##    .R_nslr_crtx_Vl    0.159    0.006   28.728    0.000
##    .UPDRS_part_I      0.877    0.038   23.186    0.000
##    .UPDRS_part_II     0.561    0.033   16.873    0.000
##    .UPDRS_part_III    0.325    0.036    9.146    0.000
##    .ResearchGroup     0.083    0.006   14.808    0.000
##     Imaging           0.993    0.034   29.509    0.000
##     UPDRS             0.182    0.035    5.213    0.000
19.2.4 Outputs of Lavaan SEM
In the output of our model, we have information about how to create these two
latent variables (Imaging, UPDRS) and the estimated regression model.
Specifically, it gives the following information.
1. The first six lines, called the header, contain the following information:
According to the output of the model fit, our latent variable UPDRS is a combination of three observed variables: UPDRS_part_I, UPDRS_part_II, and UPDRS_part_III. We can visualize how average UPDRS values differ among the research groups over time.
mydata$UPDRS <- mydata$UPDRS_part_I + 1.890*mydata$UPDRS_part_II +
                2.345*mydata$UPDRS_part_III

mydata$Imaging <- mydata$L_cingulate_gyrus_ComputeArea +
                  0.994*mydata$L_cingulate_gyrus_Volume +
                  0.961*mydata$R_cingulate_gyrus_ComputeArea +
                  0.955*mydata$R_cingulate_gyrus_Volume +
                  0.930*mydata$R_insular_cortex_ComputeArea +
                  0.920*mydata$R_insular_cortex_Volume
The above code stores the latent UPDRS and Imaging variables into mydata.
By now, we are experienced with using the package ggplot2 for data visualization.
Now, we will use it to set the x and y axes as time and UPDRS, and then display
the trend of the individual level UPDRS.
19.3 Longitudinal Data Analysis-Linear Mixed Models
Fig. 19.16 Average UPDRS scores of the two cohorts in the PPMI dataset, patients (1) and
controls (0)
require(ggplot2)
p <- ggplot(data=mydata, aes(x=time_visit, y=UPDRS, group=FID_IID))
dev.off()
p + geom_point() + geom_line()
This graph is a bit messy without a clear pattern emerging. Let’s see if group-
level graphs may provide more intuition. We will use the aggregate() function to
get the mean, minimum and maximum of UPDRS for each time point. Then, we
will use separate color for the two research groups and examine their mean trends
(Fig. 19.16).
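A minimal sketch of this group-level summary and plot; the column names follow the mydata object constructed above, and the min-max error bars are an illustrative choice:

ppmi_summary <- aggregate(UPDRS ~ time_visit + ResearchGroup, data = mydata,
                          FUN = function(x) c(mean=mean(x), min=min(x), max=max(x)))
ppmi_summary <- do.call(data.frame, ppmi_summary)   # flatten the matrix column
ggplot(ppmi_summary, aes(x=time_visit, y=UPDRS.mean, color=ResearchGroup)) +
  geom_point() + geom_line() +
  geom_errorbar(aes(ymin=UPDRS.min, ymax=UPDRS.max), width=0.1)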
Y_i = Z_i β_i + ε_i,
β_i = A_i β + b_i.

So, substituting the second equation into the first, the full model in matrix form would be:

Y_i = Z_i A_i β + Z_i b_i + ε_i.
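The fixed-effects (GLM) fit summarized below can be reproduced with a call of the following form; this is a sketch inferred from the Call line of the output (the object name model.glm is illustrative), not code present in the source:

model.glm <- glm(UPDRS ~ Imaging + ResearchGroup*time_visit + Age + Sex + Weight,
                 data = mydata)
summary(model.glm)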
##
## Call:
## glm(formula = UPDRS ~ Imaging + ResearchGroup * time_visit +
##     Age + Sex + Weight, data = mydata)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -7.6065  -2.4581  -0.3159   1.8328  14.9746
##
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)
## (Intercept)                 0.70000    0.10844   6.455 1.57e-10 ***
## Imaging                     0.03834    0.01893   2.025   0.0431 *
## ResearchGroup1             -6.93501    0.33445 -20.736  < 2e-16 ***
## time_visit                  0.05077    0.10843   0.468   0.6397
## Age                         0.54171    0.10839   4.998 6.66e-07 ***
## Sex                         0.16170    0.11967   1.351   0.1769
## Weight                      0.20980    0.11707   1.792   0.0734 .
## ResearchGroup1:time_visit  -0.06842    0.32970  -0.208   0.8356
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# lmer() is provided by the lme4 package
library(lme4)
model.lmm <- lmer(UPDRS ~ Imaging + ResearchGroup*time_visit + Age + Sex + Weight +
                  (time_visit|FID_IID), data=mydata)
summary(model.lmm)
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -3.2660 -0.4617 -0.0669  0.3575  4.6158
##
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  FID_IID  (Intercept) 7.8821   2.8075
##           time_visit  0.2454   0.4954   0.16
##  Residual             3.1233   1.7673
## Number of obs: 1206, groups:  FID_IID, 440
## Fixed effects:
##   ...
In the summary of the LMM model, we can see a section called Correlation of Fixed Effects. The original model made no assumption about the correlation (unstructured correlation). In R, we usually encounter the following four types of correlation models.
• Independence: no correlation between observations:

    1  0  0
    0  1  0
    0  0  1

• Exchangeable: a constant correlation ρ between any pair of observations:

    1  ρ  ρ
    ρ  1  ρ
    ρ  ρ  1

• Autoregressive, AR(1): the correlation decays with the time lag:

    1    ρ    ρ^2
    ρ    1    ρ
    ρ^2  ρ    1

• Unstructured: each pair of observations has its own correlation:

    1        ρ_{1,2}  ρ_{1,3}
    ρ_{1,2}  1        ρ_{2,3}
    ρ_{1,3}  ρ_{2,3}  1
In the LMM model, the estimated correlation structure also appears unstructured, so we need not worry about changing the correlation structure. However, if the output under the unstructured correlation assumption looks like an Exchangeable or AR(1) structure, we may consider changing the LMM correlation structure accordingly.
The primary focus of GEE is the estimation of the mean model, E(Y_{i,j} | X_{i,j}) = μ_{i,j}. This mean model can be any generalized linear model, for example, P(Y_{i,j} = 1 | X_{i,j}) = π_{i,j} (a marginal probability, as we don't condition on any other variables).
Since the data could be clustered (e.g., within subject, or within unit), we need to choose a correlation model. Let's introduce some notation:
– The specification of a mean model, μ_{i,j}(β), and a correlation model, R_i(α), does not identify a complete probability model for Y_i.
– The model {μ_{i,j}(β), R_i(α)} is semi-parametric since it only specifies the first two multivariate moments (mean and covariance) of Y_i. Higher-order moments are not specified.
The GEE estimating equation combines three components:
• Scale: a change-of-scale term transforming the scale of the mean, μ_i, to the scale of the regression coefficients (covariates).
• Variance weight: the inverse of the variance-covariance matrix is used to weight the data for subject i, i.e., giving more weight to differences between observed and expected values for subjects that contribute more information.
• Model mean: specifies the mean model, μ_i(β), compared to the observed data, Y_i. This fidelity term minimizes the difference between the actually-observed and mean-expected values (within the i-th cluster/subject). See also the SMHS EBook.
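A minimal GEE fit sketch using the geepack package; the formula, identity link, and exchangeable working correlation are illustrative choices, and mydata is the object constructed above:

library(geepack)
model.gee <- geeglm(UPDRS ~ Imaging + ResearchGroup*time_visit + Age + Sex + Weight,
                    id = FID_IID,              # cluster (subject) identifier
                    data = na.omit(mydata),
                    family = gaussian,         # mean model on the identity scale
                    corstr = "exchangeable")   # working correlation model R_i(alpha)
summary(model.gee)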
The only difference between GLMM and LMM in this situation is that GLMM
used a logit link for the binary response.
With GEE, we don’t have random intercept or slope terms.
For the GLMM, the (conditional) logit model is

log[ P(Y_{ij} = 1 | X_{ij}, b_i) / P(Y_{ij} = 0 | X_{ij}, b_i) ] = β_0 + β_1 x_{ij} + ε_{ij}.
# display() is provided by the arm package
library(arm)
model.glmm <- glmer(ResearchGroup ~ UPDRS + Imaging + Age + Sex + Weight + (1|FID_IID),
                    data=mydata, family="binomial")
display(model.glmm)

## glmer(formula = ResearchGroup ~ UPDRS + Imaging + Age + Sex +
##     Weight + (1 | FID_IID), data = mydata, family = "binomial")
##             coef.est coef.se
## (Intercept) -86.63    32.07
## UPDRS       -16.78     6.27
## Imaging       0.59     0.61
## Age           6.04     2.41
## Sex           0.65     2.15
## Weight        6.12     3.76
##
## Error terms:
##  Groups   Name        Std.Dev.
##  FID_IID  (Intercept) 40.72
##  Residual              1.00
## ---
## number of obs: 1206, groups: FID_IID, 440
## AIC = 129.5, DIC = -114.1
## deviance = 0.7
In terms of AIC, the GLMM model is a lot better than the GLM model.
Try to apply some of these longitudinal data analytics on the fMRI data we
discussed in Chap. 4 (Visualization).
Review the 3D/4D MRI imaging data discussion in Chap. 4. Extract the time
courses of several time series at different 3D spatial locations, some near-by, and
some farther apart (distant voxels). Then, apply time-series analyses, report
findings, determine if near-by or farther-apart voxels may be more correlated.
Example of extracting time series from 4D fMRI data:
# See examples here: https://cran.r-project.org/web/packages/oro.nifti/vignettes/nifti.pdf
library(oro.nifti)   # provides readNIfTI()
fMRIURL  <- "http://socr.umich.edu/HTML5/BrainViewer/data/fMRI_FilteredData_4D.nii.gz"
fMRIFile <- file.path(tempdir(), "fMRI_FilteredData_4D.nii.gz")
download.file(fMRIURL, dest=fMRIFile, quiet=TRUE)
(fMRIVolume <- readNIfTI(fMRIFile, reorient=FALSE))
# dimensions: 64 x 64 x 21 x 180 ; 4mm x 4mm x 6mm x 3 sec
References
Box GE, Jenkins GM, Reinsel GC, Ljung GM. Time series analysis: forecasting and control:
John Wiley & Sons; 2015.
Grace JB. Structural equation modeling and natural systems: Cambridge University Press; 2006.
http://idaejin.github.io/bcam-courses/neiker-2016/material/mixed-models/
Liang K-Y, Zeger S. Longitudinal data analysis using generalized linear models. Biometrika.
1986;73(1):13-22. doi: https://doi.org/10.1093/biomet/73.1.13.
McCulloch CE, Neuhaus JM. Generalized linear mixed models: Wiley Online Library; 2013.
McIntosh A, Gonzalez-Lima F. Structural equation modeling and its application to network
analysis in functional brain imaging. Human Brain Mapping. 1994;2(1-2):2-22.
Shipley B. Cause and correlation in biology: a user’s guide to path analysis, structural equations
and causal inference with R: Cambridge University Press; 2016.
Chapter 20
Natural Language Processing/Text Mining
http://www.conversational-technologies.com/nldemos/nlDemos.htm
Let’s create some documents we can use to illustrate the use of the tm package for
text mining. The five documents below represent portions of the syllabi of five
recent courses taught by the author:
• HS650: Data Science and Predictive Analytics (DSPA)
• Bootcamp: Predictive Big Data Analytics using R
• HS 853: Scientific Methods for Health Sciences: Special Topics
• HS851: Scientific Methods for Health Sciences: Applied Inference, and
• HS550: Scientific Methods for Health Sciences: Fundamentals
We import the syllabi into several separate segments represented as documents.
• As an exercise, try to use the rvest::read_html method to load in the five
course syllabi directly from the course websites listed above.
doc1 <-"HS650: The Data Science and Predictive Analytics(DSPA) course
(offered as a massive open online course, MOOC, as well as a
traditional University of Michigan class) aims to build computational
abilities, inferential thinking, and practical skills for tackling core
data scientific challenges. It explores foundational concepts in
data management, processing, statistical computing, and dynamic
visualization using modern programming tools and agile
webservices. Concepts, ideas, and protocols are illustrated through
examples of real observational, simulated and research-derived
datasets. Some prior quantitative experience in programming,
calculus, statistics, mathematical models, or linear algebra will be
necessary. This open graduate course will provide a general overview
of the principles, concepts, techniques, tools and services for
managing, harmonizing, aggregating, preprocessing, modeling, analyzing
and interpreting large, multisource, incomplete, incongruent, and
heterogeneous data (Big Data). The focus will be to expose
students to common challenges related to handling Big Data and present
the enormous opportunities and power associated with our ability to
interrogate such complex datasets, extract useful information, derive
knowledge, and provide actionable forecasting. Biomedical, healthcare,
and social datasets will provide context for addressing specific
driving challenges. Students will learn about modern data analytic
techniques and develop skills for importing and exporting, cleaning
and fusing, modeling and visualizing, analyzing and synthesizing
The VCorpus object includes all the text and some meta-data (e.g., indexing)
about the entire text.
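A minimal sketch of the corpus construction implied by the output below, assuming the five syllabus strings doc1-doc5 defined above:

library(tm)
doc_list   <- c(doc1, doc2, doc3, doc4, doc5)
class(doc_list)
doc_corpus <- VCorpus(VectorSource(doc_list))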
## [1] "character"
doc_corpus
## <<VCorpus>>
## Metadata: corpus specific: 0, document level (indexed): 0
## Content: documents: 5 doc_corpus[[1]]
$content
The text itself contains upper case letters as well as lower case letters. The first
thing to do is to convert all characters to lower case.
doc_corpus<-tm_map(doc_corpus, tolower)
doc_corpus[[1]]
## [1] "hs650: the data science and predictive analytics (dspa)
course (offe red as a massive open online course, mooc, as well as
a traditional universi ty of michigan class) … community validation
of high-throughput analytic wor kflows will be emphasized
throughout the course."
20.1.4 Text Pre-processing
Remove Stopwords
stopwords("english")
## [1] "i" "me" "my" "myself" "we"
## [6] "our" "ours" "ourselves" "you" "your"
## [11] "yours" "yourself" "yourselves" "he" "him"
## [16] "his" "himself" "she" "her" "hers"
…
## [171] "so"    "than"    "too"    "very"

doc_corpus <- tm_map(doc_corpus, removeWords, stopwords("english"))
doc_corpus[[1]]
## [1] "hs650:  data science predictive analytics (dspa) course (offered
## massive open online course, mooc, well traditional university michigan
## class) aims build …, sharing community validation high-throughput analytic
## workflows will emphasized throughout course."
Now we notice the irrelevant punctuation in the text, which can be removed by
using a combination of tm_map() and removePunctuation() functions.
doc_corpus<-tm_map(doc_corpus, removePunctuation)
doc_corpus[[2]]
## [1] "bootcamp weeklong intensive bootcamp focused methods
techniques tool s services resources big healthcare biomedical data
analytics using opensour ce statistical computing software r
morning sessions 3 hrs … collaborative d esign implementation
sharing community validation highthroughput analytic wo rkflows
will emphasized throughout course"
The above tm_map commands changed the structure of our doc_corpus object.
We may apply PlainTextDocument function if we need to convert it back to the
original format.
doc_corpus<-tm_map(doc_corpus, PlainTextDocument)
Let’s inspect the first three documents. We notice that there are some words
ending with “ing”, “es”, or “s”.
doc_corpus[[1]]$content
## [1] "hs650 data science predictive analytics dspa course offered massive
## open online course mooc well traditional university michigan class aims
## build computational abilities inferential … validation highthroughput
## analytic workflows will emphasized throughout course"

doc_corpus[[2]]$content
## [1] "bootcamp weeklong intensive bootcamp focused methods techniques tools
## services resources big healthcare biomedical data analytics using opensource
## statistical computing software r morning sessions 3 … design implementation
## sharing community validation highthroughput analytic workflows will
## emphasized throughout course"

doc_corpus[[3]]$content
## [1] "hs 853 course covers number modern analytical methods advanced
## healthcare research specific focus will reviewing using innovative modeling
## computational analytic visualization …
If we have multiple terms that only differ in their endings (e.g., past, present,
present-perfect-continuous tense), the algorithm will treat them differently because
it does not understand language semantics, the way a human would. To make
things easier for the computer, we can delete these endings by “stemming”
documents. Remember to load the package SnowballC before using the function
stemDocument(). The earliest stemmer was written by Julie Beth Lovins in 1968 and had a great influence on all subsequent work. Currently, one of the most popular stemming approaches, proposed by Martin Porter, is used in stemDocument(); you can read more about the Porter algorithm online.
# install.packages("SnowballC")
library(SnowballC)
doc_corpus<-tm_map(doc_corpus, stemDocument)
doc_corpus[[1]]$content
## [1] "hs650 data scienc predict analyt dspa cours offer massiv
open onlin cours mooc well tradit univers michigan class aim build
comput abil inferent i think practic skill tackl core data scientif
… fuse model visual analyz sy nthes complex dataset collabor design
implement share communiti valid highth roughput analyt workflow
will emphas throughout cours"
This stemming process has to be done after the PlainTextDocument function
because stemDocument can only be applied to plain text.
It’s very useful to be able to tokenize text documents into n-grams, sequences of
words, e.g., a 2-gram represents two-word phrases that appear together in order.
This allows us to form bags of words and extract information about word ordering.
The bag of words model is a common way to represent documents in matrix form
based on their term frequencies (TFs). We can construct an n × t document-term matrix (DTM), where n is the number of documents and t is the number of unique terms. Each column in the DTM represents a unique term, and the (i, j)-th cell records how many times term j appears in document i.
The basic bag of words model is invariant to ordering of the words within a
document. Once we compute the DTM, we can use machine learning techniques to
interpret the derived signature information contained in the resulting matrices.
Now that the doc_corpus object is quite clean, we can make a document-term matrix (DTM) to explore all the terms in the five initial documents.
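A minimal sketch of this construction for the five syllabus documents; the exact call and any weighting options used in the original are assumptions:

doc_dtm <- DocumentTermMatrix(doc_corpus)
doc_dtm
inspect(doc_dtm[1:5, 1:10])   # the 5 documents by the first 10 terms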
## $statist
## epidemiolog     publish       studi      theori  understand       appli
##        0.95        0.95        0.95        0.95        0.95        0.83
##        test
##        0.80
Let’s explore some real datasets. First, we will import the 2011 USA Jobs
Ranking Dataset from SOCR data archive.
library(rvest)
wiki_url <- read_html("http://wiki.socr.umich.edu/index.php/SOCR_Data_2011_US_JobsRanking")
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content" class="mw-body-primary" role="main">\n\t<a id="top ...
## 4  Tabulates_analyzes_and_interprets_the_numeric_results_of_experiments_and_surveys
## 5  Plans_and_develops_computer_systems_for_businesses_and_scientific_institutions
## 6  Studies_the_physical_characteristics_motions_and_processes_of_the_earth's_atmosphere
Note that low indices represent jobs that in 2011 were highly desirable. Thus, in
2011, the most desirable job among the top 200 common jobs would be Software
Engineer. The aim of our case study is to explore the difference between the top
30 desirable jobs and the last 100 jobs in the list.
We will go through the same procedure as we did for the simple course syllabi
example. The documents we will be using include the Description column
(a vector) in the dataset.
jobCorpus<-VCorpus(VectorSource(job[, 10]))
Here we use a loop to substitute "_" with a blank space. This is because when we apply removePunctuation, all underscore characters will disappear and there will be no separation between terms. In this situation, gsub is the best choice to use; a sketch of this substitution is shown below.
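A minimal sketch of the underscore-to-space substitution; it uses tm_map() with a content_transformer() rather than an explicit loop, which is an equivalent, corpus-safe variant of the approach described above:

toSpace   <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
jobCorpus <- tm_map(jobCorpus, toSpace, "_")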
jobCorpus[[1]]$content
## [1] "research design develop maintain softwar system along hardwar develop
## medic scientif industri purpos"
Then, we can start to build the DTM and reassign the labels to the Docs.
dtm <- DocumentTermMatrix(jobCorpus)
dtm
## <<DocumentTermMatrix (documents: 200, terms: 846)>>
## Non-/sparse entries: 1818/167382
## Sparsity           : 99%
## Maximal term length: 15
## Weighting          : term frequency (tf)

dtm$dimnames$Docs <- as.character(1:200)
inspect(dtm[1:10, 1:10])
## <<DocumentTermMatrix (documents: 10, terms: 10)>>
## Non-/sparse entries: 2/98
## Sparsity           : 98%
## Maximal term length: 7
## Weighting          : term frequency (tf)
## Sample :
##     Terms
## Docs 16wheel abnorm access accid accord account accur achiev act activ
##   1        0      0      0     0      0       0     0      0   0     0
##   10       0      0      0     0      0       0     0      0   0     0
##   2        0      0      0     0      0       0     0      0   0     0
##   3        0      0      0     1      0       0     0      0   0     0
##   4        0      0      0     0      0       0     0      0   0     0
##   5        0      0      0     0      0       0     0      0   0     0
##   6        0      0      0     0      0       0     0      0   0     0
##   7        0      0      0     0      0       0     0      0   0     0
##   8        0      0      0     0      1       0     0      0   0     0
##   9        0      0      0     0      0       0     0      0   0     0
Let’s subset the dtm into the top 30 jobs and the bottom 100 jobs.
dtm_top30  <- dtm[1:30, ]
dtm_bot100 <- dtm[101:200, ]
dtm_top30
dtm_bot100
Now, instead of 846 terms, we only have 19 that appear in the top 30 job
descriptions (JDs) and 14 that appear in the bottom 100 JDs.
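The counts of 19 and 14 frequent terms refer to sparse-term-filtered versions of these matrices (dtms_top30 and dtms_bot100, used below). A minimal sketch of that filtering step, with an assumed sparsity threshold:

dtms_top30  <- removeSparseTerms(dtm_top30, 0.90)   # drop terms sparser than 90%
dtms_bot100 <- removeSparseTerms(dtm_bot100, 0.90)
dtms_top30; dtms_bot100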
Similar to what we did in Chap. 8, visualization of the terms as word clouds may be accomplished by combining the tm and wordcloud packages. First, we can count the term frequencies in the two document-term matrices (Fig. 20.2).
Fig. 20.2 Frequency plot of commonly occurring terms (bottom 100 jobs)
# Calculate the cumulative frequencies of words across documents and sort:
freq1 <- sort(colSums(as.matrix(dtms_top30)), decreasing=T)
freq1
##    develop     assist      natur      studi     analyz    concern   individu
##          6          5          5          5          4          4          4
##   industri     physic       plan       busi     inform   institut    problem
##          4          4          4          3          3          3          3
##   research   scientif     theori  treatment understand
##          3          3          3          3          3

freq2 <- sort(colSums(as.matrix(dtms_bot100)), decreasing=T)
freq2
##       oper     repair    perform     instal      build     prepar       busi
##         17         15         11          9          8          8          7
##   commerci  construct   industri     machin manufactur    product  transport
##          7          7          7          7          7          7          7
# Plot
wf = data.frame(term=names(freq2), occurrences=freq2)
library(ggplot2)
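A minimal sketch of the term-frequency bar plot shown in Fig. 20.2; the frequency cut-off and other aesthetics are illustrative choices:

ggplot(subset(wf, occurrences > 5), aes(x=reorder(term, -occurrences), y=occurrences)) +
  geom_bar(stat="identity") +
  theme(axis.text.x=element_text(angle=45, hjust=1)) +
  labs(x="term", y="occurrences")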
library(wordcloud)
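A minimal word-cloud sketch for the two term-frequency vectors; the frequency thresholds and color palette are illustrative:

library(RColorBrewer)
set.seed(123)
wordcloud(names(freq1), freq1, min.freq=2, colors=brewer.pal(6, "Dark2"))
wordcloud(names(freq2), freq2, min.freq=5, colors=brewer.pal(6, "Dark2"))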
It is apparent that the top 30 jobs focus more on research or discovery of new things, with frequent keywords like "study", "nature", and "analyze". The bottom 100 jobs are more focused on operating on existing objects, with frequent keywords like "operation", "repair", and "perform".
In Chap. 14, we talked about the ROC curve. We can use document term matrices
to build classifiers and use the area under the ROC curve (AUC) to evaluate
those classifiers. Assume that we want to predict whether a job ranks in the top
30, i.e., the most desired jobs. The first task would be to create an indicator of
high rank jobs (top 30). We can use the ifelse() function that we are already
familiar with.
job$highrank<-ifelse(job$Index<30, 1, 0)
Fig. 20.5 The area under the curve (AUC) measures the performance of the cross-validated
LASSO-regularized model of job-ranking against the magnitude of the regularization parameter
(bottom axis), and the efficacy of the model selection, i.e., number of non-trivial coefficients
(top axis). The vertical dash lines suggest an optimal range for the penalty term and the number
of coefficients, see Chap. 18
Next we load the glmnet package to help us build the model and draw the
corresponding graphs.
#install.packages("glmnet")
library(glmnet)
# x: the unweighted document-term matrix; y: the high-rank indicator
# (the opening arguments of this call were truncated in the source and follow the analogous fits below)
fit <- cv.glmnet(x = as.matrix(dtm), y = job[['highrank']],
                 family = 'binomial',
                 # lasso penalty
                 alpha = 1,
                 # interested in the area under ROC curve
                 type.measure = "auc",
                 # 10-fold cross-validation
                 nfolds = 10,
                 # high value is less accurate, but has faster training
                 thresh = 1e-3,
                 # again lower number of iterations for faster training
                 maxit = 1e3)
plot(fit)
print(paste("max AUC =", round(max(fit$cvm), 4)))
## [1] "max AUC = 0.7276"
Here, x is a matrix and y is the response variable. The last line of code reports the best AUC among all models. The resulting AUC ≈ 0.73 represents a relatively good prediction model for this small sample size.
20.3 TF-IDF
TF is the ratio of a term's occurrences in a document to the number of occurrences of the most frequent word within the same document. Symbolically,

TF(t, d) = f_d(t) / max_{w ∈ d} f_d(w).
The TF definition may allow high scores for irrelevant words that naturally show
up often in a long text, even after triaging common words in a prior preprocessing
step. The IDF attempts to rectify that. IDF represents the inverse of the share of
the documents in which the regarded term can be found. The lower the number of
documents containing the term, relative to the size of the corpus, the higher the
term factor.
IDF involves a logarithm function to temper the effective scoring penalty of showing up in only a few documents, which otherwise may be too extreme. Without the logarithm, the IDF for a term found in just one document would be twice the IDF for another term found in two documents. The ln() function rectifies this bias of ranking in favor of rare terms, even if the TF-factor may be high, since it is rather unlikely that a term's relevance is only high in one document and not in all others.
IDF(t, D) = ln( |D| / |{d ∈ D : t ∈ d}| ).
20.3.3 TF-IDF
Both TF and IDF yield high scores for highly relevant terms. TF relies on local information (a search over d), whereas IDF incorporates a more global perspective (a search over D). The product TF × IDF gives the classical TF-IDF formula. However, alternative expressions may be formulated to get other univariate scores using alternative weights for TF and IDF, for example:
TF-IDF′(t, d, D) = IDF(t, D)/|D| + TF-IDF(t, d, D), where TF-IDF(t, d, D) = TF(t, d) × IDF(t, D).
Let’s make another DTM with TF-IDF weights and compare the differences
between the unweighted and weighted DTM.
dtm.tfidf <- DocumentTermMatrix(jobCorpus, control = list(weighting=weightTfIdf))
dtm.tfidf
## <<DocumentTermMatrix (documents: 200, terms: 846)>>
## Non-/sparse entries: 1818/167382
## Sparsity           : 99%
## Maximal term length: 15
## Weighting          : term frequency - inverse document frequency (normalized) (tf-idf)
## Sample :
##     Terms
## Docs 16wheel abnorm access     accid    accord account accur achiev act
##   1        0      0      0 0.0000000 0.0000000       0     0      0   0
##   2        0      0      0 0.0000000 0.0000000       0     0      0   0
##   3        0      0      0 0.5536547 0.0000000       0     0      0   0
##   4        0      0      0 0.0000000 0.0000000       0     0      0   0
##   5        0      0      0 0.0000000 0.0000000       0     0      0   0
##   6        0      0      0 0.0000000 0.0000000       0     0      0   0
##   7        0      0      0 0.0000000 0.0000000       0     0      0   0
##   8        0      0      0 0.0000000 0.4321928       0     0      0   0
##   9        0      0      0 0.0000000 0.0000000       0     0      0   0
##     Terms
## Docs activ
##   1      0
##   2      0
##   3      0
##   4      0
##   5      0
##   6      0
##   7      0
##   8      0
##   9      0

inspect(dtm[1:9, 1:10])
## <<DocumentTermMatrix (documents: 9, terms: 10)>>
## Non-/sparse entries: 2/88
## Sparsity           : 98%
## Maximal term length: 7
## Weighting          : term frequency (tf)
## Sample :
##     Terms
## Docs 16wheel abnorm access accid accord account accur achiev act activ
##   1        0      0      0     0      0       0     0      0   0     0
##   2        0      0      0     0      0       0     0      0   0     0
##   3        0      0      0     1      0       0     0      0   0     0
##   4        0      0      0     0      0       0     0      0   0     0
##   5        0      0      0     0      0       0     0      0   0     0
##   6        0      0      0     0      0       0     0      0   0     0
##   7        0      0      0     0      0       0     0      0   0     0
##   8        0      0      0     0      1       0     0      0   0     0
##   9        0      0      0     0      0       0     0      0   0     0
An inspection of the two different DTMs suggests that TF-IDF is not only counting the frequency but also assigning different weights to each term according to its importance. Next, we are going to fit another model with this new DTM (dtm.tfidf) (Fig. 20.6).
set.seed(2)
fit1 <- cv.glmnet(x = as.matrix(dtm.tfidf), y = job[['highrank']],
                  family = 'binomial',
                  # lasso penalty
                  alpha = 1,
                  # interested in the area under ROC curve
                  type.measure = "auc",
                  # 10-fold cross-validation
                  nfolds = 10,
                  # high value is less accurate, but has faster training
                  thresh = 1e-3,
                  # again lower number of iterations for faster training
                  maxit = 1e3)
plot(fit1)
This output is about the same as the previous jobs ranking prediction classifier
(based on the unweighted DTM). Due to random sampling, each run of the
protocols may generate slightly different results. The idea behind using TF-IDF is
that one would expect to get more unbiased estimates of word importance. If the
document includes stopwords, like “the” or “one”, the DTM may distort the
results, but TF-IDF may resolve some of these problems.
dim(dtm_jobsTrain); dim(dtm_testJDs)
## [1] 200 2675
## [1] 3 2675
set.seed(2)
fit1 <- cv.glmnet(x = as.matrix(dtm_jobsTrain), y = job[['highrank']],
                  family = 'binomial',
                  # lasso penalty
                  alpha = 1,
                  # interested in the area under ROC curve
                  type.measure = "auc",
                  # 10-fold cross-validation
                  nfolds = 10,
                  # high value is less accurate, but has faster training
                  thresh = 1e-3,
                  # again lower number of iterations for faster training
                  maxit = 1e3)
print(paste("max AUC =", round(max(fit1$cvm), 4)))
## [1] "max AUC = 0.7934"
Note that we somewhat improved the AUC to about 0.79. Below, we will assess the JD predictive model using the three out-of-bag job descriptions (Fig. 20.7).
• On the training data, the predicted probabilities rapidly decrease with the
indexing of the jobs, corresponding to the overall job ranking (highly ranked/
desired jobs are listed on the top).
• On the three testing job description data (accountant, attorney, and machinist),
there is a clear ranking difference between the machinist and the other two
professions.
Also see the discussion in Chap. 18 about the different types of predictions that
can be generated as outputs of cv.glmnet regularized forecasting methods.
20.4 Cosine Similarity
similarity = cos(θ) = (A · B) / (||A||_2 ||B||_2),
where θ represents the angle between the pair of vectors A and B in the Euclidean
space spanned by the DTM matrix (Fig. 20.8).
cos_dist = function(mat){
  numer  = tcrossprod(mat)                   # pairwise dot products between rows
  denom1 = sqrt(apply(mat, 1, crossprod))    # Euclidean norms of the rows
  denom2 = sqrt(apply(mat, 1, crossprod))
  1 - numer / outer(denom1, denom2)          # cosine distance = 1 - cosine similarity
}
dist_cos = cos_dist(as.matrix(dtm))
Fig. 20.8 AUC-based performance of the cross-validated LASSO-regularized model of job-ranking based on cosine-similarity distance (dist_cos), see Figs. 20.5, 20.6, and 20.7

set.seed(2000)
fit_cos <- cv.glmnet(x = dist_cos, y = job[['highrank']],
                     family = 'binomial',
                     # lasso penalty
                     alpha = 1,
                     # interested in the area under ROC curve
                     type.measure = "auc",
                     # 10-fold cross-validation
                     nfolds = 10,
                     # high value is less accurate, but has faster training
                     thresh = 1e-3,
                     # again lower number of iterations for faster training
                     maxit = 1e3)
plot(fit_cos)
print(paste("max AUC =", round(max(fit_cos$cvm), 4)))
## [1] "max AUC = 0.8065"
The AUC now is greater than 0.8, which is a pretty good result; even better
than what we obtained from DTM or TF-IDF. This suggests that our machine
“understanding” of the textual content, i.e., the natural language processing, leads
to a more acceptable content classifier.
20.5 Sentiment Analysis

The sentiment outcome for each movie review is binary:

Y = Sentiment = 0 (negative) or 1 (positive).
The data.table package will also be used for some data manipulation. Let’s start
with splitting the data into training and testing sets.
# install.packages("text2vec"); install.packages("data.table")
library(text2vec)
library(data.table)
# the movie_review dataset ships with the text2vec package
data("movie_review", package = "text2vec")
dim(movie_review)
## [1] 5000    3
colnames(movie_review)
## [1] "id"        "sentiment" "review"

# Generate 80-20% training-testing split of the reviews
all_ids = movie_review$id
set.seed(1234)
train_ids = sample(all_ids, 5000*0.8)
test_ids  = setdiff(all_ids, train_ids)
train = movie_review[train_ids, ]
test  = movie_review[test_ids, ]
Next, we will vectorize the reviews by creating term-to-termID mappings. Note that terms may include arbitrary n-grams, not just single words. The set of reviews will be represented as a sparse matrix, with rows and columns corresponding to reviews/reviewers and terms, respectively. This vectorization may be accomplished in several alternative ways, e.g., by using the corpus vocabulary, feature hashing, etc. The vocabulary-based DTM, created by the create_vocabulary() function, relies on all unique terms from all reviews, where each term has a unique ID. In this example, we will create the review vocabulary using an iterator construct abstracting the input details and enabling in-memory processing of the (training) data by chunks.
# define the text preprocessing
# either a simple (tolower case) function
preproc_fun = tolower

t1 = Sys.time()
print(difftime(t1, t0, units = 'sec'))
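The iterator, vocabulary, and DTM construction steps referenced above and below (iter_train, iter_test, dtm_train, dtm_test) were truncated in the source. A minimal sketch, with the tokenizer choice as an assumption:

tok_fun = word_tokenizer
iter_train = itoken(train$review, preprocessor = preproc_fun, tokenizer = tok_fun,
                    ids = train$id, progressbar = FALSE)
iter_test  = itoken(test$review,  preprocessor = preproc_fun, tokenizer = tok_fun,
                    ids = test$id,  progressbar = FALSE)
reviewVocab      = create_vocabulary(iter_train)
reviewVectorizer = vocab_vectorizer(reviewVocab)
t0 = Sys.time()
dtm_train = create_dtm(iter_train, reviewVectorizer)
dtm_test  = create_dtm(iter_test,  reviewVectorizer)
t1 = Sys.time(); print(difftime(t1, t0, units = 'sec'))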
Earlier, we saw that we can also prune the vocabulary and perhaps improve
prediction performance, e.g., by removing non-salient terms like stopwords and by
using n-grams instead of single words (Fig. 20.10).
reviewVocab = create_vocabulary(iter_train, stopwords=tm::stopwords("english"),
                                ngram = c(1L, 2L))
prunedReviewVocab = prune_vocabulary(reviewVocab,
                                     term_count_min = 10,
                                     doc_proportion_max = 0.5,
                                     doc_proportion_min = 0.001)
prunedVectorizer = vocab_vectorizer(prunedReviewVocab)

t0 = Sys.time()
dtm_train = create_dtm(iter_train, prunedVectorizer)
dtm_test  = create_dtm(iter_test, prunedVectorizer)
t1 = Sys.time()
print(difftime(t1, t0, units = 'sec'))
## Time difference of 3.778152 secs
Next, let’s refit the model and report its performance. Would there be an
improvement in the prediction accuracy?
glmnet_prunedClassifier = cv.glmnet(x = dtm_train, y = train[['sentiment']],
                  family = "binomial",
                  # LASSO L1 penalty
                  alpha = 1,
                  # interested in the area under ROC curve or MSE
                  type.measure = "auc",
                  # n-fold internal (training data) stats cross-validation
                  nfolds = nFolds,
                  # threshold: high value is less accurate / faster training
                  thresh = 1e-2,
                  # again lower number of iterations for faster training
                  maxit = 1e3)
Use these R Data Mining Twitter data to apply NLP/TM methods and investigate
the Twitter corpus.
• Construct a VCorpus object
• Clean the VCorpus object
Use Head and Neck Cancer Medication Data to apply NLP/TM methods and
investigate the corpus. You have already seen this data in Chap. 8; now we can go
a step further.
• Use MEDICATION_SUMMARY to construct a VCorpus object.
• Clean the VCorpus object.
• Build the document term matrix (DTM).
• Add a column to indicate early and later stage according to seer_stage (refer to
Chap. 8).
• Use the DTM to construct a word cloud for early stage, later stage and whole.
• Interpret according to the word cloud.
• Compute the TF-IDF (Term Frequency - Inverse Document Frequency).
• Apply LASSO on the unweighted and weighted DTM respectively and
evaluate the results according to AUC.
• Try cosine similarity transformation, apply LASSO and compare the result.
• Use other measures such as “class” for cv.glmnet().
• Does it appear that these classifiers understand well human language?
References
Kumar, E. (2011) Natural Language Processing, I. K. International Pvt Ltd, ISBN 9380578776,
9789380578774.
Kao, A, Poteet, SR (eds.) (2007) Natural Language Processing and Text Mining, Springer
Science & Business Media, ISBN 1846287545, 9781846287541.
https://github.com/kbenoit/spacyr
https://tartarus.org/martin/PorterStemmer/
Chapter 21
Prediction and Internal Statistical Cross
Validation
We should start by reviewing Chap. 14 (Model Performance Assessment). Cross-validation is a statistical approach for validating predictive methods, classification models, and clustering techniques. It assesses the reliability and
classification models, and clustering techniques. It assesses the reliability and
stability of the results of the corresponding statistical analyses (e.g., predictions,
classifications, forecasts) based on independent datasets. For prediction of trend,
association, clustering, and classification, a model is usually trained on one
dataset (training data) and subsequently tested on new data (testing or validation
data). Statistical internal cross-validation uses iterative bootstrapping to define
test datasets, evaluates the model predictive performance, and assesses its power
to avoid overfitting. Overfitting is the process of computing a predictive or
classification model that describes random error, i.e., fits to the noise
components of the observations, instead of the actual underlying relationships and
salient features in the data.
In this Chapter, we will use the Google Flu Trends, Autism, and Parkinson’s
disease case-studies to (1) illustrate exhaustive and non-exhaustive internal
statistical cross-validation; (2) explore alternative forecasting types using linear
and nonlinear predictions; and (3) compare complementary predictor functions.
• Confusion matrices reporting accuracy, FP, FN, PPV, NPV, LOR and other metrics may be used to assess predictions of dichotomous (binary) or polytomous outcomes.
• R², correlations (between predicted and observed outcomes), and RMSE measures may be used to quantify the performance of various supervised forecasting methods on continuous features.
21.2 Overfitting
Our predictions are most accurate if we can model as much of the signal and as
little of the noise as possible. Note that in these terms, R2 is a poor metric to
identify predictive power – it measures how much of the signal and the noise is
explained by our model. In practice, it’s hard to always identify what’s signal and
what’s noise. This is why practical applications tend to favor simpler models,
since the more complicated a model is, the easier it is to overfit the noise
component of the observed information.
21.3 Internal Statistical Cross-Validation is an Iterative Process
Consider n observations with k predictors each, organized in a design matrix

X = ( x_{1,1} … x_{1,k}
        ⋮        ⋮
      x_{n,1} … x_{n,k} ).

Using least squares to estimate the linear function parameters (effect-sizes) β_1, …, β_k allows us to compute a hyperplane y = a + xβ that best fits the observed data (x_i, y_i), 1 ≤ i ≤ n. In matrix form, the model is

( y_1 )   ( a_1 )   ( x_{1,1} … x_{1,k} ) ( β_1 )
(  ⋮  ) = (  ⋮  ) + (   ⋮          ⋮    ) (  ⋮  ).
( y_n )   ( a_n )   ( x_{n,1} … x_{n,k} ) ( β_k )

One measure to evaluate the model fit is the mean squared error (MSE). The MSE for a given value of the parameters a and β on the observed training data (x_i, y_i), 1 ≤ i ≤ n, is

MSE = (1/n) ∑_{i=1}^{n} ( y_i − (a + β_1 x_{i,1} + … + β_k x_{i,k}) )^2,

and its square root defines the corresponding root mean squared error (RMSE).
In the linear model case, the expected value of the MSE (over the distribution of training sets) for the training set is (n − k − 1)/(n + k + 1) × E, where E is the expected value of the MSE for the testing/validation data. Since this factor is less than 1, fitting a model and computing the MSE on the training set may produce an over-optimistic evaluation (a smaller RMSE) of how well the model may fit another dataset. This bias represents an in-sample estimate of the fit, whereas we are interested in the cross-validation estimate as an out-of-sample estimate. In the linear regression model, cross-validation may not be as useful, since we can compute the exact correction factor (n − k − 1)/(n + k + 1) to obtain an estimate of the (unknown) exact expected out-of-sample fit using the (known) in-sample MSE (under)estimate. However, even in this situation, cross-validation remains useful, as it can be used to select an optimal regularized cost function.
In most other modeling procedures (e.g., logistic regression), there are no simple general closed-form expressions (formulas) to adjust the cross-validation error estimate of the known in-sample fit to estimate the unknown out-of-sample error rate. Cross-validation is a general strategy to predict the performance of a model on a validation set using stochastic computation instead of obtaining experimental, theoretical, mathematical, or closed-form analytic error estimates.
21.5 Case-Studies
Adaptive Boosting (AdaBoost)
head(ppmi_data)
# View(ppmi_data)
Obtain a model-free predictive analytics result, e.g., an AdaBoost classification, and report the results.
# Model-free analysis, classification
# install.packages("crossval")
# install.packages("ada")
# library("crossval")
require(crossval)
require(ada)
#set up adaboosting prediction function
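A minimal sketch of such an AdaBoost prediction function; the crossval package expects a function with the signature function(train.x, train.y, test.x, test.y, negative, ...) that returns fold-level counts, so this is one plausible form of the my.ada function used in the cross-validation call below:

my.ada <- function(train.x, train.y, test.x, test.y, negative, formula){
  ada.fit   <- ada(train.x, train.y)                 # boosted classification trees
  predict.y <- predict(ada.fit, test.x)              # class predictions for the test fold
  # count TP, FP, TN, FN for this fold
  out <- confusionMatrix(test.y, predict.y, negative = negative)
  return(out)
}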
nrow(data.1$X); ncol(data.1$X)
nrow(balancedData); ncol(balancedData)
nrow(input); ncol(input)

colnames(balancedData) <- c(colnames(input), "PD")
Fig. 21.2 Quantile-quantile plot of the original and rebalanced data distributions for one feature
for (i in 1:ncol(balancedData))
{
  test.results.raw[i] <- my.t.test.p.value(input[, i], balancedData[, i])
  test.results.bin[i] <- ifelse(test.results.raw[i] > alpha.0.05, 1, 0)
  # binarize the p-value (0=significant, 1=otherwise)
  print(c("i=", i, "var=", colnames(balancedData[i]),
          "t-test_raw_p_value=", test.results.raw[i]))
}

for (i in 1:(ncol(balancedData)-1))
{
  test.results.raw[i] <- wilcox.test(input[, i], balancedData[, i])$p.value
  test.results.bin[i] <- ifelse(test.results.raw[i] > alpha.0.05, 1, 0)
  print(c("i=", i, "Wilcoxon-test=", test.results.raw[i]))
}
print(c("Wilcoxon test results: ", test.results.bin))
# test.results.corr <- stats::p.adjust(test.results.raw, method =
"fdr", n = length(test.results.raw))
# where methods are "holm", "hochberg", "hommel", "bonferroni", "BH",
"BY", "fdr", "none")
# plot(test.results.raw, test.results.corr)
The next step will be the actual cross-validation.
# using raw data:
X <- as.data.frame(input); Y <- output
neg <- "1" # "Control" == "1"
set.seed(115)
cv.out <- crossval::crossval(my.ada, X, Y, K = 5, B = 1, negative =
neg)
# the label of a negative "null" sample (default:
"control") out <- diagnosticErrors(cv.out$stat)
print(cv.out$stat)
## FP TP TN FN
## 0.6 109.6 97.0 0.2
print(out)
These data contain the effect of two soporific drugs to increase hours of sleep
(treatment-compared design) on 10 patients. The data are available in R by default
(sleep{datasets}).
First, load the data and report some graphs and summaries (Fig. 21.3).
21.5 Case-Studies 711
Fig. 21.3 Box-and whisker plots of the hours of sleep for the two cohorts in the sleep dataset
data(sleep); str(sleep)
X = as.matrix(sleep[, 1, drop=FALSE]) #
increase in hours of sleep,
# drop is logical, if TRUE the result is coerced to the lowest
possible dimension.
# The default is to drop if only one column is left, but not to drop if
only one row is left.
Y = sleep[, 2] # drug given plot(X ~ Y)
levels(Y) # "1" "2"
dim(X) # 20 1
# install.packages("crossval")
library("crossval")
Execute the above code and interpret the diagnostic results measuring the
performance of the LDA prediction.
712 21 Prediction and Internal Statistical Cross Validation
data("attitude")
y = attitude[, 1] # rating variable x = attitude[,
-1] # date frame with the remaining variables
is.factor(y)
summary( lm(y ~ . , data=x) ) # R-squared: 0.7326
# set up lm prediction function
We will demonstrate model-based analytics using lm and lda, and then will
validate the forecasting using CV.
predfun.lm = function(train.x, train.y, test.x, test.y)
{ lm.fit = lm(train.y ~ . , data=train.x)
ynew = predict(lm.fit, test.x )
# compute squared error risk (MSE)
out = mean( (ynew - test.y)^2)
# note that, in general, when fitting linear model to continuous
outcome variable (Y),
# we can't use the out<-confusionMatrix(test.y, ynew, negative=n
egative), as it requires a binary outcome
# this is why we use the MSE as an estimate of the discrepancy b
etween observed & predicted values
return(out)
}
# require("MASS")
#predfun.lda = function(train.x, train.y, test.x, test.y, negative)
#{ lda.fit = lda(train.x,
grouping=train.y) # ynew =
predict(lda.fit, test.x)$class # count
TP, FP etc.
# out = confusionMatrix(test.y, ynew, negative=negative)
#return( out )
#}
Let’s go back to the more elaborate PD data and start by loading and
preprocessing the derived-PPMI data.
# ppmi_data <-
read.csv("https://umich.instructure.com/files/330400/download?
download_frd=1", header=TRUE)
# ppmi_data$ResearchGroup <- ifelse(ppmi_data$ResearchGroup ==
"Control", "C ontrol", "Patient")
# attach(ppmi_data); head(ppmi_data)
# install.packages("crossval")
# library("crossval")
# ppmi_data$PD <- ifelse(ppmi_data$ResearchGroup=="Control", 1, 0)
# input <- ppmi_data[ , -which(names(ppmi_data) %in% c("ResearchGroup",
"PD", "X", "FID_IID"))]
# output <- as.factor(ppmi_data$PD)
Fig. 21.4 Residual plots provide exploratory analytics of the model quality
set.seed(12345)
# cv.out.lm = crossval::crossval(predfun.lm, as.data.frame(X),
as.numeric(Y)
, K=5, B=20)
#cv.out.lm$stat;
#cv.out.lm;
#diagnosticErrors(cv.out.lm$stat)
21.6 Summary of CV output
The cross-validation (CV) output object includes the following three components:
• stat.cv: Vector of statistics returned by predfun for each cross validation run. •
stat: Mean statistic returned by predfun averaged over all cross validation runs.
• stat.se: Variability measuring the corresponding standard error.
We have already seen a number of predict() functions, e.g., Chap. 18. Below, we
will add to the collection of predictive analytics and forecasting functions.
21.7 Alternative Predictor Functions 715
We already saw the logit model in Chap. 18. Now, we will demonstrate a
logitpredictor function by applying it to the PD dataset.
# ppmi_data <-
read.csv("https://umich.instructure.com/files/330400/download?
download_frd=1", header=TRUE)
# ppmi_data$ResearchGroup <- ifelse(ppmi_data$ResearchGroup ==
"Control", "Control", "Patient")
# install.packages("crossval"); library("crossval")
# ppmi_data$PD <- ifelse(ppmi_data$ResearchGroup=="Control", 1, 0)
cv.out.logit = crossval::crossval(predfun.logit,
as.data.frame(X), as.numeric(Y), K=5, B=2, neg="1",
verbose=FALSE) cv.out.logit$stat.cv
## FP TP TN FN
## B1.F1 1 50 31 2
## B1.F2 0 60 19 6
## B1.F3 2 55 19 8
## B1.F4 3 58 23 0
716 21 Prediction and Internal Statistical Cross Validation
## B1.F5 3 60 21 1
## B2.F1 2 56 22 4
## B2.F2 0 57 23 5
## B2.F3 3 60 20 1
## B2.F4 1 58 23 2 ## B2.F5 1
54 27 3
diagnosticErrors(cv.out.logit$st
at)
## acc sens spec ppv npv lor
## 0.9431280 0.9466667 0.9344262 0.9726027 0.8769231 5.5331424
Caution: Note that if you forget to exponentiate the values of the predicted logistic
model (see ynew2 in predict.logit), you will get nonsense results, e.g., all cases
may be predicted to be in one class, trivial sensitivity, or incorrect NPP.
In Chaps. 8 and 21, we discussed the linear and quadratic discriminant analysis
models. Let’s now introduce a predfun.qda() function.
predfun.qda = function(train.x, train.y, test.x, test.y, negative)
{ require("MASS") # for lda function qda.fit =
qda(train.x, grouping=train.y) ynew = predict(qda.fit,
test.x)$class out.qda = confusionMatrix(test.y, ynew,
negative=negative) return( out.qda )
}
cv.out.qda = crossval::crossval(predfun.qda,
as.data.frame(input.short), as.factor(Y), K=5, B=20, neg="1")
e"
))]
X = as.matrix(input.short2)
cv.out.qda= crossval::crossval(predfun.qda, as.data.frame(X),
as.numeric(Y), K=5, B=2, neg="1")
It makes sense to contrast the QDA and GLM/Logit predictions.
diagnosticErrors(cv.out.qda$stat); diagnosticErrors(cv.out.logit$stat)
Clearly, both the QDA and Logit model predictions are quite similar and reliable.
¼ 1 P Xð Þ Pl¼0 P Xð j Y ¼ lÞP Yð ¼ lÞ
1 ð12ðXμkÞTΣk 1ðXμkÞÞ
P Xð j Y ¼ kÞ ¼ 1 e :
ð2πÞnjΣkj2
This model can be used to classify data by using the training data to estimate:
718 21 Prediction and Internal Statistical Cross Validation
(1) The class prior probabilities P(Y ¼ k) by counting the proportion of observed
instances of class k,
(2) The class means μk by computing the empirical sample class means, and
(3) The covariance matrices by computing either the empirical sample class
covariance matrices, or by using a regularized estimator, e.g., LASSO).
In the linear case (LDA), the Gaussians for each class are assumed to share the
same covariance matrix: Σk ¼ Σ for each class k. This leads to linear decision
surfaces separating different classes. This is clear from comparing the log-
probability ratios of a pair of 2 classes (k and l):
LOR ¼ logP Y ð ¼ k j X Þ
P Y ð ¼ lj X Þ , (the LOR ¼ 0 , the two probabilities are
LOR ¼ logP Y ð ¼ k j XÞ
¼ ðμk μlÞTΣ1ðμk μlÞ ¼ 1 μkTΣ1μk μlTΣ1μlÞ.
P Yð ¼ l j XÞ 2
But, in the more general, quadratic case of QDA, there are no assumptions on the
covariance matrices Σk of the Gaussians, leading to more flecible quadratic
decision surfaces separating the classes.
We already saw Artificial Neural Networks (NNs) in Chap. 11. Applying NNs is
not straightforward. We have to create a design matrix with an indicator column
for the response feature. In addition, we need to write a predict function to
translate the output of neuralnet() into analytical forecasts.
720 21 Prediction and Internal Statistical Cross Validation
# predict nn library("neuralnet")
pred = function(nn, dat) { yhat =
compute(nn, dat)$net.result yhat =
apply(yhat, 1, which.max)-1
return(yhat)
}
crossval::diagnosticErrors(cv.out.nn$stat)
21.7.5 SVM
In Chap. 11, we also saw SVM classification. Let’s try cross-validation using
Linear and Gaussian (radial) kernel SVM. We may expect that linear SVM would
achieve a similar result to Gaussian, or even better than Gaussian SVM, since this
dataset has a large k (# features) compared with n (# cases), which we explored in
detail in Chap. 11.
library("e1071")
my.svm <- function (train.x, train.y, test.x,
test.y,method,cost=1,gamma=1/ncol(dx_norm),coef0=0,degree=3)
{ svm_l.fit <- svm(x = train.x, y=as.factor(train.y),kernel =
method) predict.y <- predict(svm_l.fit, test.x) out <-
crossval::confusionMatrix(test.y, predict.y,negative = 0)
return (out)
}
Indeed, both types of kernels yield good quality predictors according to the
assessment metrics reported by the diagnosticErrors() method.
21.7.6 k-Nearest Neighbors Algorithm (k-NN)
• In k-NN regression, the output is the property value for the object representing
the average of the values of its k nearest neighbors.
Let’s now build the corresponding predfun.knn() method.
# X = as.matrix(input) # Predictor variables X =
as.matrix(input.short2)
# Y = as.matrix(output) # Outcome
# KNN (k-nearest neighbors)
library("class")
# knn.fit.test <- knn(X, X, cl = Y, k=3, prob=F);
predict(as.matrix(knn.fit.
test), X)$class
# table(knn.fit.test, Y); confusionMatrix(Y, knn.fit.test,
negative="1")
# This can be used for polytomous variable (multiple classes)
# TESTING DATA
input.test <- input[-input.train.ind, ]
output.test <- as.matrix(output)[-input.train.ind, ]
Then, we can fit the k-NN model and report the results.
library("class") knn_model <- knn(train= input.train, input.test,
cl=as.factor(output.train), k=2)
#plot(knn_model)
summary(knn_model)
attributes(knn_model)
# cross-validation
knn_model.cv <- knn.cv(train= input.train, cl=as.factor(output.train),
k=2) summary(knn_model.cv)
21.7 Alternative Predictor Functions 723
In Chap. 13, we showed that k-MC aims to partition n observations into k clusters,
where each observation belongs to the cluster with the nearest mean, which acts as
a prototype of a cluster. The k-MC partitions the data space into Voronoi cells. In
general, there is no computationally tractable solution for this, i.e., the problem is
NP-hard. However, there are efficient algorithms that converge quickly to local
optima, e.g., the expectation-maximization algorithm for mixtures of Gaussian
distributions via an iterative refinement approach (Figs. 21.5, 21.6 and 21.7).
kmeans_model <- kmeans(input.train, 2)
layout(matrix(1, 1))
# tiff("C:/Users/User/Desktop/test.tiff", width = 10, height = 10,
units = ' in', res = 300)
fpc::plotcluster(input.train, output.train, col = kmeans_model$cluster)
cluster::clusplot(input.train, kmeans_model$cluster, color=TRUE,
shade=TRUE, labels=2, lines=0)
Fig. 21.5 k-Means clustering plot () of the Parkinson’s disease data (PPMI)
724 21 Prediction and Internal Statistical Cross Validation
par(mfrow=c(10,10))
# the next figure is very large and will not render in RStudio, you may
need to save it as PDF file!
# pdf("C:/Users/User/Desktop/test.pdf", width = 50, height = 50)
# with(ppmi_data[,1:10], pairs(input.train[,1:10], col=c(1:2)
[kmeans_model$c luster])) # dev.off()
with(ppmi_data[,1:10], pairs(input.train[,1:10], col=c(1:2)
[kmeans_model$cluster]))
21.7 Alternative Predictor Functions 725
Fig. 21.7 Pair plots of the two clustering lables along the first 10 PPMI features
## L_insular_cortex_AvgMeanCurvature L_insular_cortex_ComputeArea
## 2 0.1071299082 2635.580514
## 2 0.1071299082 2635.580514
## 1 0.2221893533
1134.578902 ## 2 0.1071299082
2635.580514 ## 2 0.1071299082
2635.580514
## 2 0.1071299082 2635.580514
## L_insular_cortex_Volume
L_insular_cortex_ShapeIndex ## 2
7969.485443 0.3250065829 ## 2
7969.485443 0.3250065829
## 1 2111.385018 0.2788562513
## 2 7969.485443
0.3250065829 ## 2 7969.485443
0.3250065829 ## 2 7969.485443
0.3250065829
… resid.kmeans <- (input.train -
fitted(kmeans_model))
## [,1] [,2]
## betweenss 15462062254
15462062254 ## tot.withinss
12249286905 12249286905 ## totss
27711349159 27711349159
# validation
stopifnot(all.equal(kmeans_model$totss, ss(input.train)),
all.equal(kmeans_model$tot.withinss, ss(resid.kmeans)),
## these three are the same:
all.equal(kmeans_model$betweenss, ss(fitted.kmeans)),
all.equal(kmeans_model$betweenss, kmeans_model$totss -
kmeans_model$tot.withinss),
## and hence also
all.equal(ss(input.train), ss(fitted.kmeans) +
ss(resid.kmeans))
)
# kmeans(input.train, 1)$withinss
# trivial one-cluster, (its W.SS == ss(input.train))
clust_kmeans2 = kmeans(scale(X),
center=X[1:2,],iter.max=100, algorithm='Lloyd')
We may get empty clusters, instead of two clusters, when we randomly select
two points as the initial centers. The way to solve this problem is using k-means+
+.
726 21 Prediction and Internal Statistical Cross Validation
# k++ initialize
kpp_init = function(dat, K)
{ x = as.matrix(dat) n =
nrow(x)
# Randomly choose a first center centers
= matrix(NA, nrow=K, ncol=ncol(x))
centers[1,] = as.matrix(x[sample(1:n,
1),]) for (k in 2:K) {
# Calculate dist^2 to closest center for each
point dists = matrix(NA, nrow=n, ncol=k-1) for
(j in 1:(k-1)) { temp = sweep(x, 2,
centers[j,], '-') dists[,j] = rowSums(temp^2)
}
dists = rowMeans(dists)
# Draw next center with probability proportional to dist^2
cumdists = cumsum(dists)
prop = runif(1, min=0, max=cumdists[n])
centers[k,] = as.matrix(x[min(which(cumdists > prop)),])
}
return(centers)
}
clust_kmeans2_plus = kmeans(scale(X), kpp_init(scale(X), 2),
iter.max=100, a lgorithm='Lloyd')
Now let’s evaluate the model. The first step is to justify the selection of k¼2.
We use the method silhouette() in package cluster. Recall from Chap. 14 that the
silhouette value is between 1 and 1. Negative silhouette values represent
“misclustered” cases (Fig. 21.8).
Fig. 21.8 Silhouette plot of the 2-class k-means clustering of the Parkonson’s disease data
clust_k2 = clust_kmeans2_plus$cluster
require(cluster)
21.7 Alternative Predictor Functions 727
summary(sil_k2)
## Silhouette of 100 units in 2 clusters from silhouette.default(x =
clust_k 2[subset_int], dist = dis) :
## Cluster sizes and average silhouette widths:
## 48 52
## 0.1895633766 0.1018642857
## Individual silhouette
widths:
## Min. 1st Qu. Median Mean 3rd Qu.
Max. ## -0.06886907 0.06533312 0.14169240 0.14395980 0.22658680
0.33585520
mean(sil_k2<0) ##
[1] 0.01666666667
The result is pretty good. Only a very small number of samples are
“misclustered” (having negative silhouette values). Furthermore, you can observe
that when k¼3 or k¼4, the overall silhouette decreases, which indicates suboptimal
clustering.
dis = dist(as.data.frame(scale(X))) clust_kmeans3_plus =
kmeans(scale(X), kpp_init(scale(X), 3), iter.max=100, a
lgorithm='Lloyd')
summary(silhouette(clust_kmeans3_plus$cluster,dis))
## Silhouette of 422 units in 3 clusters from
silhouette.default(x = clust_kmeans3_plus$cluster, dist =
dis) : ## Cluster sizes and average silhouette widths:
## 139 157
126 ## 0.08356111542 0.19458813829
0.17237138090
## Individual silhouette widths:
## Min. 1st Qu. Median Mean 3rd Qu.
Max. ## -0.06355399 0.08376430 0.16639550 0.15138420 0.21855670
0.33107050
Fig. 21.9 Multi-dimensional scalling plot (2D projection) of the k-means clustering depicting the
agreement between testing data labels (glyph shapes) and the predicted class lables (glyph
colors)
L ¼ I D12SD12
be labeled as part of S2. This approach may be used iteratively for hierarchical
clustering by repeatedly partitioning the subsets.
The specc method in the kernlab package implements a spectral clustering
algorithm where the data-clustering is performed by embedding the data into the
subspace of the eigenvectors of an affinity matrix.
# install.packages("kernlab")
library("kernlab")
Spirals Data
Income Data
Fig. 21.11 Pair plots of the two-class spectral clustering of the income dataset
data(income) num_clusters <- 2 data_sc <-
specc(income, centers= num_clusters)
data_sc
## Spectral Clustering object of class "specc"
##
## Cluster memberships:
## ## 2 1 2 2 2 1 2 1 2 1
1 1 2 1
##
## String kernel function. Type = spectrum
## Hyperparameters : sub-sequence/string length = 4
## Normalized
##
## Cluster size:
## [1] 7 7
centers(data_sc)
## [,1] ## [1,]
NA withinss(data_sc) ##
logical(0) plot(income,
col= data_sc)
(Table 21.1).
## Number of folds: 5
## Total number of CV fits: 5
##
## Round # 1 of 1
## CV Fit # 1 of 5
## CV Fit # 2 of 5
## CV Fit # 3 of 5
## CV Fit # 4 of 5
## CV Fit # 5 of 5
# get k-Means CV results
my.kmeans <- function (train.x, train.y, test.x, test.y, negative,
formula){
kmeans.fit <- kmeans(scale(test.x), kpp_init(scale(test.x), 2),
iter.max=100, algorithm='Lloyd')
734 21 Prediction and Internal Statistical Cross Validation
21.8 Compare the Results
735
## Number of folds: 5
## Total number of CV fits: 10
##
## Round # 1 of 2
## CV Fit # 1 of 10
## CV Fit # 2 of 10
## CV Fit # 3 of 10
## CV Fit # 4 of 10
## CV Fit # 5 of 10
##
## Round # 2 of 2
## CV Fit # 6 of 10
## CV Fit # 7 of 10
## CV Fit # 8 of 10
## CV Fit # 9 of 10
## CV Fit # 10 of 10
# get spectral clustering CV results
my.sc <- function (train.x, train.y, test.x, test.y, negative,
formula){ sc.fit <- specc(scale(test.x), centers= 2) predict.y
<- [email protected]
#count TP, FP, TN, FN, Accuracy, etc. out <-
confusionMatrix(test.y, predict.y, negative = negative)
# negative is the label of a negative "null" sample (default:
"control"). return (out)
} set.seed(123) cv.out.sc <- crossval::crossval(my.sc,
as.data.frame(X), Y, K = 5, B = 2, negative = neg)
## Number of folds: 5
## Total number of CV fits: 10
##
## Round # 1 of 2
## CV Fit # 1 of 10
## CV Fit # 2 of 10
## CV Fit # 3 of 10
## CV Fit # 4 of 10
## CV Fit # 5 of 10
##
## Round # 2 of 2
## CV Fit # 6 of 10
## CV Fit # 7 of 10
## CV Fit # 8 of 10
## CV Fit # 9 of 10
## CV Fit # 10 of 10
21.9 Assignment: 21. Prediction and Internal Statistical Cross-Validation
736 21 Prediction and Internal Statistical Cross Validation
require(knitr)
## Loading required package: knitr
res_tab=rbind(diagnosticErrors(cv.out.ada$stat),diagnosticErrors( cv
.out.lda$stat),diagnosticErrors(cv.out.qda$stat),diagnosticErrors( c
v.out.knn$stat),diagnosticErrors(cv.out.logit$stat),diagnosticErrors
( cv.out.nn$stat),diagnosticErrors(cv.out.svml$stat),diagnosticError
s( cv.out.svmg$stat),diagnosticErrors(cv.out.kmeans$stat),diagnostic
Errors( cv.out.sc$stat))
rownames(res_tab) <- c("AdaBoost", "LDA", "QDA", "knn", "logit",
"Neural Network", "linear SVM", "Gaussian SVM", "k-Means",
"Spectral Clustering") kable(res_tab,caption = "Compare Result")
Leaving knn, kmeans and specc aside, the other methods achieve pretty good
results. In the PD case study, the reason for suboptimal results in some clustering
methods may be rooted in lack of training (e.g., specc and kmeans) or the curse of
(high) dimensionality, which we saw in Chap. 7. As the data are rather sparse,
predicting from the nearest neighbors may not be too reliable.
Cross-Validation
References
Elder, J, Nisbet, R, Miner, G (eds.) (2009) Handbook of Statistical Analysis and Data Mining
Applications, Academic Press, ISBN 0080912036, 9780080912035.
Hastie, T, Tibshirani, R, Friedman, J. (2013) The Elements of Statistical Learning: Data Mining,
Inference, and Prediction, Springer Series in Statistics, New York, ISBN 1489905189,
9781489905185.
Hothorn, T, Everitt, BS. (2014) A Handbook of Statistical Analyses using R, CRC Press, ISBN
1482204592, 9781482204599.
https://en.wikipedia.org/wiki/Coefficient_of_determination
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0157077
Chapter 22
Function Optimization
We will start with function optimization without restrictions for the domain of the
cost function, Ω 3 {xi}. The extreme value theorem suggests that a solution to the
free optimization processes, minx1,x2,x3,...,xn f xð 1;x2;x3;...;xnÞ or maxx1,x2,x3,...,xn f
xð 1;x2;x3;...;xnÞ, may be obtained by a gradient vector descent method. This
means that we can minimize/maximize theobjectivefunctionbyfinding
¼
solutionsto∇f ndxdf1; df
dx 2;...; df
dx 1g ¼
f0;0;...;0g.Solutionstothisequation,x1,..., xn, will present candidate (local)
minima and maxima.
In general, identifying critical points using the gradient or tangent plane, where
the partial derivatives are trivial, may not be sufficient to determine the extrema
(minima or maxima) of multivariate objective functions. Some critical points may
represent inflection points, or local extrema that are far from the global optimum
of the objective function. The eigenvalues of the Hessian matrix, which includes
739
the second order partial derivatives, at the critical points provide clues to pinpoint
extrema. For instance, invertible Hessian matrices that (i) are positive definite
(i.e.,
all eigenvalues are positive), yield a local minimum at the critical point, (ii) are
negative definite (all eigenvalues are negative) at the critical point suggests that
the objective function has a local maximum, and (iii) have both positive and
negative eigenvalues yield a saddle point for the objective function at the critical
point where the gradient is trivial.
There are two complementary strategies to avoid being trapped in local
extrema. First, we can run many iterations with different initial vectors. At each
iteration, the objective function may achieve a (local) maximum/minimum/saddle
point. Finally, we select the overall minimal (or maximal) value from all iterations.
Another adaptive strategy involves either adjusting the step sizes or accepting
solutions in probability, e.g., simulated annealing is one example of an adaptive
optimization.
Fig. 22.1 Plots of the density and cumulative distribution functions of the simulated data
CDF xð Þ p ¼ 0:
The uniroot and stats::nlm R functions do non-linear minimization of a function
f using a Newton-Raphson algorithm.
set.seed(1234) x <-
rnorm(1000, 100, 20)
pdf_x <- density(x)
Let’s look at the function f(x1,x2) ¼ (x1 3)2 + (x2 + 4)2. We define the function in
R and utilize the optim() function to obtain the extrema points in the support of the
objective function and/or the extrema values at these critical points.
require("stats") f <- function(x) { (x[1]
- 3)^2 + (x[2] +4)^2 } initial_x <- c(0,
-1)
x_optimal <- optim(initial_x, f, method="CG") # performs
minimization x_min <- x_optimal$par
# x_min contains the domain values where the (local) minimum is
attained x_min # critical point/vector
## [1] 3 -4 x_optimal$value # extrema value of the
objective function
## [1] 8.450445e-15
742 22 Function Optimization
22.1 Free (Unconstrained) Optimization optim allows the use of six
• Nelder-Mead: robust but relatively slow, works reasonably well for non-
differentiable functions.
• BFGS: quasi-Newton method (also known as a variable metric algorithm), uses
function values and gradients to build up a picture of the surface to be
optimized.
• CG: conjugate gradients method, fragile, but successful in larger optimization
problems because it’s unnecessary to save large matrix.
• L-BFGS-B: allows box constraints.
• SANN: a variant of simulated annealing, belonging to the class of stochastic
global optimization methods.
• Brent: for one-dimensional problems only, useful in cases where optim() is
used inside other functions where only method can be specified.
Consider the function f(x) ¼ 10 sin (0.3x) sin (1.3x2) 0.00002x4 + 0.3x + 35.
Maximizing f() is equivalent to minimizing f(). Let’s plot this oscillatory function,
then find and report its critical points and extremum values. The function optim
returns two important results:
• par: the best set of domain parameters found to optimize the function •
value: the extreme values of the function corresponding to par (Fig. 22.2).
743
Fig. 22.2 Example of minimizing and oscillatory function, f(x) ¼ 10 sin (0.3x) sin (1.3x2)
0.00002x4 + 0.3x + 35, using optim
funct_osc <- function (x) { -(10*sin(0.3*x)*sin(1.3*x^2) -
0.00002*x^4 +
0.3*x+35) }
plot(funct_osc, -50, 50, n = 1000, main = "optim() minimizing an
oscillatory function")
abline(v=17, lty=3, lwd=4, col="red")
8g1ðx1;x2;...;xnÞ ¼ 0
>
< ... :
>: gkðx1;x2;...;xnÞ ¼ 0
744 22 Function Optimization
Note that the right hand sides of these equations may always be assumed to be
trivial (0), otherwise we can just move the non-trivial parts within the constraint
functions gi. Linear Programming, Quadratic Programming, and Lagrange
multipliers may be used to solve such equality-constrained optimization problems.
We can merge the equality constraints within the objective function ( f ! f∗).
Lagrange multipliers represent a typical solution strategy that turns the
constrained optimization problem (minxf(x) subject to gi(x1,x2,...,xn), 1 i k), into
an unconstrained optimization problem:
k
f∗ðx1;x2;...;xn;λ1;λ2;...;λkÞ ¼ f xð 1;x2;...;xnÞ þ X
λigiðx1;x2;...;xnÞ:
i¼1
22.2 Constrained Optimization 745
g
f∗ðx1;x2;...;xn;λ1;λ2;...;λkÞ ¼ f xð 1;x2;...;xnÞ þ λ1 1ðx1;x2;...;xnÞ þ
þ λkgkðx1;x2;...;xnÞ:
There are no general solutions for arbitrary inequality constraints; however, partial
solutions do exist when some restrictions on the form of constraints are present.
When both the constraints and the objective function are linearfunctions of the
domain variables, then the problem can be solved by Linear Programming.
LP works when the objective function is a linear function. The constraint functions
are also linear combination of the same variables.
Consider the following elementary (minimization) example:
subject to:
solve(lps.model)
## [1] 0
# Retrieve the values of the variables from a solved linear
program model get.variables(lps.model) # check against the exact
solution x_1 = 0, x_2 = 8, x_3 = 0 ## [1] 0 8 0
get.objective(lps.model) # get optimal (min) value
## [1] -32
In lower dimensional problems, we can also plot the constraints to graphically
demonstrate the corresponding support restriction. For instance, here is an
example of a simpler 2D constraint and its Venn diagrammatic representation (Fig.
22.3).
< x1
6
: 8 : x2
x1
library(ggplot2)
ggplot(data.frame(x = c(-100, 0)), aes(x = x)) +
22.2 Constrained Optimization 747
stat_function(fun=function(x) {(150-2*x)/6},
aes(color="Function 1")) + stat_function(fun=function(x) {
-x }, aes(color = "Function 2")) + theme_bw() +
scale_color_discrete(name = "Function") + geom_polygon(
Fig. 22.3 A 2D graphical depiction of the function optimization support restriction constraints
8 x1 þ 2x2 þ 3x3 16
>
< 3x1 x2 6x3 0:
>: x1 x2 2
## [1] 5
##
## $scaling
## [1] "geometric" "equilibrate" "integers"
##
## $sense
## [1] "maximize"
##
## $simplextype
## [1] "dual" "primal"
##
## $timeout
## [1] 0
##
## $verbose ## [1] "neutral" solve(lps.model2) # 0
suggests that this solution convergences
point of maximum
## [1] 20 18 0 get.objective(lps.model2) #
## [1] 132
In 3D, we can utilize the rgl::surface3d() method to display the constraints.
This output is suppressed, as it can only be interpreted via the pop-out 3D
rendering window.
library("rgl") n <- 100 x <- y
<- seq(-500, 500, length = n)
region <- expand.grid(x = x, y =
y)
set.type(lps.model, 2, "binary")
set.type(lps.model, 3, "integer")
get.type(lps.model) # This is Mixed Integer Linear Programming
(MILP)
## [1] "real" "integer" "integer"
columns=c(1))
"x3")) print(lps.model)
## Model name:
## x1 x2 x3
## Minimize -3 -4 -3
## R1 6 2 4 <=
150 ## R2 1 1 6
>= 0
## R3 4 5 4 =
40 ## Kind Std Std Std
## Type Real Int Int
## Upper 5 1 Inf
## Lower -5 0 0
solve(lps.model)
## [1] 0
get.objective(lps.model)
## [1] -30.25
get.variables(lps.model
) ## [1] 4.75 1.00 4.00
get.constraints(lps.mod
el)
## [1] 46.50 29.75 40.00
set.type(lps.model, 1, "binary")
set.type(lps.model, 2, "binary")
set.type(lps.model, 3, "binary")
print(lps.model)
22.2 Constrained Optimization 751
## Model name:
## C1 C2 C3
## Minimize 2 1 2
## R1 1 2 4 <=
5 ## R2 1 1 6
>= 2 ## R3 1 1
1 = 2 ## Kind Std
Std Std
## Type Int Int
Int ## Upper
1 1 1 ##
Lower 0 0 0
solve(lps.model)
## [1] 0
get.variables(lps.model)
## [1] 1 1 0
22.2.4 Quadratic Programming (QP)
QP can be used for second order (quadratic) objective functions, but the constraint
functions are still linear combinations of the domain variables.
A matrix formulation of the problem can be expressed as minimizing an
objective function:
f Xð Þ ¼ XTDX dTX,
ATX ½¼ j b,
where the first k constrains may represent equalities (¼) and the remaining ones
are inequalities (), and b is the constraints right hand size (RHS) constant vector.
Here is an example of a QP objective function and its R optimization:
4x1 þ 3x2 ¼ 8
2x1 þ x2 ¼ 2 :
2x2 þ x3 0
library(quadprog)
752 22 Function Optimization
## [1] 49
The minimum value, 49, of the QP solution is attained at x1 ¼ 1, x2 ¼ 4, x3 ¼ 8.
When D is a positive definitive matrix, i.e., XTDX > 0, for all non-zero X, the
QP problem may be solved in polynomial time. Otherwise, the QP problem is NP-
hard. In general, even if D has only one negative eigenvalue, the QP problem is
still NP-hard.
The QP function solve.QP() expects a positive definitive matrix D.
The package Rsolnp provides a special function solnp(), which solves the general
non-linear programming problem:
minf xð
Þx
subject to: g xð Þ ¼ 0
lh h xð Þ uh lx x
ux,
Duality in math really just means having two complementary ways to think about
an optimization problem. The primal problem represents an optimization
challenge in terms of the original decision variable x. The dual problem, also
called Lagrange dual, searches for a lower bound of a minimization problem or an
upper bound for a maximization problem. In general, the primal problem may be
difficult to analyze, or solve directly, because it may include non-differentiable
penalty terms, e.g., l1 norms, recall LASSO/Ridge regularization in Chap. 18.
Hence, we turn to the corresponding Lagrange dual problem where the solutions
may be more amenable, especially for convex functions, that satisfy the following
inequality:
Motivation
Suppose we want to borrow money, x, from a bank, or lender, and f(x) represents
the borrowing cost to us. There are natural “design constraints” on money lending.
For instance, there may be a cap in the interest rate, h(x) b, or we can have many
other constraints on the loan duration. There may be multiple lenders, including
selffunding, that may “charge” us f(x) for lending us x. Lenders goals are to
maximize profits. Yet, they can’t charge you more than the prime interest rate,
plus some premium based on your credit worthiness. Thus, for a given fixed λ, a
lender may make us an offer to lend us x aiming to minimize
f xð Þ þ λ h xð Þ:
If this cost is not optimized, i.e., minimized, you may be able to get another
loan y at lower cost f(y) < f(x), and the funding agency loses your business. If the
cost/ objective function is minimized, the lender may maximize their profit by
varying λ and still get us to sign on the loan.
The customer’s strategy represents a game theoretic interpretation of the
primal problem, whereas the dual problem corresponds to the strategy of the
lender.
In solving complex optimization problems, duality is equivalent to existence of
a saddle point of the Lagrangian. For convex problems, the double-dual is
equivalent to the primal problem. In other words, applying the convex conjugate
(Fenchel transform) twice returns the convexification of the original objective
function, which in most situations is the same as the original function.
754 22 Function Optimization
The dual of a vector space is defined as the space of all continuous linear
functionals on that space. Let X ¼ Rn, Y ¼ Rm, f : X ! R, and h : X ! Y. Consider the
following optimization problem:
minf xð
Þx
subject to
x2X h xð
Þ 0:
Then, this primal problem has a corresponding dual problem:
subject to λi 0,80 i
m:
The parameter λ 2 Rm is an element of the dual space of Y, i.e., Y∗, since the
inner product hλ,h(x)i is a continuous linear functional on Y. Here Y is finite
## [1] -10 6
Example 2: Quadratic Example
lh <- c(0)
uh <- c(4)
##
## Iter: 1 fn: 7.8697 Pars: 0.68437 0.31563
## Iter: 2 fn: 5.6456 Pars: 0.39701 0.03895
## Iter: 3 fn: 5.1604 Pars: 0.200217 0.002001
## Iter: 4 fn: 5.0401 Pars: 0.10011821 0.00005323
## Iter: 5 fn: 5.0100 Pars: 0.0500592618 0.0000006781
## Iter: 6 fn: 5.0025 Pars: 0.02502983706 -0.00000004425
## Iter: 7 fn: 5.0006 Pars: 0.01251500215 -0.00000005034
## Iter: 8 fn: 5.0002 Pars: 0.00625757145
-0.00000005045 ## Iter: 9 fn: 5.0000 Pars:
0.00312915970 -0.00000004968 ## Iter: 10 fn: 5.0000
Pars: 0.00156561388 -0.00000004983 ## Iter: 11 fn:
5.0000 Pars: 0.0007831473 -0.0000000508
## Iter: 12 fn: 5.0000 Pars: 0.00039896484 -0.00000005045
## Iter: 13 fn: 5.0000 Pars: 0.00021282342 -0.00000004897
## Iter: 14 fn: 5.0000 Pars: 0.00014285437 -0.00000004926
## Iter: 15 fn: 5.0000 Pars: 0.00011892066 -0.00000004976
## solnp--> Completed in 15 iterations
sol2$values
## [1] 19.000000 7.869675 5.645626 5.160388 5.040095 5.010024
5.002506 ## [8] 5.000627 5.000157 5.000039 5.000010
5.000002 5.000001
756 22 Function Optimization
5.000000
## [15] 5.000000 5.000000
sol2$pars
lx <- rep(1, 3)
ux <- rep(10,
3)
##[29] 5.000000
sol3$pars
## [1] 2.886751 2.886751 5.773505
22.4 Manual Versus Automated Lagrange Multiplier Optimization
757
The non-linear optimization is sensitive to the initial parameters (pars),
especially when the objective function is not smooth or if there are many local
minima. The function gosolnp() may be employed to generate initial (guesstimates
of the) parameters.
return(c(z1, z2))
}
constraints4 <- c(2, 1)
x0 <- c(1, 1, 1) ctrl <- list(trace=0) sol4 <- solnp(x0, fun = fn4,
eqfun = eqn4, eqB = constraints4, control=ctrl) sol4$values
## [1] 2.000000 -5.078795 -11.416448 -5.764047 -3.584894
-3.224531
## [7] -3.211165 -3.211103 -3.211103
sol4$pars
## [1] 0.55470019 -0.83205030 -0.05854932
The materials in the linear algebra and matrix computing, Chap. 5, and the
regularized parameter estimation, Chap. 18, provide additional examples of least
squares parameter estimation, regression, and regularization.
Optimization
Let’s manually implement the Lagrange Multipliers procedure and then compare
the results to some optimization examples obtained by automatic R function calls.
The latter strategies may be more reliable, efficient, flexible, and rigorously
validated.
758 22 Function Optimization
The manual implementation provides a more direct and explicit representation of
the actual optimization strategy.
We will test a simple example of an objective function:
f xð ;y;zÞ ¼ 4y 2z þ x2 þ y2,
subject to two constraints:
2x y z ¼ 2 x2 þ
y2 þ z ¼ 1:
0, 0)
f xð ;y;zÞ ¼ 4y 2z þ x2 þ y2
subject to:
760 22 Function Optimization
2x y z ¼ 2 x2 þ
y2 ¼ 1:
library(Rsolnp)
fn4 <- function(x) # f(x, y, z) = 4y-2z + x^2+y^2
{
4*x[2] - 2*x[3] + x[1]^2+ x[2]^2
}
x0 <- c(1, 1, 1)
ctrl <-
list(trace=0)
sol4 <- solnp(x0, fun = fn4, eqfun = eqn4, eqB = constraints4,
control=ctrl) sol4$values
## [1] 4.0000000 -0.1146266 -5.9308852 -3.7035124 -2.5810141
-2.5069444 ## [7] -2.5065779 -2.5065778 -2.5065778
The results of both (manual and automated) experiments identifying the
optimal (x,y,z) coordinates minimizing the objective function f(x,y,z) ¼ 4y 2z + x2
+ y2 are in agreement.
Suppose we are given xnoisy with n noise-corrupted data points. The noise may be
additive (xnoisy x + E) or not additive. We may be interested in denoising the signal
and recovering a version of the original (unobserved) dataset x, potentially as a
smoothed representation of the original (uncorrupted) process. Smoother signals
suggest less (random) fluctuations between neighboring data points.
One objective function we can design to denoise the observed signal, xnoisy, may
include a fidelity term and a regularization term; see the regularized linear
modeling in Chap. 18.
22.5 Data Denoising
761
Totalvariation denoising assumes that for each time point t, the observed noisy
data
xnoisyð Þt x tð Þ þ Eð Þt : observed
noise |{z}
To recover the native signal, x(t), we can optimize (argmin xf(x)) the following
objective cost function:
n1 n1
1 X 2 λ X j x tð Þ x tð 1Þ j , f xð Þ ¼k y tð Þ
x noisy Þkt þ
ð
2 t¼1 t¼2 |
fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfidelity
term
ffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl ffl} |
fflfflfflfflfflfflfflfflfflfflfflfflfflfflregularization
term
ffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl}
Fig. 22.4 Denoising by smoothing, raw noisy data and two smoothed models (loess)
Fig. 22.5 Manual denoising signal recovery using non-linear constaint optimization (solnp)
# initialization of parameters
betas_0 <- c(0.3, 0.3, 0.5, 1)
betas <- betas_0
# Denoised model
x_denoised <- function(x, betas) { if (length(betas) != 4) {
print(paste0("Error!!! length(betas)=", length(betas), " != 4!!!
Exiting
..."))
break();
}
# print(paste0(" .... betas = ", betas, "...\n"))
# original noise function definition: sin(x)^2/(1.5+cos(x))
return((betas[1]*sin(betas[2]*x)^2)/(betas[3]+cos(x)))
}
library(Rsolnp)
# Objective Function
objective_func <-
function(betas) {
# f(x) = 1/2 * \sum_{t=1}^{n-1} {|y(t) - x_{noisy}(t)\|^2}} +
\lambda *
\sum_{t=2}^{n-1} | x(t) - x(t-1)| fid <-
764 22 Function Optimization
fidelity(x_noisy(xs), x_denoised(xs, betas))
reg <- abs(betas[4])*regularizer(betas) error
<- fid + reg
# uncomment to track the iterative optimization state
# print(paste0(".... Fidelity =", fid, " ... Regularizer = ", reg,
" ...
TotalError=", error))
#
print(paste0("....betas=(",betas[1],",",betas[2],",",betas[3],",",
betas
[4],")"))
return(error)
}
# unconstraint optimization
# ctrl <- list(trace=1, tol=1e-5) ## could specify: outer.iter=5,
inner.iter=9)
# sol_lambda <- solnp(betas_0, fun = denoising_func, control=ctrl)
# suppress the report of the the functional values (too many)
# sol_lambda$values
Fig. 22.6 Plot of the observed noisy data and four alternative denoised reconstructions
# install.packages("tvd")
library("tvd")
2. maxx2sinx 10
x2
:
766 22 Function Optimization
3. maxx, y(2xy + 2x x2 2y2).
subject to:
8 4x1
þ
3x2
>
þ 2x3 þ x4 10 >>< x1 x3 þ 2x4
¼2
> x1 þ x2 þ x3 þ x4 1:
subject to:
< x1,x2 0 : x1 x2 1
>>: x1,x2 2integers
subject to:
x1 þ x2 ¼ 1
x1,x2 0:
subject to x1, x2 0.
References
Based on the signal denoising example presented in this chapter, try to change the
noise level, replicate the denoising process, and report your findings.
References
Cortez, P. (2014) Modern Optimization with R, Springer, ISBN 3319082639, 9783319082639.
CRAN Optimization & Math Programming Site provides details about a broad range of R
optimization functions.
Vincent Zoonekynd’s Optimization Blog http://zoonek.free.fr/blosxom/R/2012-06-01_Optimiza
tion.html.
Chapter 23
Deep Learning, Neural Networks
23.1.1 Perceptrons
W ¼ AX þ BY:
At each layer l, the weight matrix, W(l), has the following properties:
• The number of rows of W(l) equals the number of nodes/units in the previous
(l 1)st layer, and
• The number of columns of W(l) equals the number of units in the next (l + 1)st
layer.
Neuronal cells fire depending on the presynaptic inputs to the cell, which
causes constant fluctuations of the neuronal membrane - depolarizing or
hyperpolarizing, i.e., the cell membrane potential rises or falls. Similarly,
perceptrons rely on thresholding of the weight-averaged input signal, which for
biological cells corresponds to voltage increases passing a critical threshold.
Perceptrons output non-zero values only when the weighted sum exceeds a
certain threshold C. In terms of its input vector, (X, Y), we can describe the
output of each perceptron (P) by:
( 1, if AX þ BY > C
Output Pð Þ ¼:
0, if AX þ BY C
the weights leading from all the units i in the previous layer to all of the units j in
the current layer. The product matrix X W has dimensions n k.
The hidden size parameter k, the weight matrix Wm k, and the bias vector bn 1
are used to compute the outputs at each layer:
The role of the bias parameter is similar to the intercept term in linear
regression and helps improve the accuracy of prediction by shifting the decision
boundary along Y axis. The outputs are fully-connected layers that feed into an
activation layer to perform element-wise operations. Examples of activation
functions that transform real numbers to probability-like values include (Fig.
23.1):
• The sigmoid function, a special case of the logistic function, which converts
real numbers to probabilities,
• The rectifier (relu, Rectified Linear Unit) function, which outputs the max(0,
input),
• The tanh (hyperbolic tangent function).
768 23 Deep Learning, Neural Networks
The final fully-connected layer may be hidden of size equal to the number of
classes in the dataset and may be followed by a softmax layer mapping the input
into a probability score. For example, if a size n m input is denoted by Xn m, then
the probability scores may be obtained by the softmax transformation function,
which maps real valued vectors to vectors of probabilities:
...
Pj¼1i 1exi,j Pjm¼1i exi,j !: ex ,
ex ,m
m
;
Fig. 23.2 A schematic of a fully-;
connected feedforward neural
network with two hidden layers
The plot above illustrates the key elements in the action potential,
or activation function, and the calculations of the corresponding training
parameters:
where:
ðÞ¼
• f is the activation function, e.g., logistic function f x 1
1þ ex. It converts the
aggregate weights at each node to probability values,
769
• wkl,i is the weight carried from the ith element of the (l 1)th layer to the kth
element of the current lth layer,
• bkl is the (residual) bias present in the kth element in the lth layer. This is
effectively the information not explained by the training model.
These parameters may be estimated using different techniques (e.g., using
least squares, or stochastically using steepest decent methods) based on the
training data.
There are parallels between biology (neuronal cells) and the mathematical
models (perceptrons) for neural network representation. The human brain
contains about 1011 neuronal cells connected by approximately 10 15 synapses
forming the basis of our
23.2 Biological Relevance
functional phenotypes. Figure 23.3 illustrates some of the parallels between brain
biology and the mathematical representation using synthetic neural nets. Every
neuronal cell receives multi-channel (afferent) input from its dendrites, generates
output signals, and disseminates the results via its (efferent) axonal connections
and synaptic connections to dendrites of other neurons.
The perceptron is a mathematical model of a neuronal cell that allows us to
explicitly determine algorithmic and computational protocols transforming input
signals into output actions. For instance, a signal arriving through an axon x0 is
modulated by some prior weight, e.g., synaptic strength, w0 x0. Internally, within
the neuronal cell, this input is aggregated (summed, or weight-averaged) with
inputs from all other axons. Brain plasticity suggests that synaptic strengths
(weight coefficients w) are strengthened by training and prior experience. This
learning process controls the direction and strength of influence of neurons on
other neurons. Either excitatory (w > 0) or inhibitory (w 0) influences are
possible. Dendrites and axons carry signals to and from neurons, where the
aggregate responses are computed and transmitted downstream. Neuronal cells
only fire if action potentials exceed a certain threshold. In this situation, a signal
is transmitted downstream through its axons. The neuron remains silent, if the
summed signal is below the critical threshold.
Timing of events is important in biological networks. In the computational
perceptron model, a first order approximation may ignore the timing of
neuronal firing (spike events) and only focus on the frequency of the firing.
The firing rate of a neuron with an activation function f represents the
frequency of the spikes along the axon. We saw some examples of activations
functions earlier.
770 23 Deep Learning, Neural Networks
Figure 23.3 illustrates the parallels between the brain network-synaptic
organization and an artificial synthetic neural network.
Fig. 23.3 A depiction of the parallels between a biological central nervous system network
organization (human bran) and a synthetic neural network employed in deep machine learning
23.3 Simple Neural Net Examples
Table 23.1 Exact XOR binary operator InputX InputY XOR output(Z)
0 0 0
0 1 1
1 0 1
1 1 0
Blocks
http://playground.tensorflow.or
https://cs.stanford.edu/people
karpathy/convnetjs/demo
classify2d.htm
Fig. 23.7 Live Demo: TensorFlow and ConvnetJS deep neural network webapps
23.4 Classification
Let’s load the mlbench and mlbench packages and demonstrate the basic
invocation of mxnet. The Sonar data mlbench::Sonar includes sonar signals
bouncing off a metal cylinder or a roughly cylindrical rock. Each of 208 patterns
includes a set of 60 numbers (features) in the range 0.0–1.0, and a label M
(metal) or R (rock). Each feature represents the energy within a particular
frequency band, integrated over a certain period of time. The M and R labels
associated with each observation classify the record as rock or mine (metal)
cylinder. The numbers in the labels are in increasing order of aspect angle, but
they do not encode the angle directly.
# Load the required packages: mlbench and mxnet
# install.packages("mlbench"); install.packages("mxnet")
# Note mxnet requires "visNetwork"
# If it doesn't work, you may need the following lines:
# install.packages("drat",
repos="https://cran.rstudio.com") #
drat:::addRepo("dmlc")
# install.packages("mxnet")
require(mlbench)
require(mxnet) ## Init
Rcpp data(Sonar,
package="mlbench")
table(Sonar[,61])
##
## M R
23.4 Classification 775
## 111 97
Sonar[,61] = as.numeric(Sonar[,61])-1 # R = "1", "M" = "0"
set.seed(123) train.ind =
sample(1:nrow(Sonar),0.7*nrow(Sonar))
pred.label = t(preds1)
table(pred.label, test.y)
## test.y
## pred.label 0 1
## 0 28
7 ## 1 6
22
library("caret")
sensitivity(factor(preds1), factor(as.numeric(test.y)),positive =
1)
## [1] 0.7586207
776 23 Deep Learning, Neural Networks
specificity(factor(preds1), factor(as.numeric(test.y)),negative =
0)
## [1] 0.8235294
We can also use crossval::diagnosticErrors() and crossval:: confusionMatrix()
to get more detailed evaluations. Similar to using the sensitivity() and
specificity() methods, we should specify the negative and positive labels.
Note that you have to specify crossval::confusionMatrix() if you also have the
caret package loaded, as caret also has a function called confusionMatrix().
library("crossval")
diagnosticErrors(crossval::confusionMatrix(preds1,test.y, negative
= 0))
get_roc(preds50)
Fig. 23.8 ROC curves of multi-layer perceptron predictions (mx.mlp), using out-of-bag test-data,
corresponding to different number of iterations, see Chap. 14
The plot suggests that the results stabilize after 100 training (epoch) iterations.
Let’s look at some visualizations of the real labels of the test data (test.y) and
their corresponding ML-derived classification labels (preds[2,]) using 200
iterations (Figs. 23.9, 23.10, 23.11, 23.12, and 23.13).
graph.viz(model.mx$symbol)
hist(preds10[2,],main = "rounds=10")
hist(preds50[2,],main = "rounds=50")
778 23 Deep Learning, Neural Networks
hist(preds100[2,],main = "rounds=100")
hist(preds[2,],main = "rounds=200")
softmaxoutput0
fullyconnected0
fullyconnected1
activation0
8
2
Fig. 23.9 MLP model structure (the plot is rotated 90-degrees to save space)
Fig. 23.10 Frequency plot of the predicted probabilities using ten epochs corresponding to ten
full
(training-phase) passes through the data (cf. num.round¼n)
23.4 Classification 779
Fig. 23.11 Frequency plot of the predicted probabilities using 50 epochs, compare to Fig. 23.10
Fig. 23.12 Frequency plot of the predicted probabilities using 100 epochs, compare to Fig. 23.11
Fig. 23.13 And finally, the plot of the predicted probabilities using 200 epochs; compare to
Fig. 23.12
Fig. 23.14 Summary plots illustrating the progression of the neural network learning from 10 ro
200 epochs, corresponding with improved binary classification results (testing data)
library(ggplot2)
get_gghist = function(preds){
ggplot(data.frame(test.y, preds), aes(x=preds,
group=test.y, fill=as.factor(test.y)))+
23.4 Classification 781
geom_histogram(position="dodge",binwidth=0.25)+theme
_bw()
}
df =
data.frame(preds[2,],preds100[2,],preds50[2,],preds10[2,]
) p <- lapply(df,get_gghist)
require(gridExtra) # used for arrange ggplots
grid.arrange(p$preds10.2...,p$preds50.2...,p$preds100.2...,p$preds.
2...)
23.4.2 MXNet Notes
• The mx.mlp() function is a proxy to the more complex and laborious process
of defining a neural network by using MXNet’s Symbol. For instance, this
call model.mx <- mx.mlp(train.x, train.y, hidden_node¼8, out_node¼2,
out_activation¼"softmax", num.round¼20, array.batch.size¼15,
learning.rate¼0.1, momentum¼0.9, eval.metric¼mx.metric.accuracy) would be
equivalent to a symbolic network definition like: data <- mx.symbol.Variable
("data"); fc1 <- mx.symbol.FullyConnected(data, num_hidden¼128) act1 <-
mx.symbol.Activation(fc1, name¼"relu1", act_type¼"relu"); fc2 <- mx.symbol.
FullyConnected(act1, name¼"fc2", num_hidden¼64); act2 <-
mx.symbol.Activation(fc2, name¼"relu2", act_type¼"relu");fc3<-
mx.symbol.FullyConnected(act2, name¼"fc3", num_hidden¼2); lro <-
mx.symbol. SoftmaxOutput(fc3, name¼"sm"); model2 <- mx.model.
FeedForward.create(lro, X¼train.x, y¼train.y, ctx¼mx. cpu(), num.round¼100,
array.batch.size¼15, learning. rate¼0.07,momentum¼0.9) (see example with
linear regression below).
• Layer-by-layer definitions translate inputs into outputs. At each level, the
network allows for a different number of neurons and alternative activation
functions. Other options can be specified by using mx.symbol:
• mx.symbol.Convolution applies convolution to the input and then adds a bias.
It can create convolutional neural networks.
• mx.symbol.Deconvolution does the opposite and can be used in segmentation
networks along with mx.symbol.UpSampling, e.g., to reconstruct the pixel-
wise classification of an image.
• mx.symbol.Pooling reduces the data by selecting signals with the highest
response.
• mx.symbol.Flatten links convolutional and pooling layers to form a fully
connected network.
• mx.symbol.Dropout attempts to cope with the overfitting problem.
The function mx.mlp() is a wrapper for quick design of standard multi-layer
perceptrons. For more extensive experiments, customized symbolic
representation can be explicitly specified using combinations of the above
methods.
782 23 Deep Learning, Neural Networks
To allow smooth, fast, and consistent operation on CPU and GPU, in inmxnet,
the generic R function controlling the reproducibility of stochastic results is
overwritten by mx.set.seed. So can use mx.set.seed() to control random numbers
in MXNet.
To examine the accuracy of the model.mx learner (trained on the training
data), we can make prediction (on testing data) and evaluate the results using the
provided testing labels (report the confusion matrix).
23.5 Case-Studies
Let’s first demonstrate a deep learning regression using the ALS data to predict
ALSFRS_slope, Figs. 23.15 and 23.16.
als <-
read.csv("https://umich.instructure.com/files/1789624/download?
downlo ad_frd=1")
als <- scale(als[,-c(1,7)]) train.ind =
sample(1:nrow(als),0.7*nrow(als))
train.x = data.matrix(als[train.ind,-
c(1,7)]) train.y = als[train.ind,7]
test.x = data.matrix(scale(als[-train.ind,-c(1,7)]))
test.y = als[-train.ind,7]
mx.set.seed(1234)
# Create a MXNet Feedforward neural net model with the specified
training.
model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y,
ctx=mx.cpu(), num.round=1000, array.batch.size=20,
learning.rate=2e-6, momentum=0.9,
eval.metric=mx.metric.rmse,verbose=F)
Fig. 23.15 The strong linear relation between the out-of-bag testing data continuous outcome
variable (y-axis) and the corresponding predicted regression values (x-axis) suggests a good
network prediction performance
784 23 Deep Learning, Neural Networks
linearressionoutput0
fullyconnected10
data
1
The option verbose ¼ F can
suppress messages, including training
accuracy reports, in each iteration.
You must scale data before inputting it into MXnet, which expects that the
training and testing sets are normalized to the same scale. There are two
strategies to scale the data.
• Either scaling the complete data simultaneously and then splitting them into
train data and test data, or
• Alternatively, scaling only the training dataset to enable model-training, but
saving your protocol for data normalization, as new data (testing, validation)
will need to be (pre)processed the same way as the training data.
Have a look at the Google TensorFlow API. It shows the importance of
learning rate and the number of rounds. You should test different sets of
parameters.
• Too small learning rate may lead to long computations.
• Too large learning rate may cause the algorithm to fail to converge, as large
step size (learning rate) may by-pass the optimal solution and then oscillate or
even diverge.
preds = predict(model, test.x) sqrt(mean((preds-
test.y)^2))
## [1] 0.2171032
range(test.y)
graph.viz(model$symbol)
23.5 Case-Studies 785
We can again use the mx.mlp wrapper to construct the learning network, but we
can also use a more flexible way to construct and configure the multi-layer
network in mxnet. This configuration is done by using the Symbol call, which
specifies the links among network nodes, the activation function, dropout ratio,
and so on:
Below we show the configuration of a perceptron with one hidden layer.
########### Network configuration
# variables act <-
mx.symbol.Variable("data")
# affine transformation fc <-
mx.symbol.FullyConnected(act, num.hidden = 10)
# non-linear activation act <-
mx.symbol.Activation(data = fc, act_type = "relu")
# affine transformation fc <-
mx.symbol.FullyConnected(act, num.hidden = 2)
# softmax output and crossmlp
<- mx.symbol.SoftmaxOutput(fc)
####Preparing data
set.seed(2235)
############ spirals dataset
s <- sample(x = c("train", "test"), size = 1000, prob = c(.8,.2),
replace = TRUE) dta <- mlbench.spirals(n = 1000, cycles = 1.2, sd =
.03) dta <- cbind(dta[["x"]], as.integer(dta[["classes"]]) - 1)
colnames(dta) <- c("x","y","label") ######### train, validate, test
dta.train <- dta[s == "train",] dta.test <- dta[s == "test",]
Let’s display the data and examine its structure (Fig. 23.17).
dt <- as.data.frame(dta);dt[,3] <-
as.factor(dt[,3]) dt.train <- dt[s ==
"train",] dt.test <- dt[s == "test",]
p1 <- ggplot(dt,aes(x = x,y = y,color=label))+geom_point()
+ggtitle("Whole data structure")
p2 <- ggplot(dt.train,aes(x = x,y =
y,color=label))+geom_point()+ggtitle("Train data
structure") p3 <- ggplot(dt.test,aes(x = x,y =
y,color=label))+geom_point()+ggtitle("Test data structure")
grid.arrange(p1,p2,p3,nrow=3)
786 23 Deep Learning, Neural Networks
Fig. 23.17 Original spirals data structure (whole, traning and testing sets)
# Network training
# Feed-forward networks may be trained using iterative gradient
descent algo rithms. A **batch** is a subset of data that is used
during single forward p ass of the algorithm. An **epoch**
represents one step of the iterative proc ess that is repeated until
all training examples are used.
## pred.label 0 1
## 0 90 30
## 1 22 73
23.5 Case-Studies 787
Fig. 23.18 Frequency of feed-forward neural network prediction probabilities (x-axis) for the
spirals data relative to testing set labels (colors)
The prediction result is close to perfect, and we can inspect deeper the results
using crossval::confusionMatrix (Fig. 23.18).
library("crossval")
diagnosticErrors(crossval::confusionMatrix(pred.label,dta.test[,3],n
egative = 0))
mx.set.seed(2235)
model <- mx.model.FeedForward.create(
symbol = mlp,
X = dta.train[, c("x", "y")], y =
dta.train[, c("label")], num.round =
2000, array.layout = "rowmajor",
learning.rate = 1, epoch.end.callback =
mx.callback.train.stop(), eval.metric =
mx.metric.accuracy, verbose = FALSE
)
## [100] training accuracy: 75.56 %
## [200] training accuracy: 76 %
## [300] training accuracy: 76 %
## [400] training accuracy: 76.45 %
## Training finished with final accuracy: 76.45 %
labeled_spiral_data <- as.data.frame(cbind(dta.test[, c("x", "y")], as.factor(pred.label)))
colnames(labeled_spiral_data) <- c("x", "y", "label")
labeled_spiral_data$label <- as.factor(labeled_spiral_data$label)
p4 <- ggplot(labeled_spiral_data, aes(x = x, y = y, color = label)) +
  geom_point() + ggtitle("Structure of Predicted-Labels on Test Data")
p4
23.5.3 IBS Study
Let’s try another example using the IBS Neuroimaging study (Figs. 23.20 and
23.21).
# IBS Neuroimaging (NI) data, UCLA
wiki_url <- read_html("http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_April2011_NI_IBS_Pain")
IBSData <- html_table(html_nodes(wiki_url, "table")[[2]])  # table 2
set.seed(1234)
test.ind = sample(1:354, 50, replace = F)  # select 50/354 cases for testing, train on the remaining (354-50)/354 cases
html_nodes(wiki_url, "#content")
## {xml_nodeset (1)}
## [1] <div id="content">\n\t\t<a name="top" id="top"></a>\n\t\t\t\t<h1 id= ...
# View(data.frame(train.x, train.y))
# View(data.frame(test.x, test.y))
# table(test.y); table(train.y)
# num.round - number of iterations to train the model
mx.set.seed(2235)
model <- mx.model.FeedForward.create(
  symbol = mlp,
  array.batch.size = 20,
  X = train.x, y = train.y,
  num.round = 200,
  array.layout = "rowmajor",
  learning.rate = exp(-1),
  eval.metric = mx.metric.accuracy, verbose = FALSE)
preds = predict(model, test.x)
pred.label = max.col(t(preds)) - 1; table(pred.label, test.y)

##           test.y
## pred.label  0  1
##          0 23 10
##          1 10  7
library("crossval")
diagnosticErrors(crossval::confusionMatrix(pred.label,test.y,negati
ve = 0))
## acc sens spec ppv npv
lor ## 0.6000000 0.4117647 0.6969697 0.4117647 0.6969697
0.4762342
## attr(,"negative")
## [1] 0
ggplot(data.frame(test.y, preds[2,]), aes(x = preds[2,], group = test.y, fill = as.factor(test.y))) +
  geom_histogram(position = "dodge", binwidth = 0.25) + theme_bw()
Fig. 23.20 Frequency of the feed-forward neural network prediction probabilities (x-axis) for the IBS data relative to testing set labels (colors)

Fig. 23.21 Validation results of the binarized feed-forward neural network prediction probabilities (y-axis) for the IBS testing data (x-axis) with label-coding for match(0)/mismatch(1)
Another case study we have seen before is the country quality of life (QoL)
dataset. Let’s explore a new neural network model and use it to predict the
overall country QoL.
wiki_url <- read_html("http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_2008_World_CountriesRankings")
html_nodes(wiki_url, "#content")

## {xml_nodeset (1)}
## [1] <div id="content">\n\t\t<a name="top" id="top"></a>\n\t\t\t\t<h1 id= ...
CountryRankingData<- html_table(html_nodes(wiki_url, "table")[[2]])
# View (CountryRankingData); dim(CountryRankingData): Select an
appropriate
# outcome "OA": Overall country ranking (13)
# Dichotomize outcome, Top-countries OA<20, bottom countries OA>=20
set.seed(1234)
test.ind = sample(1:100, 30, replace = F) # select 15/100 of cases
for testing, train on remaining 85/100 cases
CountryRankingData[,c(8:12,14)] <-
scale(CountryRankingData[,c(8:12,14)])
# scale/normalize all input variables
train.x = data.matrix(CountryRankingData[-test.ind, c(8:12,14)]) #
exclude outcome
train.y = ifelse(CountryRankingData[-test.ind, 13] < 50, 1,
0) test.x = data.matrix(CountryRankingData[test.ind,
c(8:12,14)])
test.y = ifelse(CountryRankingData[test.ind, 13] < 50, 1, 0) #
developed (high OA rank) country
# View(data.frame(train.x, train.y)); View(data.frame(test.x,
test.y))
# View(data.frame(CountryRankingData,
ifelse(CountryRankingData[,13] < 20,
1, 0)))
act <- mx.symbol.Variable("data")
fc <- mx.symbol.FullyConnected(act, num.hidden = 10)
act <- mx.symbol.Activation(data = fc, act_type = "relu")
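The remaining layers and the training call fall on a page not reproduced here. A minimal sketch, assuming the same two-class perceptron layout and mx.model.FeedForward.create() settings as in the earlier spirals example (the choices of num.hidden = 2, the learning rate, and num.round = 15 are assumptions, the last one guided by the "15 rounds" remark below), might look like:

# Hypothetical completion of the network and training step (mirrors the spirals example)
fc  <- mx.symbol.FullyConnected(act, num.hidden = 2)   # two output classes
mlp <- mx.symbol.SoftmaxOutput(fc)                     # softmax output layer

mx.set.seed(2235)
model <- mx.model.FeedForward.create(
  symbol = mlp, X = train.x, y = train.y,
  num.round = 15, array.layout = "rowmajor",
  learning.rate = exp(-1),
  eval.metric = mx.metric.accuracy, verbose = FALSE)

preds <- predict(model, test.x)
pred.label <- max.col(t(preds)) - 1
table(pred.label, test.y)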
## test.y
## pred.label 0 1
## 0 17 1
## 1 1 11
We only need about 15 training rounds to achieve high accuracy; on the 30 test cases, 28 are classified correctly, i.e., about 93% testing accuracy (Figs. 23.22 and 23.23).
ggplot(data.frame(test.y, preds[2,]), aes(x = preds[2,], group = test.y, fill = as.factor(test.y))) +
  geom_histogram(position = "dodge", binwidth = 0.25) + theme_bw()
Fig. 23.22 Frequency of the feed-forward neural network prediction probabilities (x-axis) for
the QoL data relative to testing set labels (colors)
Fig. 23.23 Validation results of the binarized feed-forward neural network prediction probabilities (y-axis) for the QoL testing data with label-coding for match(0)/mismatch(1)
# calculate sensitivity & specificity and more
library("crossval")
diagnosticErrors(crossval::confusionMatrix(pred.label, test.y, negative = 0))
## acc sens spec ppv npv lor
## 0.9333333 0.9166667 0.9444444 0.9166667 0.9444444 5.2311086
## attr(,"negative")
## [1] 0
# convert the predicted probabilities to binary classes (threshold = 0.5)
bin_preds <- ifelse(preds[2,] < 0.5, 0, 1)
# get a factor variable comparing binary test-labels vs. predicted labels
label_match <- as.factor(ifelse(test.y == bin_preds, 0, 1))
p6 <- ggplot(data.frame(test.y, preds[2,]), aes(x = test.y, y = preds[2,], color = label_match)) +
  geom_point() + ggtitle("Match between Test Data Labels and Predicted Labels")
p6
23.5.5 Handwritten Digits Classification
K = Y × 28 + X,
## [1] 60
## [1] 4
## [1] 2
## [1] 60
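A hypothetical illustration of this pixel-index arithmetic (the variable names are assumed), consistent with the printed values above:

X <- 4; Y <- 2            # 0-based column (X) and row (Y) pixel coordinates
K <- Y * 28 + X           # 1D index of the pixel in a flattened 28 x 28 image; here K = 60
c(K, X, Y, Y * 28 + X)    # 60  4  2  60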
The test data (test.csv) has the same organization as the training data, except that it does not contain the first (label) column. It includes 28,000 images, and we can predict image labels that can be stored as (ImageId, Label) pairs and visually compared against the 2D images for validation/inspection.
require(mxnet)
# train.csv
pathToZip <- tempfile()
download.file("http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/data/DigitRecognizer_TrainingData.zip", pathToZip)
train <- read.csv(unzip(pathToZip))
dim(train)

## [1] 42000   785

unlink(pathToZip)
# test.csv
pathToZip <- tempfile()
download.file("http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/data/DigitRecognizer_TestingData.zip", pathToZip)
test <- read.csv(unzip(pathToZip))
dim(test)

## [1] 28000   784

unlink(pathToZip)
library("imager")
# first convert the CSV data (one row per image, 28,000 rows)
array_3D <- array(test, c(28, 28, 28000)) mat_2D <-
matrix(array_3D[,,1], nrow = 28, ncol = 28)
plot(as.cimg(mat_2D))
In these CSV data files, each 28 × 28 image is represented as a single row. The intensities of these grayscale images are stored as 1-byte integers in the range [0, 255], which we linearly transform into [0, 1]. Note that we only scale the X inputs, not the output labels. Also, we don't have manual gold-standard validation labels for the testing data, i.e., test.y is not available for the handwritten digits data. The most frequent class (digit 1) accounts for 11.2% of the training observations.
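The scaling code itself is not reproduced on this page. A minimal sketch, assuming the standard mxnet MNIST layout with one image per column after transposition (the transposition and the variable names train.x, train.y, and test.x are assumptions based on that convention):

# Hypothetical scaling step: drop the label column, rescale pixels to [0, 1], keep labels unscaled
train <- data.matrix(train)
train.x <- t(train[, -1] / 255)        # 784 x 42000, one image per column
train.y <- train[, 1]                  # digit labels (0-9), unscaled
test.x  <- t(data.matrix(test) / 255)  # 784 x 28000; the testing file has no label column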
data <- mx.symbol.Variable("data") represents the input layer. The first hidden layer, set by fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=128), takes the data as an input, a layer name, and the number of hidden neurons, and generates an output layer.
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu") sets the activation function, which takes the output of the first hidden layer "fc1" and generates an output that is fed into the second hidden layer "fc2", which uses fewer hidden neurons (64).
The process repeats with the second activation "act2", which resembles "act1" but uses a different input source and name. As there are only ten digits (0, 1, ..., 9), the last layer "fc3" uses 10 output neurons. At the end, we set the output activation to softmax to obtain probabilistic predictions; a sketch of the full configuration is shown below.
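A minimal sketch assembling the layers exactly as described above (the activation name "relu2" and the output object name softmax are assumptions; softmax matches the symbol passed to the training call below):

# Multi-layer perceptron for the 10-class digit problem, as described above
data <- mx.symbol.Variable("data")
fc1  <- mx.symbol.FullyConnected(data, name = "fc1", num_hidden = 128)
act1 <- mx.symbol.Activation(fc1, name = "relu1", act_type = "relu")
fc2  <- mx.symbol.FullyConnected(act1, name = "fc2", num_hidden = 64)
act2 <- mx.symbol.Activation(fc2, name = "relu2", act_type = "relu")
fc3  <- mx.symbol.FullyConnected(act2, name = "fc3", num_hidden = 10)
softmax <- mx.symbol.SoftmaxOutput(fc3, name = "sm")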
Training
We are almost ready for the training process. Before we start the computation, let's decide which device to use.
Here we assign the CPU to mxnet (see the sketch below). After all these preparations, you can run the following command to train the neural network. Note that in mxnet the correct function to control the random seed is mx.set.seed.
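A minimal sketch of the device assignment, assuming the CPU context (the variable name devices matches the ctx argument in the training call below):

# Assign the CPU as the computational device for mxnet training
devices <- mx.cpu()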
mx.set.seed(1234)
model <- mx.model.FeedForward.create(softmax, X = train.x, y = train.y,
            ctx = devices, num.round = 10, array.batch.size = 100,
            learning.rate = 0.07, momentum = 0.9,
            eval.metric = mx.metric.accuracy,
            initializer = mx.init.uniform(0.07),
            epoch.end.callback = mx.callback.log.train.metric(100))
## Start training with 1 devices
## [1] Train-accuracy=0.863031026252982
## [2] Train-accuracy=0.958285714285716
## [3] Train-accuracy=0.970785714285717
## [4] Train-accuracy=0.977857142857146
## [5] Train-accuracy=0.983238095238099
## [6] Train-accuracy=0.98521428571429
## [7] Train-accuracy=0.987095238095242
## [8] Train-accuracy=0.989309523809528
## [9] Train-accuracy=0.99214285714286
## [10] Train-accuracy=0.991452380952384
After 10 rounds, the training accuracy exceeds 99%. It may not be worthwhile to run 100 rounds, as this would substantially increase the computational cost.
Forecasting
We can save the predicted labels of the testing handwritten digits to CSV:
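The prediction and export code is not reproduced on this page. A minimal sketch, assuming the trained model and the test.x matrix sketched above (the output file name and the use of max.col() are illustrative assumptions; the object name predicted_lables matches its later use):

# Hypothetical prediction and CSV-export step
preds <- predict(model, test.x)                          # class probabilities, one column per test image
pred.label <- max.col(t(preds)) - 1                      # most likely digit (0-9) for each image
predicted_lables <- data.frame(ImageId = 1:length(pred.label), Label = pred.label)
write.csv(predicted_lables, "predicted_lables.csv", row.names = FALSE)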
We can open the predicted_lables.csv file and inspect the ML labels (saved in a two-column ImageId, Label CSV format) assigned to the 28,000 manually drawn digits. As the testing handwritten digits do not have human-provided labels, we can't quantitatively assess the validity of the algorithm on the testing data (Fig. 23.28). However, we can visually inspect random handwritten digit instances (7 in the example below, image indices 4:10) against their predictions and gain intuition about the accuracy of the ML classifier (Table 23.4, Fig. 23.29).
Fig. 23.28 Plot of the agreement between the relative frequencies of the train.y labels (in the range 0-9) and the predicted testing-data labels. These quantities are not directly related (frequencies of digits in training.y vs. predicted.testing.data); we can't explicitly validate the testing-data predictions, as we don't have gold-standard test.y labels! However, points closer to the diagonal of the plot indicate expected good classifications, whereas off-diagonal points may suggest less effective labeling
Table 23.4 Predicted labels for the set of the first 7 handwritten digits

ImageId   Label
1         2
2         0
3         9
4         9
5         3
6         7
7         0
Fig. 23.29 Visual validation of the handwritten digits (left) and their neural network prediction (right) for the set of seven images. The number and indices of these testing data images can be manually specified
table(train.y)
## train.y
## 0 1 2 3 4 5 6 7 8 9
## 4132 4684 4177 4351 4072 3795 4137 4401 4063 4188
table(predicted_lables[,2])
##
## 0 1 2 3 4 5 6 7 8 9
## 2774 3228 2862 2728 2781 2401 2777 2868 2826 2755
# Plot the relative frequencies of the train.y labels (0-9) against the frequencies of the
# predicted testing-data labels. These are not directly related (training.y vs.
# predicted.testing.data). Remember, we don't have gold-standard test.y labels! Generally
# speaking, points close to the diagonal suggest expected good classifications, whereas
# off-diagonal points may suggest less effective labeling.
label.names <- c("0", "1", "2", "3", "4", "5", "6", "7", "8", "9")
plot(ftable(train.y)[c(1:10)], ftable(predicted_lables[,2])[c(1:10)])
text(ftable(train.y)[c(1:10)] + 20, ftable(predicted_lables[,2])[c(1:10)], labels = label.names, cex = 1.2)
# For example, the ML-classification labels assigned to the first 7 images
# (from the 28,000 testing data collection) are:
head(predicted_lables, n = 7L)

##   ImageId Label
## 1       1     2
## 2       2     0
## 3       3     9
## 4       4     9
## 5       5     3
## 6       6     7
## 7       7     0

library(knitr)
kable(head(predicted_lables, n = 7L), format = "markdown")
plot(img1, axes = FALSE)
text(40, label_Ypositons, labels = label.names[1:(m_end - m_start)], cex = 1.2, col = "blue")
mtext(paste((m_end + 1 - m_start), " Random Images \n Indices (m_start=", m_start, " : m_end=", m_end, ")"),
      side = 2, line = -6, col = "black")
mtext("ML Classification Labels", side = 4, line = -5, col = "blue")
table(ftable(train.y)[c(1:10)], ftable(predicted_lables[,2])[c(1:10)])

##
##        2401 2728 2755 2774 2777 2781 2826 2862 2868 3228
##   3795    1    0    0    0    0    0    0    0    0    0
##   4063    0    0    0    0    0    0    1    0    0    0
##   4072    0    0    0    0    0    1    0    0    0    0
##   4132    0    0    0    1    0    0    0    0    0    0
##   4137    0    0    0    0    1    0    0    0    0    0
##   4177    0    0    0    0    0    0    0    1    0    0
##   4188    0    0    1    0    0    0    0    0    0    0
##   4351    0    1    0    0    0    0    0    0    0    0
##   4401    0    0    0    0    0    0    0    0    1    0
##   4684    0    0    0    0    0    0    0    0    0    1
Examining the Network Structure Using LeNet
We can use the LeNet convolutional neural network (CNN) protocol in the mxnet package to learn the network. Let's first construct the network.
# input
data <- mx.symbol.Variable('data')
# first conv
conv1 <- mx.symbol.Convolution(data=data, kernel=c(5,5), num_filter=20)
tanh1 <- mx.symbol.Activation(data=conv1, act_type="tanh")
pool1 <- mx.symbol.Pooling(data=tanh1, pool_type="max", kernel=c(2,2), stride=c(2,2))
# second conv
conv2 <- mx.symbol.Convolution(data=pool1, kernel=c(5,5), num_filter=50)
tanh2 <- mx.symbol.Activation(data=conv2, act_type="tanh")
pool2 <- mx.symbol.Pooling(data=tanh2, pool_type="max", kernel=c(2,2), stride=c(2,2))
# first fullc
flatten <- mx.symbol.Flatten(data=pool2)
fc1 <- mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 <- mx.symbol.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 <- mx.symbol.FullyConnected(data=tanh3, num_hidden=10)
# loss
lenet <- mx.symbol.SoftmaxOutput(data=fc2)
Next, we will reshape the matrices into arrays.
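The reshaping code is not reproduced on this page. A minimal sketch, assuming the one-image-per-column layout of train.x and test.x sketched earlier (so the number of images is ncol(...)):

# Reshape the flat 784 x N pixel matrices into 4D arrays (width, height, channel, #images)
train.array <- train.x
dim(train.array) <- c(28, 28, 1, ncol(train.x))
test.array <- test.x
dim(test.array) <- c(28, 28, 1, ncol(test.x))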
Compare the training speed on different devices – CPU vs. GPU. Start by
defining the devices.
n.gpu <- 1
device.cpu <- mx.cpu()
device.gpu <- lapply(0:(n.gpu-1), function(i) {
  mx.gpu(i)
})
Passing a list of devices is useful for high-end computational platforms (e.g., multi-GPU systems); mxnet can train on multiple GPUs or CPUs.
To train using the CPU, try fewer iterations, as the protocol is computationally intensive.
mx.set.seed(1234)
tic <- proc.time()
model <- mx.model.FeedForward.create(lenet, X = train.array, y = train.y,
            ctx = device.cpu, num.round = 1, array.batch.size = 100,
            learning.rate = 0.05, momentum = 0.9, wd = 0.00001,
            eval.metric = mx.metric.accuracy,
            epoch.end.callback = mx.callback.log.train.metric(100))
## Start training with 1 devices
## [1] Train-accuracy=0.522267303102625

print(proc.time() - tic)

##    user  system elapsed
##  313.22   66.45   50.94
The corresponding training on a GPU is similar, but it requires a separate GPU-enabled compilation of mxnet (/mxnet/src/storage/storage.cc:78) with USE_CUDA=1 to enable GPU usage.
mx.set.seed(1234)
tic <- proc.time()
model <- mx.model.FeedForward.create(lenet, X = train.array, y = train.y,
            ctx = device.gpu, num.round = 5, array.batch.size = 100,
            learning.rate = 0.05, momentum = 0.9, wd = 0.00001,
            eval.metric = mx.metric.accuracy,
            epoch.end.callback = mx.callback.log.train.metric(100))
print(proc.time() - tic)
GPU training is typically much faster than CPU training. Anyone can submit a new classification result to Kaggle and see how their classifier ranks. Make sure you follow the specific result-file submission format.
# install.packages("imager")
require(mxnet)
require(imager)
Download and unzip the pre-trained model to a working folder, then load the model and the mean image (used for preprocessing) into R using mx.nd.load. The download can be done either manually or automatically, as shown below.
pathToZip <- tempfile()
download.file("http://www.socr.umich.edu/people/dinov/2017/Spring/DSPA_HS650/data/Inception.zip", pathToZip)
model_file <- unzip(pathToZip)
# setwd(paste(getwd(), "results", sep='/'))
model = mx.model.load(paste(getwd(), "Inception_BN", sep='/'), iteration = 39)
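The mean image mentioned above can be loaded with mx.nd.load; a minimal sketch based on the mxnet classifyRealImageWithPretrainedModel vignette (the file name mean_224.nd, its location in the working folder, and the element name "mean_img" are assumptions taken from that vignette):

# Hypothetical loading of the mean image used for preprocessing
mean.img <- as.array(mx.nd.load(paste(getwd(), "mean_224.nd", sep = '/'))[["mean_img"]])
dim(mean.img)   # expected: 224 224 3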
To classify a new image, select the image and load it in. Below, we show the
classification of several alternative images (Fig. 23.30).
Fig. 23.30 A U.S. weather pattern map as an example image for neural network image
recognition
library("imager")
# One should be able to load the image directly from the web (but
sometimes there may be problems, in which case, we need to first
download the image and then load it in R:
# im <-
imager::load.image("http://wiki.socr.umich.edu/images/6/69/DataManag
ement Fig1.png")
Call the preprocessing function to generate the normalized image (Fig. 23.31); a sketch of the preprocessing helper is shown below.
# plot(normed)
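The preprocessing helper itself is not reproduced on this page. A minimal sketch, adapted from the mxnet classifyRealImageWithPretrainedModel vignette (the center-crop, the 224 × 224 target size, and the mean-image subtraction are assumptions based on that vignette):

preproc.image <- function(im, mean.image) {
  # crop the central square region of the image
  shape <- dim(im)
  short.edge <- min(shape[1:2])
  xx <- floor((shape[1] - short.edge) / 2)
  yy <- floor((shape[2] - short.edge) / 2)
  cropped <- imager::crop.borders(im, xx, yy)
  # resize to 224 x 224, the input size expected by the Inception-BN model
  resized <- imager::resize(cropped, 224, 224)
  # convert to an array in [0, 255] and subtract the mean image
  arr <- as.array(resized) * 255
  dim(arr) <- c(224, 224, 3)
  normed <- arr - mean.image
  # add the "batch" dimension expected by mxnet: (width, height, channel, num)
  dim(normed) <- c(224, 224, 3, 1)
  normed
}

normed <- preproc.image(im, mean.img)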
The image classification uses a predict function to get the probability over all
(learned) classes.
prob <- predict(model, X=normed)
dim(prob)
## [1] 1000 1
Let’s try the automated image classification of this lakeside panorama (Figs.
23.32 and 23.33).
download.file("https://upload.wikimedia.org/wikipedia/commons/2/23/L
ake_mapo urika_NZ.jpeg", paste(getwd(),"results/image.png",
sep="/"), mode = 'wb') im <-
load.image(paste(getwd(),"results/image.png", sep="/")) plot(im)
Another coastal boundary between water and land is represented in this beach image (Fig. 23.34).
download.file("https://upload.wikimedia.org/wikipedia/commons/9/90/H
olloways _beach_1920x1080.jpg", paste(getwd(),"results/image.png",
sep="/"), mode = ' wb')
im <- load.image(paste(getwd(),"results/image.png", sep="/"))
plot(im)
23.6.5 Volcano
Here is another natural image, representing the Mount St. Helens volcano (Fig. 23.35).
Fig. 23.35 A volcano image for neural network image recognition
download.file("https://upload.wikimedia.org/wikipedia/commons/thumb/
d/dc/MSH 82_st_helens_plume_from_harrys_ridge_05-19-82.jpg/1200px-
MSH82_st_helens_plu me_from_harrys_ridge_05-19-82.jpg",
paste(getwd(),"results/image.png", sep="
/"), mode = 'wb')
im <- load.image(paste(getwd(),"results/image.png", sep="/"))
plot(im)
prob <- predict(model, X=normed)
max.idx <- order(prob[,1], decreasing = TRUE)[1:10]
print(paste0("Top Predicted Image-Label Classes: Name=", synsets[max.idx],
             "; Probability: ", prob[max.idx]))
## [1] "Top Predicted Image-Label Classes: Name=n09472597 volcano;
Probability: 0.993182718753815"
## [2] "Top Predicted Image-Label Classes: Name=n09288635 geyser;
Probability: 0.00681292032822967"
## [3] "Top Predicted Image-Label Classes: Name=n09193705 alp;
Probability: 4.15803697251249e-06"
## [4] "Top Predicted Image-Label Classes: Name=n03344393 fireboat;
Probability: 1.48333114680099e-07"
## [5] "Top Predicted Image-Label Classes: Name=n04310018 steam
locomotive;
Probability: 1.17537313215621e-08"
The predicted top class labels for this image are perfect:
• Volcano.
• Geyser.
• Alp.
Fig. 23.36 A cortical brain surface image for neural network image recognition
download.file("http://wiki.socr.umich.edu/images/e/ea/BrainCortex2.p
ng", pas te(getwd(),"results/image.png", sep="/"), mode = 'wb') im
<- load.image(paste(getwd(),"results/image.png", sep="/")) plot(im)
# normed <- preproc.image(im, mean.img)
prob <- predict(model, X=normed)
max.idx <- order(prob[,1], decreasing = TRUE)[1:10]
print(paste0("Top Predicted Image-Label Classes: Name=", synsets[max.idx],
             "; Probability: ", prob[max.idx]))
## [1] "Top Predicted Image-Label Classes: Name=n01917289 brain
coral; Probability: 0.4974305331707"
The top predicted class labels for this image are:
• Brain coral.
• Mushroom.
• Hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa.
• Jigsaw puzzle.
Imagine if we could train a brain image classifier that labels individuals (volunteers or patients) solely based on their brain scans into different classes reflecting their developmental state, clinical phenotypes, disease traits, or aging profiles. This would require a substantial amount of expert-labeled brain scans, intense model training, and extensive validation. However, any progress in this direction would lead to effective computational clinical decision support systems that can assist physicians with the diagnosis, tracking, and prognostication of brain growth and aging in health and disease.
download.file("http://wiki.socr.umich.edu/images/f/fb/FaceMask
1.png", paste(getwd(),"results/image.png", sep="/"), mode =
'wb') im <- load.image(paste(getwd(),"results/image.png",
sep="/")) plot(im)
Apply the deep learning neural network techniques to classify some images using
the pre-trained model as demonstrated in this chapter:
• Google images.
• SOCR Neuroimaging data.
• Your own images.
References
Carneiro, G, Mateus, D, Loïc, P, Bradley, A, Manuel, J, Tavares, RS, Belagiannis, V, Papa, JP,
Jacinto, C, Loog, M, Lu, Z, Cardoso, JS, Cornebise, J (eds). (2016) Deep Learning and Data
Labeling for Medical Applications: First International Workshop, LABELS 2016, Springer,
ISBN 3319469762, 9783319469768.
Ioffe, S, Szegedy, C. (2015) Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167.
Wiley, JF. (2016) R Deep Learning Essentials, Packt Publishing, ISBN 1785284711, 9781785284717.
Zhou, K, Greenspan, H, Shen, D. (2017) Deep Learning for Medical Image Analysis, Academic Press, ISBN 0128104090, 9780128104095.
MXNet R Tutorial.
Deep Learning with MXNetR.
Deep Neural Networks.
Google's TensorFlow API.
https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/classifyRealImageWithPretrainedModel.Rmd
Summary
Glossary
Table 1 (continued)
Notation Description
lm() linear model
lowess locally weighted scatterplot smoothing
LP or QP linear or quadratic programming
MCI mildly cognitively impaired patients
MIDAS Michigan Institute for Data Science
ML Machine-Learning
MOOC massive open online course
MXNet Deep Learning technique using R package MXNet
NAND Negative-AND logical operator
NC or HC Normal (or Healthy) control subjects
NGS Next Generation Sequence (Analysis)
NLP Natural Language Processing
OCR optical character recognition
PCA Principal Component Analysis
PD Parkinson’s Disease patients
PPMI Parkinson’s Progression Markers Initiative
(R)AWS (Risk for) Alcohol Withdrawal Syndrome
RMSE root-mean-square error
SEM structural equation modeling
SOCR Statistics Online Computational Resource
SQL Structured Query Language (for database queries)
SVD Singular value decomposition
SVM Support Vector Machines
TM Text Mining
TS Time-series
w.r.t. With Respect To, e.g., "Take the derivative of this expression w.r.t. a1 and set the derivative to 0, which yields (S − λI_N)a1 = 0."
XLSX Microsoft Excel Open XML Format Spreadsheet file
XML eXtensible Markup Language
XOR Exclusive OR logical operator