Clinical Intervention Prediction
1 Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA
2 Laboratory for Computational Physiology, MIT, Cambridge, MA
Abstract
Real-time prediction of clinical interventions remains a challenge within intensive care units (ICUs).
This task is complicated by data sources that are sparse, noisy, and heterogeneous, and by outcomes that
are imbalanced. In this work, we integrate data across many ICU sources — vitals, labs, notes, demo-
graphics — and focus on learning rich representations of this data to predict onset and weaning of
multiple invasive interventions. In particular, we compare both long short-term memory networks
(LSTM) and convolutional neural networks (CNN) for prediction of five intervention tasks: in-
vasive ventilation, non-invasive ventilation, vasopressors, colloid boluses, and crystalloid boluses.
Our predictions are done in a forward-facing manner after a six hour gap time to support clinically
actionable planning. We achieve state-of-the-art results on these predictive tasks using deep archi-
tectures. Further, we explore the use of feature occlusion to interpret LSTM models, and compare
this to the interpretability gained from examining inputs that maximally activate CNN outputs. We
show that our models are able to significantly outperform baselines for intervention prediction, and
provide insight into model learning.
1. Introduction
As Intensive Care Units (ICUs) play an increasing role in acute healthcare delivery (Vincent, 2013),
clinicians must anticipate patients’ care needs in a fast-paced, data-overloaded setting. The sec-
ondary analysis of healthcare data is a critical step toward improving modern healthcare, as it af-
fords the study of care in real care settings and patient populations. The widespread availability of
electronic healthcare data (Charles et al., 2013; Jamoom and Yang, 2016) allows new investigations
into evidence-based decision support, where we can learn when patients need a given intervention.
Continuous, forward-facing event prediction is particularly important in the ICU setting where we
want to account for evolving clinical needs.
In this work, we focus on predicting the onset and weaning of interventions. The efficacy of
clinical interventions can vary drastically among patients, and unnecessarily administering an inter-
vention can be harmful and expensive. We target interventions that span a wide severity of needs
in critical care: invasive ventilation, non-invasive (NI) ventilation, vasopressors, colloid boluses,
and crystalloid boluses. Mechanical ventilation is commonly used for breathing assistance, but has
many potential complications (Yang and Tobin, 1991), and small changes in ventilation settings can have
a large impact on patient outcomes (Tobin, 2006). Vasopressors are a common ICU medication, but
there is no robust evidence of improved outcomes from their use (Müllner et al., 2004), and some
evidence they may be harmful (D’Aragon et al., 2015). Fluid boluses are used to improve cardio-
vascular function and organ perfusion. There are two bolus types: crystalloid and colloid. Both are
often considered as less aggressive alternatives to vasopressors, but there are no multi-center trials
studying whether fluid bolus therapy should be given to critically ill patients, only studies trying to
distinguish which type of fluid should be given (Malbrain et al., 2014).
Capturing complex relationships across disparate data types is key for predictive performance
in our tasks. To this end, we take advantage of the success of deep learning models in capturing
rich representations of data with little hand-engineering by domain experts. We use long short-term
memory networks (LSTM) (Hochreiter and Schmidhuber, 1997), which have been shown to effec-
tively model complicated dependencies in timeseries data (Bengio et al., 1994) and have achieved
state-of-the-art results in many different applications: e.g. machine translation (Hermann et al.,
2015), speech recognition (Chorowski et al., 2015) and image captioning (Xu et al., 2015). They are
well-suited to our modeling tasks because patient symptoms may exhibit important temporal depen-
dencies. We compare the LSTM models to a convolutional neural network (CNN) architecture that
has previously been explored for longitudinal laboratory data (Razavian et al., 2016). We train one
model per intervention which predicts all outcomes for that intervention given any patient record.
In doing so, we:
1. Achieve state-of-the-art prediction results in forward-facing, hourly prediction of clinical in-
terventions (onset, weaning, or continuity).
2. Demonstrate how feature occlusion can be used in the LSTM model to show which data
modalities and features are most important in different predictive tasks. This is an important
step in making models more interpretable by physicians.
3. Further aid in model interpretability by highlighting patient trajectories that lead to the most
and least confident predictions across outcomes and features in a CNN model.
Figure 2: Given data from a fixed-length (6 hour) sliding window, models predict the status of
intervention in a prediction window (4 hours) after a gap time (6 hours). Windows slide along the
entire patient record, creating multiple examples from each record.
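To make this windowing concrete, below is a minimal sketch of how labeled examples could be generated from a single patient's hourly record; the function and argument names are illustrative, and deriving onset/wean/stay-on/stay-off labels from the prediction window is left schematic.

```python
import numpy as np

def make_examples(features, intervention, window=6, gap=6, horizon=4):
    """Slide a fixed-length window over one patient's hourly record.

    features:     (T, D) array of hourly feature vectors
    intervention: (T,) binary array, 1 if the intervention is on at that hour
    Returns (input_window, prediction_window) pairs; onset/wean/stay labels
    can then be derived from each 4-hour prediction window.
    """
    examples = []
    T = features.shape[0]
    for start in range(T - window - gap - horizon + 1):
        x = features[start:start + window]                    # 6 h of inputs
        pred = intervention[start + window + gap:
                            start + window + gap + horizon]   # 4 h window after the gap
        examples.append((x, pred))
    return examples
```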
The physiological variables, topic distribution, and static variables for each patient are concate-
nated into a single feature vector per patient, per hour (Esteban et al., 2016). The intervention state
of each patient (a binary value indicating whether or not they are on the intervention of interest at
each timestep) and the time of day for each timestep (an integer from 0 to 23 representing the hour)
are also added to this feature vector. Using the time of day as a feature makes it easier for the model
to capture circadian rhythms that may be present in, e.g., the vitals data.
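As an illustration, one hourly feature vector could be assembled as in the sketch below; the argument names are placeholders and assume the inputs have already been preprocessed as described.

```python
import numpy as np

def hourly_feature_vector(vitals_labs, topic_dist, static, on_intervention, hour_of_day):
    """Concatenate one hour of a patient's data into a single feature vector.

    vitals_labs:     physiological variables for this hour (1-D array)
    topic_dist:      topic distribution from the patient's notes (1-D array)
    static:          encoded demographic / static variables (1-D array)
    on_intervention: 0/1 flag for the intervention of interest at this hour
    hour_of_day:     integer in [0, 23]
    """
    return np.concatenate([
        vitals_labs,
        topic_dist,
        static,
        [float(on_intervention), float(hour_of_day)],
    ])
```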
Table 1: Proportion of each intervention class. Note that colloid and crystalloid boluses are not
administered for specific durations, and thus have only a single class (onset).
For colloid and crystalloid boluses, each example is labeled as either 1) Onset or 2) no Onset, since these interventions are not administered for ongoing durations of time. After
splitting the patient records into fixed-length chunks, there are 1,154,101 examples. Table 1 lists the
proportions of each class for each intervention.
4. Methods
4.1 Long Short-Term Memory Network (LSTM)
Having seen the input sequence $x_1 \ldots x_t$ of a given example, we predict $\hat{y}_t$, a probability distribution
over the outcomes, with target outcome $y_t$:

$$h_1 \ldots h_t = \mathrm{LSTM}(x_1 \ldots x_t) \tag{1}$$
$$\hat{y}_t = \mathrm{softmax}(W_y h_t + b_y) \tag{2}$$

where $x_i \in \mathbb{R}^V$, $W_y \in \mathbb{R}^{N_C \times L_2}$, $h_t \in \mathbb{R}^{L_2}$, and $b_y \in \mathbb{R}^{N_C}$; $V$ is the dimensionality of the input
(number of variables), $N_C$ is the number of classes we predict, and $L_2$ is the second hidden layer
size. Figure 3a shows a model schematic, and more model details are provided in Appendix C.
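As a concrete illustration, Equations (1)–(2) could be implemented roughly as follows. This is a minimal Keras-style sketch; the two layer sizes, optimizer, and other settings are placeholders rather than the exact configuration in Appendix C.

```python
from tensorflow.keras import layers, models

def build_lstm(n_timesteps, n_features, n_classes, l1=128, l2=128):
    """Two-layer LSTM mapping a window of inputs x_1..x_t to P(y_t)."""
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_features)),  # each x_i in R^V
        layers.LSTM(l1, return_sequences=True),
        layers.LSTM(l2),                                # h_t in R^{L2}
        layers.Dense(n_classes, activation="softmax"),  # softmax(W_y h_t + b_y)
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```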
We stop training early based on AUC (area under the receiver operating characteristic curve) performance
on the validation set.
4.4 Evaluation
We evaluate our results based on per-class AUCs as well as aggregated macro AUCs. If there are
$K$ classes, each with a per-class AUC of $AUC_k$, then the macro AUC is defined as the average of
the per-class AUCs, $AUC_{macro} = \frac{1}{K} \sum_k AUC_k$. We use the macro AUC as an aggregate score
because it weights the AUCs of all classes equally, regardless of class size (Manning et al., 2008).
This is important because of the large class imbalance present in the data.
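For concreteness, the macro AUC could be computed as in the sketch below, assuming scikit-learn; the variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def macro_auc(y_true, y_score):
    """Average of per-class AUCs, weighting every class equally.

    y_true:  (N, K) one-hot matrix of true classes
    y_score: (N, K) predicted class probabilities
    """
    per_class = [roc_auc_score(y_true[:, k], y_score[:, k])
                 for k in range(y_true.shape[1])]
    return np.mean(per_class), per_class
```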
We use L2 regularized logistic regression (LR) as a baseline for comparison with the neural
networks (Pedregosa et al., 2011). The same input is used for the LR model as for the numerical
LSTM and CNN (imputed time windows of data) but the timesteps are concatenated into a single
input vector.
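A minimal sketch of this baseline follows, again assuming scikit-learn; the regularization strength and iteration limit are illustrative, not the values used in our experiments.

```python
from sklearn.linear_model import LogisticRegression

def fit_lr_baseline(X_train, y_train, C=1.0):
    """L2-regularized logistic regression over flattened time windows.

    X_train: (n_examples, n_timesteps, n_features) imputed windows;
    the timesteps are concatenated into one long vector per example.
    """
    X_flat = X_train.reshape(X_train.shape[0], -1)
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    return clf.fit(X_flat, y_train)
```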
4.5 Interpretability
4.5.1 LSTM Feature-Level Occlusion
Because of the additional time dependencies of recurrent neural networks, getting feature-level
interpretability from LSTMs is notoriously difficult. To achieve this, we borrow an idea from image
recognition to help understand how the LSTM uses different features of the patients. Zeiler and
Fergus (2013) use occlusion to understand how models process images: they remove a region of the
image (by setting all values in that region to 0) and compare the model’s prediction of this occluded
image with the original prediction. A large shift in the prediction implies that the occluded region
contains important information for the correct prediction. With our LSTM model, we mask features
one by one from the patients (replacing the given feature with random noise drawn from the same
distribution by bootstrapping). We then compare the predictive ability of the model with and without
each feature; when this difference is large, then the model was relying heavily on that feature to
make the prediction.
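A sketch of this occlusion procedure is given below; the `model.predict` interface, the class index being scored, and the bootstrap details are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def occlusion_importance(model, X, y, feature_idx, class_idx=1, seed=0):
    """Replace one feature with bootstrap noise and measure the drop in AUC.

    X: (n_examples, n_timesteps, n_features) inputs; y: binary labels for the
    class being scored. A large positive return value means the model relied
    heavily on this feature.
    """
    rng = np.random.default_rng(seed)
    base_auc = roc_auc_score(y, model.predict(X)[:, class_idx])

    X_occ = X.copy()
    pool = X[:, :, feature_idx].ravel()  # empirical distribution of the feature
    X_occ[:, :, feature_idx] = rng.choice(pool, size=X_occ[:, :, feature_idx].shape)
    occ_auc = roc_auc_score(y, model.predict(X_occ)[:, class_idx])
    return base_auc - occ_auc
```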
Note that examining feature interactions would require a more complex analysis to occlude
all pairs, triples, etc., but would not necessarily demonstrate the direction or exact nature of the
interaction.
4.5.2 CNN Filter/Activation Visualization
We get interpretability from the CNN models in two ways. First, in order to understand how the
CNN is using the patient data to predict certain tasks, we find and compare the top 10 real examples
that our model predicts are most and least likely to have a specific outcome. As our gap time is 6
hours, this means that the model predicts high probability of onset of the given task 6 hours after
the end of the identified trajectories.
Second, we generate “hallucinations” from the model which maximize the predicted probability
for a given task (Erhan et al., 2009). This is done by creating an objective function that maximizes
the activation of a specific output node, and backpropagating gradients back to the input image,
adjusting the image so that it maximally activates the output node.
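The sketch below illustrates this kind of activation maximization, assuming a TensorFlow/Keras model; the initialization, step count, and learning rate are illustrative choices rather than our exact procedure.

```python
import tensorflow as tf

def hallucinate(model, class_idx, n_timesteps, n_features, steps=200, lr=0.1):
    """Gradient ascent on the input so it maximally activates one output node."""
    x = tf.Variable(tf.random.normal([1, n_timesteps, n_features], stddev=0.1))
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # Maximize the chosen output by minimizing its negative.
            loss = -model(x, training=False)[0, class_idx]
        grads = tape.gradient(loss, [x])
        opt.apply_gradients(zip(grads, [x]))
    return x.numpy()[0]  # the "hallucinated" trajectory
```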
5. Results
We found deep architectures achieved state-of-the-art prediction results for our intervention tasks,
compared to both our baseline as well as other work predicting intervention onset and weaning
(Ghassemi et al., 2017; Wu et al., 2016). The AUCs for each of our five intervention types and four
prediction tasks are shown for all models in Table 2. All models use 6 hour chunks of “raw” data
which have either been transformed to a 0-1 range (normalized and mean imputed), or discretized
into physiological words (described in section 3.3).
5.1 Physiological Words Improve Predictive Task Performance With High Class Imbalance
We observed a significantly increased AUC for some interventions when using physiological words
— specifically for ventilation onset (from 0.61 to 0.75) and colloid bolus onset (from 0.52 to 0.72),
which have the lowest proportion of onset examples (Table 1). This may be because physiological
words have a smoothing effect. Since we round the z-score for each value to the nearest integer, if
a patient has a heart rate of 87 at one hour and then 89 at the next, those values will probably be
represented as the same word. This effect may make the model invariant to small fluctuations in
the patient’s data and more resilient to overfitting small classes. In addition, the physiological word
representation has an explicit encoding for missing data. This is in contrast to the raw data that has
been forward-filled and mean-imputed, introducing noise and making it difficult for the model to
know how confident to be in the measurements it is given (Che et al., 2016).
Table 2: Comparison of model performance on five targeted interventions. Models that perform
best for a given (intervention, task) pair are bolded.
For vasopressors, occluding individual features decreases the onset AUC by up to 0.16, but causes no
significant decrease in AUC for weaning (< 0.02). This is consistent with previous work that demonstrated
weaning to be a more difficult task in general for vasopressors (Wu et al., 2016). We also note that
weaning prediction places importance on time of day. As noted by Wu et al. (2016), this could be a
side-effect of patients being left on interventions longer than necessary.
For non-invasive ventilation onset and weaning the learned topics are more important than phys-
iological variables. This may mean that the need for less severe interventions can only be detected
from clinical insights derived in notes. Similar to vasopressors, we note that onset AUCs vary more
than weaning AUCs (0.14 vs 0.01), and that time of day is important for weaning.
For crystalloid and colloid bolus onsets, topics are all but one of the five most important features
for detection. Colloid boluses in general have more AUC variance for the topic features (0.14 vs.
0.05), which is likely due to the larger class imbalance compared to crystalloids.
Figure 4: We are able to make interpretable predictions using an LSTM and occluding specific
features. These figures display the eight features that cause the largest decrease in prediction AUC
for each intervention task. In general, physiological data were more important for the more invasive
interventions – mechanical ventilation (4a, 4b) and vasopressors (4c, 4d) – while clinical note topics
were more important for less invasive tasks – non-invasive ventilation (4e, 4f) and fluid boluses (4g,
4h). Note that all weaning tasks except for ventilation have significantly less AUC variance.
For ventilation onset, the maximally activating trajectories correspond to patients who are hyperventilating. For vasopressor onsets, we see a decreased systolic blood
pressure, heart rate and oxygen saturation rate. These could either indicate altered peripheral perfu-
sion or stress hyperglycemia. Topic 3, which was important for vasopressor onset using occlusion
(Figure 4), is also increased.
Non-invasive ventilation onset is associated with decreased creatinine, phosphate, oxygen satu-
ration and blood urea nitrogen, potentially indicating neuromuscular respiratory failure. For colloid
and crystalloid boluses, we note general indicators of physiological decline, as boluses are given for
a wide range of conditions.
Hallucinations for vasopressor and ventilation onset are shown in Figure 6. While the model
was not trained with any physiological priors, it identifies blood pressure drops as being maximally
activating for vasopressor onset, and respiratory rate decline for ventilation onset. This suggests that
it is able to learn physiologically-relevant factors that are important for intervention prediction. The
hallucinations give us more insight into underlying properties of the network and what it is looking
for. However, since these trajectories are made to maximize the output of the model, they do not
necessarily correspond to physiologically plausible trajectories.
6. Conclusion
In this work, we targeted forward-facing prediction of ICU interventions covering multiple phys-
iological organ systems. To our knowledge, our model is the first to use deep neural networks to
Figure 5: Trajectories of the 10 maximally and minimally activating examples for onset of each of
the interventions. These are the six hour trajectories that occur before another six hour gap time
preceding the onset.
Figure 6: Trajectories generated by adjusting inputs to maximally activate a specific output node of
the CNN.
predict both onset and weaning of interventions using all available modalities of ICU data. In these
tasks, deep learning methods beat state-of-the-art AUCs reported in prior work for intervention pre-
diction tasks. This is sensible given that prior works have focused on single targets with smaller
datasets (Wu et al., 2016) or unsupervised representations prior to supervised training (Ghassemi
et al., 2017). We also note that LSTM models over physiological word inputs significantly improved
performance on the two intervention tasks with the lowest incidence rate — possibly because this
representation encodes important information about what is “normal” for each physiological value,
or is more robust to missingness in the physiological data.
Importantly, we were able to demonstrate interpretability for both models. In the LSTMs, we
examined feature importance using occlusion, and found that physiological data were important in
more invasive tasks, while clinical note topics were more important for less invasive interventions.
This could indicate that there is more clinical discretion at play for less invasive tasks. We also
found that all weaning tasks save ventilation had less AUC variance, which could indicate that these
decisions are also made with a large amount of clinical judgment.
The temporal convolutions applied by our CNN filters over the multi-channel input learned interesting
and clinically relevant trends in real patient trajectories, and these trends were further mimicked in the
hallucinations generated by the network. As in prior work (Razavian et al., 2016), we found that
RNNs often have similar or improved performance as compared to CNNs.
Acknowledgements
This research was funded in part by the Intel Science and Technology Center for Big Data, the
National Library of Medicine Biomedical Informatics Research Training grant 2T15 LM007092-22,
NIH National Institute of Biomedical Imaging and Bioengineering (NIBIB) grant R01-EB017205,
and NIH National Human Genome Research Institute (NHGRI) grant U54-HG007963.
References
Estevão Bassi, Marcelo Park, and Luciano Cesar Pontes Azevedo. Therapeutic strategies for high-
dose vasopressor-dependent shock. Critical care research and practice, 2013, 2013.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient
descent is difficult. IEEE transactions on neural networks, 5(2):157–166, 1994.
D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
Dustin Charles, Meghan Gabriel, and Michael F Furukawa. Adoption of electronic health record
systems among us non-federal acute care hospitals: 2008-2012. ONC data brief, 9:1–9, 2013.
Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neu-
ral networks for multivariate time series with missing values. arXiv preprint arXiv:1606.01865,
2016.
Edward Choi, Mohammad Taha Bahadori, and Jimeng Sun. Doctor AI: predicting clinical events
via recurrent neural networks. CoRR, abs/1511.05942, 2015. URL http://arxiv.org/
abs/1511.05942.
Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter
Stewart. Retain: An interpretable predictive model for healthcare using reverse time attention
mechanism. In Advances in Neural Information Processing Systems, pages 3504–3512, 2016.
Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio.
Attention-based models for speech recognition. In Advances in Neural Information Processing
Systems, pages 577–585, 2015.
Frederick D’Aragon, Emilie P Belley-Cote, Maureen O Meade, François Lauzier, Neill KJ Ad-
hikari, Matthias Briel, Manoj Lalu, Salmaan Kanji, Pierre Asfar, Alexis F Turgeon, et al. Blood
pressure targets for vasopressor therapy: A systematic review. Shock, 43(6):530–539, 2015.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer
features of a deep network. Technical report, University of Montreal, 2009.
Cristóbal Esteban, Oliver Staeck, Stephan Baier, Yinchong Yang, and Volker Tresp. Predicting
clinical events by combining static and dynamic information using recurrent neural networks. In
Healthcare Informatics (ICHI), 2016 IEEE International Conference on, pages 93–101. IEEE,
2016.
Narges Razavian, Jake Marcus, and David Sontag. Multi-task prediction of disease onsets from
longitudinal lab tests. In JMLR (Journal of Machine Learning Research): MLHC Conference
Proceedings, 2016.
Cátia M Salgado, Susana M Vieira, Luı́s F Mendonça, Stan Finkelstein, and João MC Sousa. En-
semble fuzzy models in personalized medicine: Application to vasopressors administration. En-
gineering Applications of Artificial Intelligence, 49:141–148, 2016.
Jean-Louis Vincent. Critical care-where have we been and where are we going? Critical Care, 17
(Suppl 1):S2, 2013.
Mike Wu, Marzyeh Ghassemi, Mengling Feng, Leo A Celi, Peter Szolovits, and Finale Doshi-Velez.
Understanding vasopressor intervention and weaning: Risk prediction in a public heterogeneous
clinical time series database. Journal of the American Medical Informatics Association, page
ocw138, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov,
Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation
with visual attention. In ICML, volume 14, pages 77–81, 2015.
Karl L Yang and Martin J Tobin. A prospective study of indexes predicting the outcome of trials of
weaning from mechanical ventilation. New England Journal of Medicine, 324(21):1445–1450, 1991.
Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. CoRR,
abs/1311.2901, 2013. URL http://arxiv.org/abs/1311.2901.
Appendix
Appendix A. Dataset Statistics
Table 3: Variables
Figure 7: Converting data from continuous timeseries format to discrete “physiological words.” The
numeric values are first z-scored and rounded, and then each z-score is made into its own category.
On the right, glucose -2 indicates the presence of a glucose value that was 2 standard deviations
below the mean. A row containing all zeros for a given variable indicates that the value for that
variable was missing at the timestep.
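A minimal sketch of this conversion is shown below; the clipping range and the exact word format are illustrative assumptions.

```python
import numpy as np

def to_physiological_word(name, value, mean, std, z_min=-4, z_max=4):
    """Map one raw measurement to a discrete 'physiological word'.

    Missing values return None (encoded downstream as an all-zero row);
    otherwise the z-score is rounded to the nearest integer, e.g. a glucose
    value two standard deviations below the mean becomes 'glucose_-2'.
    """
    if value is None or np.isnan(value):
        return None
    z = int(np.clip(round((value - mean) / std), z_min, z_max))
    return f"{name}_{z:+d}"
```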
$$L(\hat{y}_1 \ldots \hat{y}_N) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \log \hat{y}_{ij}$$

where $\hat{y}_{ij}$ is the probability our model predicts for example $i$ being in class $j$, and $y_{ij}$ is the true
value.
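For reference, this loss can be computed directly as in the short numpy sketch below; the variable names are illustrative.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over N examples and M classes (the loss above).

    y_true: (N, M) one-hot labels; y_pred: (N, M) predicted probabilities.
    """
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))
```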
Appendix D. Generated Topics
Table 5: Most probable words in the topics most important for intervention predictions.