Automated Early Detection of Obstetric Complications: Theoretical and Methodological Considerations
Gabriel J. ESCOBAR, MD1; Neeru R. GUPTA, MD2; Eileen M. WALSH, RN, MPH3;
Lauren SOLTESZ, BS1; Stephanie M. TERRY, MD4; Patricia KIPNIS, PhD1,5
The authors report no conflict of interest. This ongoing work is funded by The
Permanente Medical Group, Inc., and Kaiser Foundation Hospitals, Inc. The study was
conducted in Oakland, California.
5. Decision Support
Kaiser Foundation Hospitals, Inc.
1800 Harrison Avenue
Oakland, California 94112
Word count
Abstract: 184
Main text: 4361
© 2019 published by Elsevier. This manuscript is made available under the Elsevier user license
https://www.elsevier.com/open-access/userlicense/1.0/
Short title: Development of methods for obstetric automated early warning systems
AJOG at a Glance:
ABSTRACT
Compared with adults hospitalized in general medical-surgical wards, women admitted to labor and delivery services are at much lower risk of experiencing unexpected critical
illness. Nonetheless, critical illness and other complications that put either the mother or
fetus at risk do occur. One potential approach to prevention is to use automated early
warning systems such as those used for non-pregnant adults. Predictive models using
data extracted in real time from electronic records constitute the cornerstone of such
systems. This article addresses several issues involved in the development of such models, including selection of outcomes for model calibration, potential uses of existing adult severity of illness scores, and options for instantiation. These have not been explicitly addressed in the obstetrics
literature, which has focused on the use of manually assigned scores. In addition, this
article provides some results from work in progress to develop two obstetric predictive
models using data from 262,071 women admitted to a labor and delivery service at 15 Kaiser Permanente Northern California hospitals.
KEY WORDS:
Early warning system
Electronic medical record
Obstetrics
Predictive model
Severity of illness
INTRODUCTION
Most women in the labor and delivery (L&D) service are healthy. Compared to
adults in general medical-surgical wards, they are younger, and their mortality risk is
extremely low. However, maternal mortality persists in developed nations and is rising in the U.S.1 2 Further, obstetric complications are not rare, which has prompted
considerable interest in the use of obstetric early warning scores.3-6 Some institutions
have developed predictive models that can be used for automated early warning
systems (EWSs) for adults at risk of deterioration and/or cardiac arrest outside intensive
care units.7-10 The evidence base that EWSs of any kind actually improve outcomes is
limited – no study has shown conclusive outcomes improvements,9 10 and some have
shown no benefit11 12 – but work on these systems continues. The notion that automated EWSs based on electronic medical records (EMRs) could be used in L&D is appealing but has received little methodologic attention.
A number of early warning tools for obstetrics – the Maternal Early Warning
Criteria (MEWC), Modified Early Obstetric Warning System (MEOWS), and the
Maternal Early Warning Trigger (MEWT) – have been described in recent literature.3-6 These tools employ abnormal values of individual parameters (e.g., maternal temperature, oximetry, heart rate, respiratory rate, and mental status; abnormal fetal heart rate) as triggers for more focused evaluation. Cutoffs for the triggers were defined by expert opinion, as opposed to being calibrated against defined outcomes. Consequently, their statistical performance characteristics (e.g., sensitivity and false alarm rate for a given outcome) are not known, so it is not possible to address the problem of alert fatigue.
These triggers are designed for either manual use (which could be associated with
inconsistent documentation) or as simple electronic alarms (“issue alert if heart rate is <
50 or > 110”). Importantly, they do not include two indicators of the progress of labor,
cervical dilation and station of fetal descent. They cannot capture subtle relationships between variables, such as the dynamic relationship between heart rate and systolic blood pressure or that between cervical dilation and station of fetal descent. Finally, these tools do not capture differential risk due to specific maternal attributes (e.g., parity, body mass index) or comorbidities. In contrast, multivariable predictive models can incorporate information from static variables (e.g., diabetes, heart rate at a discrete point in time) as well as dynamic ones (e.g., rate of change of heart rate or cervical dilation). The weights such models assign to individual predictors are calibrated against defined outcomes, which permits setting alert thresholds and balancing available resources with the need to avoid alert fatigue. They
also provide health systems with multiple options for how to provide clinicians with
probability estimates, which can be displayed directly in the EMR, reported to trained
nurses at physical or virtual command centers, or sent using smart phone text alerts to
clinicians.
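To make the contrast between fixed-cutoff triggers and model-based probability estimates concrete, a minimal sketch follows; the variable names, cutoffs, and coefficients are invented for illustration and do not correspond to any published tool or to our models.

```python
from math import exp

def threshold_trigger(heart_rate: float) -> bool:
    """Fixed-cutoff alarm of the kind used by manually assigned scores."""
    return heart_rate < 50 or heart_rate > 110

def model_probability(heart_rate: float, systolic_bp: float, dilation_cm: float) -> float:
    """Toy logistic model combining several predictors; the coefficients are
    made up for illustration and carry no clinical meaning."""
    shock_index = heart_rate / max(systolic_bp, 1.0)  # interaction-style term
    logit = -6.0 + 2.5 * shock_index + 0.1 * dilation_cm
    return 1.0 / (1.0 + exp(-logit))

print(threshold_trigger(heart_rate=115))   # True (binary alarm)
print(model_probability(115.0, 95.0, 6.0)) # graded probability between 0 and 1
```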
Our team has developed three predictive models that generate probability estimates in real time for non-obstetric patients in Kaiser Permanente Northern California (KPNC), an integrated health care delivery system that owns 21 hospitals.
The first, known as Advanced Alert Monitor (AAM), in use in adult general medical-
surgical wards, provides a real time severity of illness score (LAPS2, Laboratory-based Acute Physiology Score, version 2),18 and a discrete probability estimate.15-18 This latter estimate,
now provided hourly, is the probability that a ward patient will deteriorate within the next
12 hours. At present, AAM is operational in all 21 KPNC hospitals. The second, also
operational in all KPNC hospitals, uses many of the same variables and provides a daily estimate of the risk of nonelective rehospitalization or death within a specified number of days from hospital discharge.19 The third, which provides a quantitative estimate of
neonatal early onset sepsis risk, is used in all KPNC nurseries and other health
systems.20-24 Currently, it requires some manual data entry, but work is in progress to
instantiate it in the Epic (www.epicsystems.com) EMR. A joint KPNC and Epic team has
recently instantiated the AAM and rehospitalization models into this commercial EMR as
well.
We are now working to develop similar predictive models for obstetric complications, including those associated with or that could lead to adverse
fetal and neonatal outcomes. If we are successful, our models would be embedded in
the EMR and provide obstetric teams with probabilistic alerts with sufficient lead time to permit intervention. This article is a methodologic reflection that aims to delineate key issues that must be addressed in the development of such models, including the temporal characteristics of the L&D setting, and the “nuts and bolts” of data processing and analytics. Because we are describing work in progress, model performance will be reported in future articles.
This work has been approved by the KPNC Institutional Board for the Protection
of Human Subjects, which has jurisdiction over all the hospitals described in this article.
The data on which some of the examples provided herein are based come from 262,071 hospital encounters that took place between 1/1/2010 and 3/31/2017 at the 15 KPNC hospitals included in this work.
Our goal is to develop EMR-based predictive models that could serve as core
components for EWSs that are integrated into clinician workflows in L&D and
postpartum wards. Such models should match the level of specification that has been
reported for AAM and eCART (Electronic Cardiac Arrest Triage).8 17 18 Given current
technology, it is highly desirable that the following be reported for obstetric predictive models: the size of the denominator (population at risk), size and rate of the numerator (outcome), discrimination (typically, the area under the receiver operating characteristic curve, or c statistic), calibration, and number needed to evaluate (NNE, or work-up to detection ratio).14 25-27 Ideally,
predictors used should be clearly described and justified on both statistical and biologic
grounds. Further, if specific patient subsets are excluded, the rationale for exclusion
should be made explicit. Finally, the process employed to validate the model(s) should
also be described.28
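As a brief illustration of several of these metrics, the sketch below computes threshold-specific sensitivity, positive predictive value, and NNE from hypothetical alert counts; the numbers are arbitrary and are not results from our work.

```python
def alert_metrics(true_positives: int, false_positives: int, false_negatives: int) -> dict:
    """Threshold-specific performance measures for an alert. NNE (number
    needed to evaluate, or work-up to detection ratio) is the number of
    patients flagged per true case detected, i.e., 1/PPV."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return {"sensitivity": sensitivity, "ppv": ppv, "nne": 1.0 / ppv}

# Hypothetical counts: 40 of 100 events detected, with 360 false alarms.
print(alert_metrics(true_positives=40, false_positives=360, false_negatives=60))
# {'sensitivity': 0.4, 'ppv': 0.1, 'nne': 10.0}
```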
Temporal characteristics
Several time frames must be specified explicitly when developing a predictive model. The T0 is the time when the model issues a probability
estimate that at least one undesirable event, X, will occur within some elapsed time,
which is the event (or look forward) time frame. The time between the T0 and X (lead
time) should be sufficient for clinicians to mount a response. All predictive models are
based on data available prior to the T0, which we refer to as the look back period.
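In rough pseudocode terms, a single prediction time can be represented as follows; the data layout and window lengths are placeholders chosen for illustration, not our models' actual specification.

```python
from datetime import datetime, timedelta

def label_prediction_time(t0, observations, event_times,
                          look_back=timedelta(hours=24),
                          look_forward=timedelta(hours=6)):
    """At a given T0, keep only data already recorded in the look back window,
    and label the row 1 if any event occurs in the look forward (event) time frame."""
    usable = [(t, name, value) for (t, name, value) in observations
              if t0 - look_back <= t <= t0]  # data available at T0
    outcome = any(t0 < e <= t0 + look_forward for e in event_times)
    return usable, int(outcome)

# An event 4 hours after T0 falls inside a 6 hour event time frame.
t0 = datetime(2019, 6, 27, 3, 15)
_, label = label_prediction_time(t0, [], [t0 + timedelta(hours=4)])
print(label)  # 1
```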
Compared with adult EWSs, obstetric models present unique challenges. The first challenge is that one
is not dealing with a single temporal frame: one must consider antepartum and
postpartum phases as both physiologic changes and presence of a fetus can alter
patient outcomes in different ways (Figure 1). Unlike scores such as AAM and eCART,
where the set of predictors can remain constant, the presence of two phases
complicates analyses: the information value of predictors such as dilation and station
changes dramatically after delivery. Second, time frames in L&D are often highly
compressed. The “look forward” time frames for AAM and eCART are 12 and 24 hours,
respectively, which makes sense in the context of adult medical-surgical wards where
average lengths of stay range from 3 to 5 days. Most L&D and postpartum stays are
shorter, and the antepartum phase may sometimes be extremely brief. Consequently,
we aim to achieve an event time frame of 6 hours. A third problem involves the “look
back” period. For a given physiologic parameter, all severity of illness scores currently
in use select the worst (most highly deranged) value in the “look back” time frame. This
is problematic because vital signs are often temporarily abnormal during the most
important time period in L&D – delivery. We expand on this problem in the section on adult severity of illness scores below.

Denominator (population at risk)

Our focus is the prevention of complications among women admitted to the L&D service using EWSs, rather than improving management of such complications in other settings, such as the ICU. Given this focus, the proper denominator should be
the largest one – all women admitted to L&D, independent of outcome or subsequent
disposition. Note that this denominator precludes inclusion of women who arrive for
triage, whether in the emergency department or some other location linked to L&D, but
who are not admitted to the service. We are not including women admitted to other
hospital units in our models because these women would be monitored by systems
such as AAM or eCART. Women who are discharged home from triage are not
necessarily at zero risk, and those with repeated visits are at higher risk of having a
newborn require respiratory support.29 However, monitoring women who are sent home
after triage is outside the scope of our current work. Similarly, while the study of how
scores may predict outcomes in pregnant women after they have become critically ill
and/or have been transferred to the ICU30 31 is of value to intensivists, it does not
address the needs of routine obstetrics. Table 1 describes cohort assembly for our models. Note that, although we are excluding women admitted to L&D after experiencing a fetal loss from our denominator for modeling purposes, such women would be monitored once an EWS is deployed.

Numerator (outcomes)

Any early warning system, whether based on a predictive model or not, should also clearly define its numerator(s). Unlike the situation in adult medical-surgical wards, this is difficult in obstetrics because two or more patients are involved – the laboring mother and her offspring. Ideally, obstetric teams should receive risk estimates for both
maternal and fetal harm. We have made an explicit decision to include three
fetal/neonatal outcomes in the numerator. The rationale for this is that some of these outcomes may be the only objective evidence of a maternal complication. For example, it is possible that subtle changes in vital signs and the interactions of
these with cervical dilation measurements could be associated with uterine rupture.
Since some cases of uterine rupture are only identified at the time of cesarean section
and documented subjectively in an operative note, the only indicator of such an event
might be an ill newborn (which can be identified objectively using cord gases). Thus, it
might be necessary to employ two or more predictive models. These could run
concurrently, and the fact that two models are in use need not be apparent to the end
user, who need only see a risk estimate in the EMR graphical user interface. The risk
estimate would be the probability that any adverse outcome would occur within a given event time frame.
Another problem that needs attention is how one defines and captures adverse
outcomes. In our work, we prefer to employ outcomes that can be defined objectively
and also be tightly linked to a discrete EMR date and time stamp. For example, while it
may be possible to link a vaginal laceration to the moment of birth, not all clinicians may
record that these occurred, and they may not be captured in real time by hospital
coders. Table 2 shows the outcomes we are employing for model calibration, relevant
time stamps, and preliminary incidence estimates. We have made an explicit decision to
develop two models (one for antepartum and one for postpartum events). Because of
sample size limitations, discussed below, we will be pooling events into composite outcomes. In the future, following approaches such as those described by Zuckerman et al. and Friedman et al., additional models for different outcomes using larger datasets could be developed.
Definition of a time stamp may be challenging for some outcomes. For example,
women in L&D seldom have spontaneous cardiac arrest – when asystole or ventricular
fibrillation occurs, it is usually a distal event, typically preceded by proximal events such
as hemorrhage or pulmonary embolus. Consequently, if one employs the time stamp for
cardiac arrest or death, one may not be properly calibrating the model. For this reason,
the records of all maternal deaths included in our model are being manually reviewed by clinicians to identify the proximal event (e.g., hemorrhage) that will be used to define the time stamp. Similarly, using transfer to the ICU as an outcome can be problematic, since some women are transferred preventively (e.g., for known cardiac disease or severe obstructive sleep apnea that could worsen with narcotics) or for reasons unrelated to the pregnancy (e.g., a malignancy discovered during pregnancy). Finally, assigning a time stamp for some outcomes may also be problematic because multiple time stamps exist. This is the case with severe preeclampsia, in which abnormal blood pressures and laboratory results may be documented at multiple points in time.
Predictors

The fact that outcomes can occur in the antepartum, postpartum, or both time
frames affects predictor selection. In theory, as has been suggested recently,32 one
could employ all available data in the EMR for prediction using machine learning and
neural networks. In practice, given limited computational resources and time, we will
start with biologically plausible predictors and employ machine learning and content
expertise to define the final set included in the model. The same constraints we have for
numerators also apply to predictors: they must be objective, and they must have a time
stamp. A specific outcome with a discrete time stamp can occur only once, but many
predictors vary across time. Table 3 shows the predictors we will be evaluating in both
of our models, with those that are associated with a single measurement (e.g., body
mass index) at the top, and those that repeat at the bottom. The variables listed in Table 3 will not be equally useful for all outcomes. Some outcomes – e.g., shoulder dystocia – can occur suddenly, without preceding vital sign or laboratory abnormalities, so severity scores may not be useful (or they may be useful only in some cases). Others, such as hemorrhage, are more likely to be preceded by detectable physiologic changes.
The predictor list shown in Table 3 is not exhaustive or definitive. In the course of
the predictive modeling process, many variables may be transformed and/or combined
with other variables (interaction terms). For example, one important variable for
predicting antepartum complications may be the change in cervical dilation over time.
The LAPS2 contains multiple interaction terms (e.g., lactate-pH, heart rate divided by
systolic blood pressure, blood urea nitrogen divided by creatinine), while the AAM
model also includes instability terms (change in vital signs over time).

Adult severity of illness scores

Adult severity of illness scores occupy a special place in the predictor space, as they can serve as both predictors as well as outcomes. These scores, which
include the APACHE (Acute Physiology and Chronic Health Evaluation) and SAPS
(Simplified Acute Physiology Score) are based on multiple predictors that may include
vital signs, pulse oximetry, mental or neurologic status measures, and laboratory test
results.33 The statistical weights assigned to specific abnormalities (e.g., the number of
points assigned for a heart rate of ≥ 120) are based on multivariate models rather than
the value of the predictor taken in isolation. A number of these scores exist, with most
having been calibrated for mortality in patients in the ICU, one of the first locations
where these physiologic variables became widely available electronically.33 Some have
been evaluated for possible use in obstetric patients already in the ICU.30 31 However, to
our knowledge, formal investigation of the use of automated ICU severity scores in
obstetrics patients outside the ICU has not yet been reported.
Using data from 391,584 KPNC adult hospitalizations, our team developed an
acute physiology score (LAPS2)18 34 calibrated for inpatient mortality among all
hospitalized non-laboring adults, not just those in intensive care. Although the LAPS2’s
statistical weights were derived using inpatient mortality, the score is not used to predict mortality per se; rather, it is a single number that synthesizes information from a patient’s vital signs, pulse oximetry,
neurological status, and 16 laboratory tests; it has a “look back” time frame of 72 hours.
An earlier version of this score, the LAPS, has also been externally validated in
Canada.35 The higher the LAPS2, the greater the degree of acute physiologic
derangement and the higher the mortality risk. Working with a team from Epic, our team
has recently validated a real time version of the LAPS2 that is embedded in the Epic
EMR. In KPNC, all adults in the hospital are now being assigned a LAPS2 score with a
“look back” time frame of 72 hours at the time of hospital admission, and updated hourly scores thereafter.
In theory, the LAPS2 (or scores like it, including eCART) could be employed “as
is” for use in an obstetric EWS. In practice, this may not be useful for two reasons. The
first is that vital signs and laboratory test results in laboring and postpartum women do
not have the same normal ranges as those of adults in general medical-surgical wards.
The second, related to the “look back” time frame and the tachycardia and tachypnea
seen around the time of delivery, was mentioned previously. Figure 2 shows how the
distribution of hourly LAPS2 scores can be affected by delivery and choice of “look
back” time frame. Our approach will be to evaluate 3 variants of the LAPS2 as potential
predictors: the original score (72 hour “look back” time frame), the LAPS2 OB24 (24
hour “look back” time frame), and the LAPS2 PP02. This latter score is a postpartum
score, available only after delivery, having these characteristics: the T0 begins
immediately after delivery, no vital signs are included from the time period up to and
including delivery, and (subject to the two preceding constraints) the “look back” period
for vital signs, neurological status, and pulse oximetry is set to 2 hours, with this time
frame set to 24 hours for laboratory tests. The rationale for testing the LAPS2 PP02 as
a potential predictor is that, by explicitly excluding antepartum vital signs (which may be
elevated and thus result in high scores) it may prove better at detecting postpartum
abnormalities. The potential value of combining severity scores with other physiologic
and process markers is shown in Figure 3. Note that use of these scores (which are
predictors) does not preclude concurrent use of individual vital signs and laboratory test results as predictors.
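A simplified sketch of this “look back” logic appears below; the function, window lengths, and example values are ours for illustration and only approximate the behavior of the LAPS2 variants described above.

```python
from datetime import datetime, timedelta

def worst_value(obs, t0, window, exclude_before=None):
    """Return the most deranged value (simplified here as the maximum) among
    observations in the look back window ending at T0, optionally dropping
    anything recorded at or before a cutoff such as the time of delivery."""
    eligible = [value for (t, value) in obs
                if t0 - window <= t <= t0
                and (exclude_before is None or t > exclude_before)]
    return max(eligible) if eligible else None

# Hypothetical heart rates: 128 at delivery, 92 thirty minutes later.
delivery = datetime(2019, 6, 27, 14, 0)
t0 = delivery + timedelta(hours=1, minutes=30)
heart_rates = [(delivery, 128), (delivery + timedelta(minutes=30), 92)]

# PP02-style window: short look back, delivery-time vital signs excluded.
print(worst_value(heart_rates, t0, timedelta(hours=2), exclude_before=delivery))  # 92
# OB24-style window: long look back, delivery vital signs included.
print(worst_value(heart_rates, t0, timedelta(hours=24)))                          # 128
```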
Adult severity scores can function as outcomes, not just predictors. For example,
during AAM’s development, our team found that adult ward patients with an admission
LAPS2 ≥ 110 are at much higher risk of requiring unplanned transfer to the ICU, and
analyses of real time hourly LAPS2 scores among non-laboring adults admitted to
KPNC ICUs have found that the median LAPS2 ranges between 111 and 120. Since it
is undesirable that patients reach a high degree of physiologic instability, and since it is
now possible for us to assign such scores electronically both retrospectively as well as
in real time, it stands to reason that one could pick some high LAPS2 value (e.g., 110 or
120) and treat it as an outcome. Alternatively, it is also possible to combine the LAPS2
with the time stamp for ICU transfer – this would permit an algorithmic approach to
distinguishing between preventive ICU admissions and ICU admissions “for cause.”
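As a purely illustrative sketch (the threshold below is a placeholder, not a validated cutoff), such an algorithmic classification might look like the following.

```python
def classify_icu_transfer(laps2_at_transfer: float, threshold: float = 110.0) -> str:
    """Label an ICU transfer as "for cause" when the severity score around the
    transfer time stamp meets a high-acuity threshold, and as "preventive"
    otherwise; the threshold of 110 is illustrative only."""
    return "for cause" if laps2_at_transfer >= threshold else "preventive"

print(classify_icu_transfer(125.0))  # for cause
print(classify_icu_transfer(45.0))   # preventive
```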
Electronic fetal monitoring

Extensive discussion of the role of electronic fetal monitoring (EFM) in obstetric patient safety is beyond this
paper’s scope, but we can make some methodologic observations. Currently, important
limitations exist with respect to being able to make accurate predictions using EFM.36-39
It is likely that novel approaches to the analysis of streaming data (e.g., as described by
Cahill40), including those employing machine learning, will eventually permit more
consistent use and interpretation of EFM data. From the perspective of our current
work, there are two important limitations of EFM. The first is that, in current obstetric
practice, not all women are monitored continuously, and interpretation of EFM tracings is inconsistent. Thus, the actual risk (prior probability of an adverse
event) at the time EFM is initiated is not known. Second, since we are attempting to
predict both maternal and fetal/neonatal outcomes, the fact that EFM does not predict maternal outcomes, provides little information for other antepartum outcomes (e.g., hemorrhage), and is of no use in the postpartum
period limits its utility for our current work. However, we suspect that, in the future, it
may be possible to combine data from EWSs such as the ones we are developing with streaming EFM data.
In this article we will not go into detail on how one actually conducts predictive
modeling once one has a properly assembled dataset, as this topic is covered
extensively in the statistical and machine learning literature. Instead, we will focus on
two critical topics that have not received attention in the obstetrics literature: structuring
predictor and outcomes data (data processing) and sample size considerations.
Data processing
Most predictive models currently in use in medicine are static – by this we mean
that one starts with a set of predictors defined at some discrete point in time (e.g.,
various risk factors such as serum triglycerides, age in years, and family history) which
one then employs to predict some outcome (e.g., risk of myocardial infarction within X
years). Static predictive models usually employ a simple “flat file” data structure, with
one row per observation, a simple yes/no outcome (0 for observations without the
outcome, 1 for those with the outcome), and individual predictors as columns. In our
case, a static prediction model would have 262,071 rows and (assuming 50 predictors
and a single outcome for each phase) a total of 53 columns: one for a study ID, one for each of the two outcomes, and 50 for the predictors. However, to generate predictions in real time for patients in the L&D and postpartum services, a very different data structure is required.
This data structure must take the dynamic nature of predictors as well as the existence
of two phases (antepartum and postpartum) into account. For example, if one uses
cervical dilation, it is important to keep in mind that a value of 7 centimeters has a very
different meaning if present for one hour than if present for 8 hours – put differently, the
data structure must take elapsed time and time to the outcome into account. Similarly, it
is critical that time to delivery (which will not be known in real time) or time from delivery
(which will be known) be considered in all analyses. This requires a very different data
structure, one that accounts for the changing value of information. For this approach,
the dataset can be structured to add a row each time a new piece of information arrives
(as is the case with eCART, which updates estimates every time a new laboratory test
or vital sign is entered) or (in the approach we are taking) a row is added for each
patient hour in the dataset. This results in much larger datasets. For example, in our
case, with a dataset of 262,071 hospital records with average hospital length of stay of
~ 63 hours, the dataset we have created consists of ~16.6 million (63 X 262,071) rows
and ~100 columns (one for the study ID, one each for the outcomes, and one for each of the predictors). A detailed description of how one structures data from actual obstetric records is beyond the scope of this article; however, these approaches to data processing are common in predictive modeling and are used not just for AAM and eCART, but also in models such as the one used by Simon et al. to predict suicide attempts.41
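The sketch below illustrates the patient-hour structure on a toy dataset; the column names and values are invented, and the actual dataset contains far more predictors and rows.

```python
import pandas as pd

# One row per hospital encounter (the static "flat file" layout).
encounters = pd.DataFrame({
    "study_id": [1, 2],
    "admit": pd.to_datetime(["2017-01-01 08:00", "2017-01-02 20:00"]),
    "los_hours": [4, 3],
    "event_time": [pd.Timestamp("2017-01-01 10:30"), pd.NaT],
})

# Expand to one row per patient hour and label each hour according to
# whether an event occurs within the next 6 hours (the event time frame).
rows = []
for enc in encounters.itertuples(index=False):
    for h in range(int(enc.los_hours)):
        t0 = enc.admit + pd.Timedelta(hours=h)
        within = (pd.notna(enc.event_time)
                  and t0 < enc.event_time <= t0 + pd.Timedelta(hours=6))
        rows.append({"study_id": enc.study_id, "t0": t0, "outcome_6h": int(within)})

hourly = pd.DataFrame(rows)
print(hourly)  # 7 patient-hour rows; predictors would be added as further columns
```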
Sample size considerations

For predictive models, the total number of patient records is not, by itself, what determines whether a dataset is sufficient – what are most critical are the number of outcomes and the proportion of
outcomes in the population. A general rule of thumb for regression models is that one
should have 10 outcomes for each predictor in the model.42-44 However, this rule
primarily applies to static regression models. The situation is different for models that
aim to predict in real time because the outcome rate falls dramatically when the event
time frame is extremely brief (e.g., 6 hours). These difficulties are exacerbated by the
“class imbalance” problem, which occurs when the number of non-events is much larger
than the number of events and usual modeling metrics have poor accuracy.45 46 For
example, when the outcome rate is very low, the c statistic is much less valuable; threshold-specific measures such as sensitivity, positive predictive value, and the NNE are more informative. A related issue is whether one is reporting rates based on a single time frame (e.g., the sensitivity when one only
uses the exact initial prediction time) or on multiple time frames (e.g., the sensitivity
when one uses all time periods following an initial prediction). Suppose a predictive
model with a 6 hour “look forward” time frame issues an alert at 3:15 AM on June 27,
2019 and an adverse outcome occurs at 11:50 AM on the following day. If one reports
on performance based on the exact prediction time, then the model failed; on the other
hand, if one reports on whether the model “ever” detected an outcome, then the model
succeeded.
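The accounting difference can be made concrete with a short sketch that reuses the hypothetical times from the example above.

```python
from datetime import datetime, timedelta

alert_time = datetime(2019, 6, 27, 3, 15)
event_time = datetime(2019, 6, 28, 11, 50)
window = timedelta(hours=6)

# Strict accounting: the alert counts only if the event falls inside the
# 6 hour window to which the alert referred.
detected_strict = alert_time < event_time <= alert_time + window

# Permissive ("ever detected") accounting: any alert preceding the event counts.
detected_ever = alert_time < event_time

print(detected_strict, detected_ever)  # False True
```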
Table 2 shows that event rates in obstetrics are very low, which would make
individual models for each outcome very difficult to develop. Because of this, we will
need to pool all study events into two global outcomes (antepartum and postpartum). It
is instructive to consider that, to develop the AAM model – which was statistically robust – our team used a derivation dataset drawn from 374,838 patients, with 19,153 (2.9%) of the episodes having at least one outcome.17
Thus, in this work, we must face the possibility that our models may not be successful,
or that they will be predictive but have extremely high NNEs, which would make them impractical to deploy.
INSTANTIATION
We touch on instantiation issues only briefly, as we and others have discussed them elsewhere9 10 15
and the topics would merit separate articles. Generally speaking, three possible
approaches exist for generating automated probability estimates from an EMR. In the
first, known as a “web service,” data (e.g., vital signs, indicators of the progress of labor)
are exported out of the EMR to an external application that applies the algorithm and
then “writes” the result in the EMR or in an external web page. In the second, a real time
EMR “mirror” or “shadow” server provides data for the algorithm (i.e., data that are a
100% match for the EMR but with a small delay, typically less than a few minutes);
algorithm output is then “written” to the EMR or in an external web page. The last
option, which is the one KPNC will be transitioning to for the above mentioned models,
is to have all algorithms run directly within the EMR (although it would seem, intuitively,
that this is the best option, in actual fact such an approach can – depending on the type
of EMR – cause significant transaction delays and slow down EMR function for other users). In addition to requiring significant investment by a hospital system’s information technology department, all three options would also
require changes in clinician work flow and formal approval by a hospital’s Executive
Committee.
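As a schematic illustration of the first (“web service”) option only, with a hypothetical endpoint name, payload, and scoring function, data extracted from the EMR could be posted to an external scoring application along these lines.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def score(payload: dict) -> float:
    """Placeholder for the predictive model; a real implementation would
    apply the fitted model to the posted predictors."""
    return 0.02

@app.route("/obstetric-ews/score", methods=["POST"])
def score_endpoint():
    # The EMR (or an interface engine) posts vital signs, labor progress
    # indicators, and other predictors; the returned probability is then
    # written back to the EMR or displayed on an external web page.
    data = request.get_json()
    return jsonify({"probability": score(data)})

# app.run() would start the service; in practice it would sit behind the
# hospital's information technology and security infrastructure.
```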
Another important instantiation issue is how to alert clinicians. When first piloted,
AAM probability estimates were displayed directly in the Epic EMR hospitalist and rapid
response team dashboard every 6 hours.14-17 Subsequently, when the decision was
made to deploy system wide with hourly data scans, KPNC clinical leaders elected to
stop displaying alerts directly in the EMR. Instead, a command center approach is now
in use: alerts are displayed in a separate website where trained nurses review scores
remotely and act as first responders. These trained nurses conduct a preliminary chart
review prior to notifying the rapid response team. They also serve as a buffer against
alert fatigue, in that they can “snooze” an alert while clinicians are responding.
IMPLEMENTATION
Instantiating an algorithm is not, by itself, sufficient to produce practice change – clinicians must use the new information in meaningful ways. A full description
of the challenges involved in this process is outside the scope of this paper.
However, we note that successful implementation requires more than just having clinician “buy in” – substantial organizational investment is also necessary,
as has been described in the obstetric literature4 and in the adult setting.16 One issue
that affects adult EWSs – the fact that many patients meeting the alert threshold may
not desire rescue because they are near the end of life14 47 – has not yet been
addressed in the obstetric literature. Given that maternal mortality is rare, this is
reasonable, but we do need to start considering the implications of EWSs that could trigger escalation of care that is not aligned with a patient’s wishes.
One important limitation of existing EMRs and predictive models is that it is not
always possible to pinpoint exactly what variables led to a probability estimate. The
major reason for this is that models may require the use of multiple interaction terms,
making it difficult to “tease out” the contribution of an individual variable. In the case of our models, this consideration applies to the interaction terms and severity scores described above.
CONCLUSIONS
Given the growing availability of comprehensive EMR data in an increasing number of integrated health care delivery systems, automated EWSs for
obstetrics are going to be developed. As the scientific community starts working on and
evaluating these systems, the issues raised in this paper will need further discussion. In
addition, novel collaborative structures may be needed for the development of predictive models in obstetrics.
ACKNOWLEDGMENTS
This research is being supported by The Permanente Medical Group, Inc., and Kaiser
Foundation Hospitals, Inc. We thank our executive sponsors, Nancy Goler, MD; Barbara
Crawford, MS, RN, NEA-BC; and Robin Betts, MBA-HM, RN, CPHQ, for securing
funding for this work and providing administrative assistance. We also wish to thank the
Division of Research Strategic Programming Group (Jamila Gul, Wei Tao, Mei Lee, and
Jonathan Lontok) for their assistance in developing the study datasets; Drs. Mara
Greenberg and Michael Kuzniewicz for methodological advice; and Drs. Stephen Parodi
and Tracy Flanagan for administrative support. Lastly, we thank Hamid Niki for his assistance with this project.
REFERENCES
1. MacDorman MF, Declercq E, Cabral H, et al. Is the United States Maternal Mortality Rate Increasing? Disentangling trends from measurement issues. Obstet Gynecol 2016;128(3):447.
2. Unicef. Trends in estimates of maternal mortality ratio (maternal deaths per 100,000 live
births) 1990-2015. February 2017 ed, 2017.
3. Isaacs RA, Wee MY, Bick DE, et al. A national survey of obstetric early warning systems in
the United Kingdom: five years on. Anaesthesia 2014;69(7):687-92. doi:
10.1111/anae.12708
4. Shields LE, Wiesner S, Klein C, et al. Use of Maternal Early Warning Trigger tool reduces
maternal morbidity. Am J Obstet Gynecol 2016;214(4):527 e1-6. doi:
10.1016/j.ajog.2016.01.154
5. Maternal early warning systems—Towards reducing preventable maternal mortality and
severe maternal morbidity through improved clinical surveillance and responsiveness.
Semin Perinatol; 2017. Elsevier.
6. Friedman AM, Campbell ML, Kline CR, et al. Implementing Obstetric Early Warning Systems.
AJP reports 2018;8(2):e79.
7. Rothman MJ, Rothman SI, Beals Jt. Development and validation of a continuous measure of
patient condition using the Electronic Medical Record. J Biomed Inform 2013;46(5):837-
48. doi: 10.1016/j.jbi.2013.06.011 [published Online First: 2013/07/09]
8. Churpek M, Yuen T, Winslow C, et al. Multicenter Development and Validation of a Risk
Stratification Tool for Ward Patients. Am J Respir Crit Care Med 2014;190(6):649-55.
[published Online First: August 4, 2014]
9. Kollef MH, Chen Y, Heard K, et al. A randomized trial of real-time automated clinical
deterioration alerts sent to a rapid response team. J Hosp Med 2014;9(7):424-9. doi:
10.1002/jhm.2193 [published Online First: 2014/04/08]
10. Evans RS, Kuttler KG, Simpson KJ, et al. Automated detection of physiologic deterioration
in hospitalized patients. Journal of the American Medical Informatics Association :
JAMIA 2015;22(2):350-60. doi: 10.1136/amiajnl-2014-002816 [published Online First:
2014/08/29]
11. Parshuram CS, Dryden-Palmer K, Farrell C, et al. Effect of a pediatric early warning system
on all-cause mortality in hospitalized pediatric patients: the EPOCH randomized clinical
trial. JAMA
12. Halpern NA. Early Warning Systems for Hospitalized Pediatric Patients. JAMA
13. Behling DJ, Renaud M. Development of an obstetric vital sign alert to improve outcomes in
acute care obstetrics. Nurs Womens Health 2015;19(2):128-41. doi: 10.1111/1751-
486X.12185
14. Escobar GJ, Dellinger RP. Early detection, prevention, and mitigation of critical illness
outside intensive care settings. J Hosp Med 2016;11 Suppl 1:S5-S10. doi:
10.1002/jhm.2653
15. Escobar GJ, Turk BJ, Ragins A, et al. Piloting electronic medical record-based early
detection of inpatient deterioration in community hospitals. J Hosp Med 2016;11 Suppl
1:S18-S24. doi: 10.1002/jhm.2652
16. Dummett BA, Adams C, Scruth E, et al. Incorporating an Early Detection System Into
Routine Clinical Practice in Two Community Hospitals. J Hosp Med 2016;11 Suppl
1:S25-S31. doi: 10.1002/jhm.2661
17. Kipnis P, Turk BJ, Wulf DA, et al. Development and Validation of an Electronic Medical
Record-Based Alert Score for Detection of Inpatient Deterioration Outside the Icu. J
Biomed Inform 2016 doi: 10.1016/j.jbi.2016.09.013
18. Escobar GJ, Gardner MN, Greene JD, et al. Risk-adjusting hospital mortality using a
comprehensive electronic record in an integrated health care delivery system. Med Care
2013;51(5):446-53. doi: 10.1097/MLR.0b013e3182881c8e [published Online First:
2013/04/13]
19. Escobar GJ, Ragins A, Scheirer P, et al. Nonelective Rehospitalizations and Postdischarge
Mortality: Predictive Models Suitable for Use in Real Time. Med Care 2015;53(11):916-
23. doi: 10.1097/MLR.0000000000000435 [published Online First: 2015/10/16]
20. Puopolo KM, Draper D, Wi S, et al. Estimating the probability of neonatal early-onset
infection on the basis of maternal risk factors. Pediatrics 2011;128(5):e1155-63. doi:
peds.2010-3464 [pii] 10.1542/peds.2010-3464 [published Online First: 2011/10/26]
21. Escobar GJ, Puopolo KM, Wi S, et al. Stratification of risk of early-onset sepsis in newborns
>/= 34 weeks' gestation. Pediatrics 2014;133(1):30-6. doi: 10.1542/peds.2013-1689
[published Online First: 2013/12/25]
22. Kuzniewicz MW, Walsh EM, Li S, et al. Development and Implementation of an Early-Onset
Sepsis Calculator to Guide Antibiotic Management in Late Preterm and Term Neonates.
Jt Comm J Qual Patient Saf 2016;42(5):232-9.
23. Kuzniewicz MW, Puopolo KM, Fischer A, et al. A Quantitative, Risk-Based Approach to the
Management of Neonatal Early-Onset Sepsis. JAMA Pediatr 2017;171(4):365-71. doi:
10.1001/jamapediatrics.2016.4678
24. Strunk T, Buchiboyina A, Sharp M, et al. Implementation of the Neonatal Sepsis Calculator
in an Australian Tertiary Perinatal Centre. Neonatology 2018;113(4):379-82.
25. Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction.
Circulation 2007;115(7):928-35.
26. Steyerberg EW. Clinical Prediction Models: A Practical Approach to Development,
Validation, and Updating New York, NY: Springer 2009.
27. Romero-Brufau S, Huddleston JM, Escobar GJ, et al. Why the C-statistic is not informative
to evaluate early warning scores and what metrics to use. Crit Care 2015;19:285. doi:
10.1186/s13054-015-0999-1 [published Online First: 2015/08/14]
28. Moons KG, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable
prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and
elaboration. Ann Intern Med 2015;162(1):W1-W73.
29. Escobar G, Folck B, Gardner M, et al. Looking for trouble in all the right places: the legal
implications associated with "electronic signatures" and high-risk clinical situation. AHRQ
Publication No 05-0021-3 2005;3; Implementation Issues:51-68.
30. Lapinsky SE, Hallett D, Collop N, et al. Evaluation of standard and modified severity of
illness scores in the obstetric patient. J Crit Care 2011;26(5):535 e1-7. doi:
10.1016/j.jcrc.2010.10.003
31. Paternina-Caicedo A, Miranda J, Bourjeily G, et al. Performance of the Obstetric Early
Warning Score in critically ill patients for the prediction of maternal death. Am J Obstet
Gynecol 2017;216(1):58 e1-58 e8. doi: 10.1016/j.ajog.2016.09.103
32. Rajkomar A, et al. Scalable and accurate deep learning for electronic health records. arXiv.org 2018;arXiv:1801.07860 [cs.CY]:1-26.
33. Vincent JL, Moreno R. Clinical review: scoring systems in the critically ill. Crit Care
2010;14(2):207. doi: cc8204 [pii] 10.1186/cc8204 [published Online First: 2010/04/16]
34. Escobar GJ, LaGuardia J, Turk BJ, et al. Early detection of impending physiologic
deterioration among patients who are not in intensive care: development of predictive
models using data from an automated electronic medical record. J Hosp Med
2012;7(5):388-95. doi: 10.1002/jhm.1929 [published Online First: 2012 ]
35. van Walraven C, Escobar GJ, Greene JD, et al. The Kaiser Permanente inpatient risk
adjustment methodology was valid in an external patient population. J Clin Epidemiol
2010;63(7):798-803.
36. Macones GA, Hankins GD, Spong CY, et al. The 2008 National Institute of Child Health and
Human Development workshop report on electronic fetal monitoring: update on
definitions, interpretation, and research guidelines. J Obstet Gynecol Neonatal Nurs
2008;37(5):510-15.
37. Elliott C, Warrick PA, Graham E, et al. Graded classification of fetal heart rate tracings:
association with neonatal metabolic acidosis and neurologic morbidity. Am J Obstet
Gynecol 2010;202(3):258. e1-58. e8.
38. Clark SL, Meyers JA, Frye DK, et al. Recognition and response to electronic fetal heart rate
patterns: impact on newborn outcomes and primary cesarean delivery rate in women
undergoing induction of labor. Am J Obstet Gynecol 2015;212(4):494. e1-94. e6.
39. Clark SL, Hamilton EF, Garite TJ, et al. The limits of electronic fetal heart rate monitoring in
the prevention of neonatal metabolic acidemia. Am J Obstet Gynecol 2017;216(2):163.
e1-63. e6.
40. Cahill AG, Tuuli MG, Stout MJ, et al. A prospective cohort study of fetal heart rate
monitoring: deceleration area is predictive of fetal acidemia. Am J Obstet Gynecol
2018;218(5):523. e1-23. e12.
41. Simon GE, et al. Predicting Suicide Attempts and Suicide Deaths Following Outpatient Visits Using Electronic Health Records. Am J Psychiatry 2018 doi:
10.1176/appi.ajp.2018.17101167
42. Wasson JH, Sox HC, Neff RK, et al. Clinical prediction rules. Applications and
methodological standards. N Engl J Med 1985;313(13):793-9.
43. Peduzzi P, Concato J, Feinstein AR, et al. Importance of events per independent variable in
proportional hazards regression analysis. II. Accuracy and precision of regression
estimates. J Clin Epidemiol 1995;48(12):1503-10. [published Online First: 1995/12/01]
44. Peduzzi P, Concato J, Kemper E, et al. A simulation study of the number of events per
variable in logistic regression analysis. J Clin Epidemiol 1996;49(12):1373-9.
45. Japkowicz N, Stephen S. The class imbalance problem: A systematic study. Intelligent Data Analysis 2002;6(5):429-49.
46. Galar M, Fernandez A, Barrenechea E, et al. A Review on Ensembles for the Class
Imbalance Problem: Bagging-, Boosting-, and Hybrid-Based Approaches. IEEE
Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
2012;42(4):463-84. [published Online First: July 2012]
47. Granich R, Sutton Z, Kim YS, et al. Early detection of critical illness outside the intensive
care unit: Clarifying treatment plans and honoring goals of care using a supportive care
team. J Hosp Med 2016;11 Suppl 1:S40-S47. doi: 10.1002/jhm.2660
TABLE 1: INCLUSION AND EXCLUSION CRITERIA FOR OBSTETRICS EARLY WARNING COHORT DATASET
INCLUSION CRITERIA
CRITERION DESCRIPTION AND RATIONALE
Admitted to L&D*, delivered Core denominator for “bread and butter” obstetrics. Note that “delivered” includes both live
births as well as fetal losses, since a woman who experienced a fetal loss may still have
adverse outcomes before and after delivery.
Gestational age ≥ 22 weeks Pathophysiology of miscarriages prior to this gestation is not well understood, and it is
unlikely that a physiology-based predictive model using currently available EMR data
elements can predict for such losses.
Arrived at L&D with fetal heart rate present Early warning system is calibrated for women who had a living fetus on arrival.
Admitted to L&D, record not located using hospital service code When constructing a dataset for predictive model development, some clerical errors may be present in initial data extract. Women with L&D records located via alternative linkage strategies (e.g., erroneously listed as being ward patients but maternal record was found by linkage to neonatal record) should be retained.
EXCLUSION CRITERIA
Not pregnant Not eligible for early warning system, even if record is listed (erroneously) as L&D
Not admitted to L&D Some women are initially admitted to other services (emergency department, ward). If
woman is transferred to L&D, L&D record is included in cohort, with T0 for admission being
the time stamp for admission to L&D or L&D triage room. Maternal data (e.g., vital signs,
laboratory test results) from these other services are included so long as they fall within “look
back” time frame of early warning system.
Intrauterine fetal demise preceding L&D admission Women who arrive to L&D service without a detectable fetal heart tone are not the primary focus of the early warning system.
Gestational age < 22 weeks, independent of outcome Pathophysiology of miscarriages prior to this gestation is not well understood, and it is unlikely that a physiology-based predictive model using currently available EMR data elements can predict for such losses. Early warning system is designed for women who have a delivery.
Unusual service pattern This category includes women who have other conditions precluding admission to L&D. For
example, women admitted to the ICU for pre-existing conditions (e.g., malignancy,
uncommon cardiac conditions). While these women clearly need special monitoring, their
data might distort a predictive model targeting “bread and butter” obstetrics. Records of
these women need manual review for determination of whether they should be included in
main study cohort.
* Labor & delivery service. For purposes of this table, the term “L&D” includes any designated hospital antepartum or
postpartum unit.
TABLE 2: ANTEPARTUM (AP) AND POSTPARTUM (PP) OUTCOMES FOR OBSTETRIC EARLY WARNING SYSTEM
OUTCOME DESCRIPTION & HOW CONFIRMED IN ELECTRONIC MEDICAL RECORD (EMR) FREQUENCY (N and rate per 1,000 deliveries)
Fetal death (AP) Only includes fetal deaths that occurred after a woman was admitted to labor and delivery service. Time stamp is obtained from nursing flow sheet: if fetal heart rate of 0 is documented, that is used; otherwise time stamp is time of delivery. Frequency: 55, 0.21
Hypoxic-ischemic encephalopathy (HIE) (AP) Outcome is ascertained based on research registry that captures medical record numbers of all newborns eligible for head cooling protocol. Time stamp used is time of birth. Frequency: 315, 1.20
Neonatal acidosis (AP) Defined as any blood gas (cord or infant) with a base deficit of -12 or more in the initial hour of life, and either of (a) intensive care nursery admission for 24 hours or more or (b) neonatal disposition of transport or death. Overlap exists with hypoxic-ischemic encephalopathy. Relevant blood gases are obtained from laboratory database. The time stamp used is time of birth. Note that the newborn outcome is assigned to the mother. Frequency: 904, 3.45
Eclampsia (AP, PP) 100% manual ascertainment (all records with International Classification of Diseases code are manually reviewed); time stamp assigned based on 1st documented seizure in seizure flowsheet. Frequency: AP: 1, 0.05; PP: 5, 0.02
Severe preeclampsia (AP, PP) Defined as meeting criteria for preeclampsia and relevant biochemical abnormalities (e.g., elevated liver function tests) and ever had LAPS2* ≥ 60 (antepartum) or 80 (postpartum). Time stamp is time when severity threshold was reached. Frequency: AP: 1134, 4.33; PP: 107, 0.41
Hemorrhage (AP, PP) Defined as any one of these: (a) patient was ever transfused with ≥ 4 units packed red blood cells; (b) patient was transfused with 1-3 units packed red blood cells and had a documented hematocrit < 18%; and (c) patient had documented estimated blood loss > 1500 mL and hematocrit < 18%. Time stamp is the earliest of one of these: (a) transfusion time, (b) hematocrit time, (c) estimated blood loss time, and (d) time when patient had a LAPS2 score ≥ 60 (for antepartum hemorrhage) or 80 (for postpartum hemorrhage). Frequency: AP: 83, 0.32; PP: 844, 3.22
Emboli (AP, PP) 100% manual ascertainment (all records with relevant ICD code are manually reviewed; only those that were not pre-existing and that have evidence of new anticoagulation treatment in EMR medication administration record are retained). Includes pulmonary, air, fat, and amniotic fluid emboli. Time stamp is defined as either (a) the first time patient reached a LAPS2 score ≥ 70 (AP) or 90 (PP); or (b) the time of the highest score in the AP or PP phase. Other deep venous thromboses are not included, as they are not amenable to detection by vital signs-based early warning system. Frequency: AP: 11, 0.04; PP: 22, 0.08
Transfer to intensive care (AP, PP) Ascertained from bed history. Patients admitted to intensive care preventively will not be considered to have had this outcome, nor will patients whose admission was due to non-obstetrics issues (e.g., surgery for colon cancer, severe influenza). Exclusion may be algorithmic (i.e., patients admitted to intensive care with low severity of illness may be considered to have been preventive admissions). Some patients with unusual clinical conditions may need to be excluded from denominator altogether. Frequency: AP: 42, 0.16; PP: 595, 2.27
Major deterioration without transfer to intensive care (AP, PP) Assigned algorithmically based on presence of very high LAPS2 score (≥ 110 AP, 120 PP), with time stamp when patient first reached a score of 90 (AP) or 100 (PP). Outcome is intended to capture major physiologic derangement not captured by the other outcomes listed above. Frequency: AP: 29, 0.11; PP: 254, 0.97
Uterine rupture (AP) Ascertained from ICD codes. If patient had elevated LAPS2 score prior to delivery, rupture time stamp is time of delivery minus 20 minutes. If no elevated score, then use time of delivery as time stamp. Frequency: 249, 0.95
Maternal death (PP) After decedents identified in EMR, records manually reviewed by expert panel to ascertain time of proximal event (e.g., hemorrhage, embolus), as distal event (e.g., cardiac arrest) may be too late for use in early warning system. In some cases, distal event may have occurred after patient discharged home. The time stamp of the proximal event will be used for modeling. Frequency: 16, 0.06
* LAPS2: Laboratory-based Acute Physiology Score, version 2. See text and Escobar et al. (2013) for details.
TABLE 3: PREDICTORS TO BE EVALUATED FOR OBSTETRIC EARLY WARNING SYSTEM MODELS
Maternal age in years Captured from demographic databases. Extremes are known to be associated with
increased rates of adverse outcomes.
Gestational age in weeks Located by electronic scanning; if missing, calculated algorithmically based on data that
would be available in real time.
Multiple gestation Identified from the presence of fetal heart tones for >1 infant during the delivery encounter.
COmorbidity Point Score, version 2 (COPS2) 12-month longitudinal open source comorbidity score calculated based on Centers for Medicare and Medicaid Services Hierarchical Condition Categories; the higher the COPS2, the greater the pre-existing comorbid illness burden. See Escobar et al. (2013) for details. Score can be calculated in real time.
Gestational diabetes Located by electronic scanning for relevant ICD codes and laboratory tests (hemoglobin A1c,
glucose tolerance tests).
Diabetes Located by electronic scanning for relevant ICD codes and laboratory tests (hemoglobin A1c,
glucose tolerance tests).
Time of rupture of membranes Discrete time stamp exists in EMR. If missing, default to time of delivery.
Amniotic fluid characteristics Located by electronic scanning; defined algorithmically (all entries that do not state fluid as
clear are bucketed as “not clear”; missing defaults to “clear”).
Fetal scalp electrode, intrauterine fetal monitoring Discrete EMR time stamp for placement exists. See text for discussion on fetal heart rate monitoring.
Body mass index Discrete field exists in EMR; if not available there, outpatient record up to 30 days prior to
arrival to labor and delivery used; otherwise default to normal.
History of emboli Electronic scanning for International Classification of Diseases codes and evidence of
anticoagulation treatment
Individual maternal vital signs Temperature, heart rate, respiratory rate, systolic blood pressure, diastolic blood pressure;
all are found in nursing flowsheets
Neurological status Determined algorithmically based on free text entries from nursing flowsheets; see Escobar
et al. 2012 and 2013 for details.
Laboratory-based Acute Physiology Score, version 2 (LAPS2) (3 variants) The LAPS2 includes neurological status, pulse oximetry, and all vital signs; it also includes 16 laboratory tests. See text and Escobar et al. (2012 and 2013) for additional details.
Individual laboratory tests All individual laboratory tests used in LAPS2 severity score, plus: magnesium, AST, ALT,
LDH, uric acid, urine creatinine, urine protein, urine protein creatinine ratio; hemoglobin,
hematocrit, platelet count. All are found in EMR or laboratory database, with discrete time
stamps.
FIGURE LEGENDS
FIGURE 1. Time frames for an obstetric early warning system. The T0 is the time when the system issues a probability estimate that an undesirable event, X (which must be defined explicitly) will occur within some elapsed time (EVENT
TIME FRAME). This time should be sufficient to provide ample Lead Time for an
adequate clinical response. Note that the time when X actually occurred (XACT) is not
the same as when X was documented in the electronic record (XDOC). Data used to
populate the early warning system not only must be from time preceding the T0 (LOOK
BACK TIME FRAME) but should actually be available at the T0 – data might not be
available due to charting delays. Figure also shows that the event time frame can
precede (A), coincide with (B), or come after (C) the time of delivery. Consequently, antepartum and postpartum events may need to be modeled separately.
FIGURE 2. Distribution of hourly LAPS2 scores among selected healthy delivery encounters. Upper line shows the LAPS2 OB24, which has a
“look back” time frame of 24 hours, bottom line shows the LAPS2 PP02, which is only
assigned after delivery and which selects the worst laboratory test results from the
preceding 24 hours but (a) does not include any vital signs from the time of delivery or
earlier, and (b) employs a very restricted – 2 hours – look back time frame for vital
signs. Figure highlights the importance of the “look back” time frame – the distribution of
LAPS2 OB24 scores (which include vital signs during the delivery process) is much
higher than that of the LAPS2 PP02, which explicitly excludes delivery vital signs, and
could offer some mathematical advantages for detection of postpartum outcomes. Note
that, due to large sample size, confidence intervals are very narrow. See text and Escobar et al. (2013) for details on the LAPS2.
FIGURE 3. Time to electronic detection of postpartum hemorrhage among 665 women who experienced this outcome within 24 hours after delivery. Each
line shows the proportion of women remaining undetected after delivery based on the
time stamp of a given electronic marker: time of first LAPS2 OB24 ≥ 80 (yellow line);
hematocrit < 18% (blue line); time of first entry for estimated blood loss > 1500 mL
(green line); time of first transfusion order (black line); and the earliest of any of these
(red line). The LAPS2 employed (LAPS2 OB24) used a 24 hour look back time frame.
See text and Escobar et al. (2013) for details on the LAPS2.