Towards Fully Automated Post-Event Data Collection and Analysis: Pre-Event and Post-Event Information Fusion
Engineering Structures
Keywords: Artificial intelligence; Data-driven decision making; Post-event reconnaissance; Resilience; Convolutional neural networks; Machine learning; Bayesian information fusion; Automated data analysis

Abstract: In post-event reconnaissance missions, engineers and researchers collect perishable information about damaged buildings in the affected geographical region to learn from the consequences of the event. A typical post-event reconnaissance mission is conducted by first doing a preliminary survey, followed by a detailed survey. The objective of the preliminary survey is to develop an understanding of the overall situation in the field, and to use that information to plan the detailed survey. The preliminary survey is typically conducted by driving slowly along a pre-determined route, observing the damage, and noting where further detailed data should be collected. This involves several manual, time-consuming steps that can be accelerated by exploiting recent advances in computer vision and artificial intelligence. The objective of this work is to develop and validate an automated technique to support post-event reconnaissance teams in the rapid collection of reliable and sufficiently comprehensive data for planning the detailed survey. The focus here is on residential buildings. The technique incorporates several methods designed to automate the process of categorizing buildings based on their key physical attributes, and rapidly assessing their post-event structural condition. It is divided into pre-event and post-event streams, each intended to first extract all possible information about the target buildings using both pre-event and post-event images. Algorithms based on convolutional neural networks (CNNs) are implemented for scene (image) classification. A probabilistic approach is developed to fuse the results obtained from analyzing several images to yield a robust decision regarding the attributes and condition of a target building. We validate the technique using post-event images captured during reconnaissance missions that took place after hurricanes Harvey and Irma. The validation data were collected by a structural wind and coastal engineering reconnaissance team, the National Science Foundation (NSF) funded Structural Extreme Events Reconnaissance (StEER) Network.
mobile applications, cyberinfrastructure, and research opportunities for these reconnaissance teams to leverage. One of the key partners leading the structural engineering data collection efforts is the Structural Extreme Events Reconnaissance (StEER) Network [25]. In addition, the Earthquake Engineering Research Institute (EERI) also initiated the Virtual Earthquake Reconnaissance Team (VERT), which aims to engage young engineers and graduate students in post-disaster reconnaissance [28].

The data collection platforms that support these efforts, including drones and satellites, have advanced rapidly in recent years. However, many of the steps involved in the organization and analysis of the complex and unstructured data collected during post-event reconnaissance missions are still predominantly manual and quite time-consuming. Furthermore, the research needed to accelerate, and even automate, the analysis of these data has not kept pace with the enormous investment directed toward the collection of these data. Automating some of the procedures associated with building damage surveys will enable reconnaissance teams to more rapidly gather and analyze these large volumes of perishable information. Recent demonstrations of automation include scene recognition and object detection with large volumes of images collected after an event by exploiting new developments in convolutional neural networks (CNNs) [2,11,29,30]. These techniques, which fall into the broad category of artificial intelligence, are gaining traction. However, there are still significant challenges associated with real-world application of these methods, mainly revolving around both the need to acquire sufficient quantities of ground truth data and the potential to inadvertently introduce bias into the training process [10].

Here we develop an end-to-end technique for automating several steps in the analysis and decisions associated with post-event damage survey data. Post-event surveys can be broken down into a preliminary survey, sometimes called a "windshield survey," followed by a detailed survey [5]. The preliminary survey is conducted to collect initial data to gain a perspective about the overall situation in the field. These initial data are then used to make decisions regarding what further data must be collected during the detailed survey. To conduct the preliminary survey, field engineers usually drive slowly along the streets in the affected region to observe the extent of the damage. This typically takes place within a few days of the event. These coarse data might be augmented by occasionally getting out of the vehicle to take photos or perhaps to get a closer look at debris or specific buildings. The preliminary survey is conducted to provide evidence that is used to plan an efficient detailed survey. During the detailed survey, several small teams of engineers and architects, the data collectors, are dispatched to the region to visit specific buildings and collect much more detailed information about their condition [8,12,25]. Typically, the detailed survey involves collecting these data by walking around each building, or even entering the building if permitted to do so. Many of these teams intend to capture data that may motivate new lines of scientific inquiry related to the performance of our infrastructure.

Within our procedure we also leverage relatively new vision sensors, such as spherical cameras that can be mounted on street view cars, which have the mobility to rapidly collect a large volume of entire-view, high-resolution images in a short period of time [1]. To support many other needs in the commercial sector, regularly-updated images of buildings' facades are captured and stored through street view services. These images may be critical for damage surveys, as after an event a building may be so severely damaged that its original attributes may not be decipherable. An automated technique has been developed to extract high-quality pre-event images from several viewpoints using only a single geo-tagged image or its GPS data [17]. Additionally, after the event, images may be similarly collected with spherical cameras to quickly record the external appearance of buildings and support visual assessment [17,31]. The integration of these readily available data, efficient and automated analytics capabilities, and processing power can greatly improve the efficiency of reconnaissance missions.

The objective of this research is to develop and validate an automated technique to process post-event reconnaissance image data and output the relevant attributes and overall damage condition of each building. Using only the visual content in the images, the technique is intended to directly support engineers and architects mainly during the preliminary survey phase of a reconnaissance mission. Automation is applied to extract the relevant information typically collected during such missions, making it readily available to the human engineers and architects who must act upon that information. We first develop an appropriate classification schema for this application and establish the ability to categorize buildings based on their key physical attributes using pre-event data. CNNs are utilized for scene (image) classification to categorize the target building, shown in a set of images, based on its structural attributes and post-event condition. Next, post-event data are similarly used to rapidly determine each building's post-event condition. In each case, by appropriately fusing the information extracted from multiple images, we make robust determinations regarding the categorization of each building.

The information fusion process developed and integrated into the technique considers the quality and completeness of the data collected. We validate the technique using post-event images of residential buildings captured during hurricane Harvey and Irma reconnaissance missions collected by the NSF-funded StEER Network [24,25]. We evaluate the performance of the technique by comparing our results to the documentation collected during the mission, as recorded through the Fulcrum app [23], and we discuss the need for greater volumes of data to be collected in future missions.

The remainder of this paper is organized as follows: Section 2 provides the problem formulation. Section 3 provides a demonstration and validation of its effectiveness. The conclusions are discussed in Section 4.

2. Technical approach

A general diagram of the technique developed is shown in Fig. 1. The input is a collection of geo-tagged, post-event images of the residential buildings in a region. The output is the information needed for an assessment of each residential building, including automatically generated physical and structural attributes plus post-event condition information. Certain necessary physical and structural attributes are best obtained from the pre-event condition, so multiple pre-event images are automatically extracted from existing street view databases. Post-event building condition information is obtained directly from the post-event images.

[Fig. 1: overall diagram of the technique, showing the pre-event data analysis stream (steps A1 and A2) from input to output.]

The technique is implemented through two branches of data analysis, conducted independently. We call these two branches the post-event data analysis stream and the pre-event data analysis stream. The post-event stream assesses the overall damage condition of the building after the event based on the images collected during the preliminary survey. The pre-event stream extracts building physical attributes to be used for the preliminary screening, as well as several pre-event views of the building from various perspectives. These two sets of complementary information are organized in a way that assists the decision-making process of human inspectors regarding where to focus resources during a detailed survey. For clarity, we design a classification schema specific to post-event preliminary surveys. The schema can be easily extended to support other applications. In the subsequent paragraphs, we discuss the process used to develop each data analysis stream. The detailed definitions for the classification schema are provided in Section 2.1.
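To make the organization of these two sets of complementary information concrete, the sketch below shows one possible per-building output record. The field names and structure are illustrative assumptions, not the schema used in this work.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class BuildingRecord:
    """Illustrative output record for one residential building (hypothetical field names)."""
    building_id: str
    latitude: float
    longitude: float
    # pre-event stream: fused class probabilities per physical attribute,
    # e.g. {"elevation": {"EL": 0.8, "NEL": 0.2}, "stories": {"1S": 0.9, "2S": 0.1}}
    attributes: Dict[str, Dict[str, float]] = field(default_factory=dict)
    # post-event stream: fused probability of major damage and the final decision
    p_major_damage: Optional[float] = None
    condition: Optional[str] = None          # "MD", "NMD", or "ND" (not decided)
    # post-event images retained by the Overview classifier, for the inspector to review
    overview_images: List[str] = field(default_factory=list)
```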
The post-event data analysis stream requires the design and training of two image classifiers, which are implemented sequentially. The first classifier is intended to retain only the images that contain useful information about the condition of the building, step B1. The best images for detecting the overall condition of the building for hurricane assessment are images that provide a view of the entire building. However, the data collected for a given target building may include close images of components or details, or even irrelevant images (e.g., cars, trees, windows, doors, etc.). Including these in the dataset to be automatically analyzed may bias the results or increase the processing time. The filtered data are passed to the next classifier, which is trained to detect the overall condition of the structure, step B2, see Section 2.1.1.

The pre-event data analysis stream automatically detects certain physical attributes of each building that are useful in a preliminary post-event survey using image classification. Since post-event images of buildings that have experienced severe damage cannot reliably be used to determine the original physical attributes, it is more appropriate to use pre-event images for this purpose. To this end, we developed a fully automated technique to extract pre-event images from street view imagery services, step A1. These pre-event images, along with the ground truth labels provided by the field engineers [24], are used to design and train a set of image classifiers that can detect certain physical attributes, explained in Section 2.1.2, step A2.

In some cases, reliable determination of a physical attribute or even the condition of the building requires that classification results from several images containing multiple views of the building be used. For instance, if several post-event images are collected from a building, and only one of those images provides a view of the damaged region, the classifier will only detect damage in that one specific image; the specific image containing the damage cannot be known in advance. Therefore, the relevant images available must be used collectively to make a determination. We have developed an approach to fuse the information from several images to make such decisions. The problem formulation is provided in Section 2.2 and the demonstration is included in Section 3.

2.1. Design of the classification schema

The classification schema designed to support preliminary hurricane surveys is shown in Fig. 2 (the abbreviations are defined later). Classifiers are much more effective when clear boundaries exist to distinguish the visual features of the images in different classes. This is especially true to achieve robust classification in the real world when using such unstructured and complex data, as is often the case in reconnaissance datasets. Thus, a clear definition for each class is needed to establish consistent ground-truth data that are suitable for training. The definitions for those comprising the post-event and pre-event streams are discussed in the following sections.

2.1.1. Classifiers used in the post-event stream

The procedure used in the post-event data analysis stream is shown in Fig. 3. Two classifiers are used for classification of the post-event data, one to filter out less valuable images from the larger set, and a second to determine the condition of the building. These are applied to the dataset sequentially, as shown in Fig. 2b.

[Fig. 3: post-event data analysis stream — reading the post-event geo-tagged raw images; filtering OV images (Overview classifier), step B1; quantifying the MD probability of each image (Damage classifier), step B2; fusing the information from all images to quantify the MD probability of the target building; making the final decision (MD, NMD, or ND) for the target building.]
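A minimal sketch of this two-stage sequence, written with Keras [4], is shown below. The model file names, the input size, and the 0.5 probability thresholds are assumptions made only for illustration; they are not values reported in this work.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.inception_v3 import preprocess_input

overview_model = tf.keras.models.load_model("overview_classifier.h5")  # hypothetical trained model (step B1)
damage_model = tf.keras.models.load_model("damage_classifier.h5")      # hypothetical trained model (step B2)

def load_batch(paths, size=(299, 299)):
    imgs = [img_to_array(load_img(p, target_size=size)) for p in paths]
    return preprocess_input(np.stack(imgs))

def post_event_stream(image_paths):
    """Step B1: keep OV images; step B2: MD probability for each retained image."""
    x = load_batch(image_paths)
    p_ov = overview_model.predict(x).ravel()        # probability that each image is an OV image
    keep = p_ov >= 0.5
    ov_paths = [p for p, k in zip(image_paths, keep) if k]
    p_md = damage_model.predict(x[keep]).ravel() if keep.any() else np.array([])
    return ov_paths, p_md   # the per-image MD probabilities are fused into one building-level value in Section 2.2
```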
The first classifier needed for post-event data analysis is called the Overview classifier. This is a binary classifier that flags images that show a sufficient view of the building. Each post-event image is classified as either "Overview" or "Non-Overview," as indicated in Step 2A. The Overview classifier is defined as:

• Overview (hereafter, OV): Images classified as OV show the entire building, irrespective of whether it is damaged or not, in the sense that they contain more than 70% of the facade (with either a front view or a side view), and they include a portion of the roof. To include the possibility of severe damage, an image with some standing columns, or a pile of debris which can clearly be identified as a collapsed building, is also classified as OV. Examples of the latter include images of the general overall view of standing structural members or a collapsed roof. An additional restriction of OV images is that no more than 20% of the image area shows the surrounding buildings. In some cases, partial obstruction by trees, cars, and other buildings is an inevitable challenge. However, if the obstruction hides less than 30% of the building facade, we still consider the image as an OV.
• Non-overview (hereafter, NOV): Images that are not OV are NOV. Examples of NOV include images of the interior of the building, measurements, GPS devices, drawings, multiple buildings, and building facades occluded by trees, cars, or other buildings.

Samples of images defined as OV and NOV are shown in Fig. 4a and b, respectively.

Next, as shown in Fig. 3, the subset of images classified as OV are analyzed collectively to determine the overall building condition. A classifier is trained to determine whether a single OV image should be labeled as "Major damage" or "Non-major damage," which includes both minor and no damage. We call this binary classifier the Damage classifier. Note that a single image is not sufficient to characterize a building, as it may be showing a side from which damage is not visible. Therefore, after classifying the damage in each OV image of a given building, the overall condition must be decided by fusing all available information (this will be discussed in Section 2.2). The Damage classifier is defined as:

• Major damage (hereafter, MD): Images classified as MD contain visual evidence of severe damage by wind, wind-driven rain, or flood. MD includes failure of structural/non-structural components, such as roof collapse, a broken column or wall, or damage to an exterior wall or door [19]. In cases where there is visual evidence of severe water intrusion/damage, we also classify the image as MD. Considerable damage to the roof, exterior doors, windows, or garage doors, either from flooding or water intrusion in the case of a hurricane, is interpreted as major damage.
• Non-major damage (hereafter, NMD): Images that are not MD are NMD. No damage, or minor damage, such as cracked, curling, lifted, or missing shingles, missing flashing, or dents on the doors, is considered as NMD.

Samples of images defined as MD and NMD are shown in Fig. 5a and b, respectively.

Fig. 5. Samples of images classified as major damage (MD) and non-major damage (NMD).
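The training details of the Damage classifier are not part of this excerpt. The sketch below is therefore only a generic transfer-learning setup of the kind the cited tools support — an ImageNet-pretrained Inception-v3 backbone [27] fine-tuned with Keras [4] — with a hypothetical directory layout data/damage/{MD,NMD}/ and assumed hyperparameters.

```python
import tensorflow as tf

# hypothetical directory layout: data/damage/MD/*.jpg and data/damage/NMD/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/damage", labels="inferred", label_mode="binary",
    image_size=(299, 299), batch_size=32)

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # first train only the new classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),   # scale pixels to [-1, 1] as Inception expects
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),         # outputs p(MD | image)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```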
2.1.2. Classifiers used in the pre-event stream

The sequence of steps used to perform the pre-event data analysis stream is shown in Fig. 6. In the pre-event stream, multiple external views of each building, collected before the event, are required. We employ an automated method we previously developed to extract suitable pre-event residential building images from typical street view panoramas [17,31].

[Fig. 6: pre-event data analysis stream — reading the GPS location of the geo-tagged image; preparation of the multiple views of pre-event images of the target building; quantifying the attribute probability of each image (image classifiers); fusing the information from all images to quantify each attribute probability of the target building; making the final decision for each attribute of the target building.]
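Following the steps in Fig. 6, the sketch below applies a set of trained attribute classifiers (the three introduced in the following paragraphs) to every extracted pre-event view of a building, keeping the per-image probabilities for the fusion step of Section 2.2. The model file names and the input size are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.applications.inception_v3 import preprocess_input

attribute_models = {                                                      # hypothetical trained models
    "elevation": tf.keras.models.load_model("elevation_classifier.h5"),   # EL vs NEL
    "stories": tf.keras.models.load_model("stories_classifier.h5"),       # 1S vs 2S
    "material": tf.keras.models.load_model("material_classifier.h5"),     # construction material
}

def pre_event_stream(view_paths):
    """Per attribute, return an (n_views,) array of per-image class probabilities."""
    x = preprocess_input(np.stack(
        [img_to_array(load_img(p, target_size=(299, 299))) for p in view_paths]))
    # each per-image probability feeds the information fusion of Section 2.2
    return {name: model.predict(x).ravel() for name, model in attribute_models.items()}
```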
We design three independent classifiers, shown in Fig. 2a, to label the scenes containing each view of the pre-event target building. These classifiers detect: first floor elevation, number of stories, and construction material. To successfully train the classifiers to detect building attributes, we need a clear definition of each class. In what follows, we describe these definitions in detail.

One important physical attribute of a residential building is first floor elevation, which is defined as the elevation of the top of the lowest finished floor, which must be an enclosed area, of a building. We train a classifier to determine whether a single building image should be classified as "Elevated" or "Non-elevated". The Elevation classifier is defined as:

• Elevated (hereafter, EL): This class includes buildings with a first floor that appears to be elevated more than 5 feet (or half a story). Buildings are considered as EL when their ground floor, below the first finished floor, is not covered by walls or cladding and is thus visually distinguishable from an occupied floor. The lack of coverings or walls is present to potentially allow water to pass through in case of flood to reduce hydrodynamic impact loads. In a typical elevated building, the first floor only contains supporting columns (sometimes referred to as stilts), which are visually identifiable in the images. Fig. 7a shows samples of EL images.
• Non-elevated (hereafter, NEL): This class has the opposite meaning from the Elevated class. It includes images of buildings without first floor elevation, or with a first floor elevation of less than 5 feet. Any images of buildings with a first floor that is covered by walls or cladding are classified as NEL. Fig. 7b shows samples of NEL images.

Another useful physical attribute is the number of stories. Because we focus on residential buildings here, the vast majority of the images will contain buildings that have either one or two stories. So, we train a two-class classifier to classify each of the images as either "One-story" or "Two-stories." This classifier does not consider any floors that are not visible, for instance in a case where a floor may be below grade. This classifier is the Number-of-stories classifier, and these two classes are defined as follows:

• One-story (hereafter, 1S): This class includes images of buildings which appear to have one story from a structural engineering point of view (i.e., dynamically, the building behaves like a single story). If any elevation is present in the image, it must not be enough to be classified as EL (i.e., less than about half a story). Fig. 8a shows samples of One-story images.
• Two-stories (hereafter, 2S): This class includes images of buildings which appear to have two stories, from the structural engineering viewpoint. Either a two-story building with no first floor elevation, or a one-story building with greater than 5 feet of elevation at the first floor, is included in the Two-stories category. Fig. 8b shows samples of Two-stories images.

The third classifier applied to the pre-event images is trained to detect the construction material of the building. In a preliminary survey, it is important to know if wood is the main construction material, or if there is an abundance of other materials present, for instance,

where f_{CNN,c}(x) is the CNN-based classifier corresponding to the attribute. How can we use the classification of each image (C_i) to classify the entire building (C)? We have:

\[
\begin{aligned}
p(C=c \mid x_1, \dots, x_n) &= \sum_{c_1, \dots, c_n} p(C=c \mid C_1=c_1, \dots, C_n=c_n, x_1, \dots, x_n)\, p(C_1=c_1, \dots, C_n=c_n \mid x_1, \dots, x_n) \\
&= \sum_{c_1, \dots, c_n} p(C=c \mid C_1=c_1, \dots, C_n=c_n)\, p(C_1=c_1, \dots, C_n=c_n \mid x_1, \dots, x_n) \\
&= \sum_{c_1, \dots, c_n} p(C=c \mid C_1=c_1, \dots, C_n=c_n) \prod_{i=1}^{n} p(C_i=c_i \mid x_1, \dots, x_n) \\
&= \sum_{c_1, \dots, c_n} p(C=c \mid C_1=c_1, \dots, C_n=c_n) \prod_{i=1}^{n} f_{\mathrm{CNN},c_i}(x_i)
\end{aligned}
\]
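The sum-product structure of this fusion is straightforward to implement directly, as in the sketch below. The building-level conditional p(C = c | c_1, ..., c_n) is supplied as a function; because the specific conditional used for the pre-event attributes is not given in this excerpt, a simple "fraction of images with label c" consensus is used here as an illustrative stand-in (for the post-event damage case, the conditional is the one defined by Eqs. (5)–(7) below).

```python
import itertools
import numpy as np

def fuse(image_probs, consensus):
    """p(C=c | x_1..x_n): sum over label combinations of p(C=c | c_1..c_n) * prod_i p(C_i=c_i | x_i)."""
    n, k = image_probs.shape
    fused = np.zeros(k)
    for combo in itertools.product(range(k), repeat=n):
        weight = np.prod([image_probs[i, ci] for i, ci in enumerate(combo)])
        for c in range(k):
            fused[c] += consensus(c, combo, k) * weight
    return fused

def fraction_consensus(c, combo, k):
    # illustrative stand-in for p(C=c | c_1..c_n): the fraction of images labeled c
    return combo.count(c) / len(combo)

# three views of one building; per-image CNN outputs over two classes (e.g. EL / NEL)
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4]])
print(fuse(probs, fraction_consensus))   # approx. [0.767, 0.233]
```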
to either make the best possible decision about all cases, or to make a decision only when it is highly confident. The loss function is structured to handle the trade-off between the accuracy and informativeness of the results by adding the ND class to skip making a decision when it is not sufficiently confident.

2.2.1. Post-event

The case of the post-event stream, and in particular the MD (C = 1) vs NMD (C = 0) problem, is inherently asymmetric. On one hand, one must consider whether or not the set of images shows the building from all sides. For example, a single image classified as NMD is not sufficient to conclude that the building is indeed NMD, since the damage may simply not be visible from the viewpoint of that image. So, to classify a given building as NMD, we need to ensure that all sides of the building are shown in the set of images (in this case, we say that the building is covered). If all of these individual images are classified as NMD, only then can the building be categorized as NMD. On the other hand, to classify a building as MD, it is sufficient to have a single image classified as MD.

Define a binary r.v. Z taking values {0, 1}, indicating that the building is not covered and is covered, respectively. Let p(Z = 1 | C_1 = c_1, ..., C_n = c_n, x_1, ..., x_n) be the probability that the available images sufficiently cover the target building, hereafter the coverage probability. Our dataset does not provide any information about Z (the images do not include sufficient geolocation information). Therefore, we may write:

\[
p(Z = 1 \mid C_1 = c_1, \dots, C_n = c_n, x_1, \dots, x_n) = p(Z = 1 \mid C_1 = c_1, \dots, C_n = c_n) = q_n, \quad (4)
\]

where in the last step we used the observation that only the number of images affects our state of knowledge about Z, i.e., the labels themselves are uninformative about Z. Obviously, q_1 = 0 and q_2 = 0, since one or two images cannot cover the building. Furthermore, we should have that 0 ≤ q_i ≤ q_{i+1} ≤ 1. The specific numerical choice of this series of probabilities depends on our state of knowledge about how the data were collected. For example, if we knew that any three images cover the building, then we would set q_1 = q_2 = 0 and q_n = 1 for n ≥ 3.

Now, we use the sum rule on the fusion probability:

\[
\begin{aligned}
p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n) &= p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 1)\, p(Z = 1 \mid C_1 = c_1, \dots, C_n = c_n) \\
&\quad + p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 0)\, p(Z = 0 \mid C_1 = c_1, \dots, C_n = c_n) \\
&= p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 1)\, q_n + p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 0)\,(1 - q_n). \quad (5)
\end{aligned}
\]

The two terms that we need to specify are the probabilities of labeling the building as MD (C = 1) given the image labels and whether or not the building is covered. For the covered case, we set:

\[
p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 1) = \left\lceil \frac{\sum_{i=1}^{n} c_i}{n} \right\rceil, \quad (6)
\]

where ⌈·⌉ denotes the ceiling, i.e., the smallest integer greater than or equal to its argument. This means that if there is at least one image labeled as MD, then the entire building is labeled MD. For a covered building to be labeled NMD, all images must be labeled NMD. There are no intermediate cases. For the uncovered case, we set:

\[
p(C = 1 \mid C_1 = c_1, \dots, C_n = c_n, Z = 0) = \max\left\{ \left\lceil \frac{\sum_{i=1}^{n} c_i}{n} \right\rceil,\; \varepsilon_n \right\}, \quad (7)
\]

where ε_n represents the probability that the building is MD but the damage is not visible in n images. Again, ε_n depends on what we know about data collection. In general, we must have 0 ≤ ε_i ≤ ε_{i+1} ≤ 1. In our case studies, we simply pick ε_n = 0.5 for all n. So, for the uncovered case, a single MD-labeled image is sufficient to characterize the building as MD. However, if all images are labeled NMD, there is still a probability, ε_n, that the building is MD but the damage is not visible.
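Combining Eqs. (5)–(7) with the sum-product fusion above gives the building-level MD probability directly from the per-image Damage-classifier outputs. A small runnable sketch follows; the example values of q_n and of the per-image probabilities are arbitrary and chosen only for illustration.

```python
import itertools
import math

def p_md_given_labels(labels, q_n, eps_n=0.5):
    """Eqs. (5)-(7): p(C=1 | c_1..c_n) with coverage probability q_n."""
    any_md = math.ceil(sum(labels) / len(labels))   # 1 if any image is labeled MD, else 0
    covered = any_md                                # Eq. (6)
    uncovered = max(any_md, eps_n)                  # Eq. (7)
    return covered * q_n + uncovered * (1.0 - q_n)  # Eq. (5)

def fuse_md_probability(md_probs, q_n, eps_n=0.5):
    """p(C=1 | x_1..x_n): sum over label combinations weighted by the per-image CNN outputs."""
    total = 0.0
    for combo in itertools.product((0, 1), repeat=len(md_probs)):
        weight = 1.0
        for c_i, p_i in zip(combo, md_probs):
            weight *= p_i if c_i == 1 else (1.0 - p_i)
        total += p_md_given_labels(combo, q_n, eps_n) * weight
    return total

# three images of one building, only one showing clear damage,
# assuming these images do not cover the building (q_n = 0)
print(round(fuse_md_probability([0.05, 0.90, 0.10], q_n=0.0), 3))   # approx. 0.957
```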
Fig. 10. Post-event reconnaissance dataset collected after Harvey and Irma and published on DesignSafe-CI and Fulcrum [8,23,24].
The general form of the loss function is shown in Table 2. Without loss of generality, we can set the loss of correct predictions to zero. The cost of mistakenly characterizing an MD (NMD) building as NMD (MD) is 1. The cost of labeling as ND when the building state is MD (NMD) is λ_1 (λ_2). These parameters are selected to reflect the goals of the preliminary survey, see Section 3.1.3.

Table 2
Loss function.

True label    Decision: ND    Decision: MD    Decision: NMD
MD            λ_1             0               1
NMD           λ_2             1               0
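Reading Table 2 as the loss of each possible decision, the natural rule is to pick the decision with the smallest expected loss under the fused MD probability. The sketch below assumes exactly that reading; with λ_1 = λ_2 = 0.3 it abstains (ND) whenever the fused probability falls between 0.3 and 0.7.

```python
def decide(p_md, lam1=0.3, lam2=0.3):
    """Pick the decision (MD, NMD, or ND) that minimizes the expected loss of Table 2."""
    expected_loss = {
        "MD":  (1.0 - p_md) * 1.0,                  # wrong whenever the building is actually NMD
        "NMD": p_md * 1.0,                          # wrong whenever the building is actually MD
        "ND":  p_md * lam1 + (1.0 - p_md) * lam2,   # cost of skipping the decision
    }
    return min(expected_loss, key=expected_loss.get)

for p in (0.10, 0.50, 0.90):
    print(p, decide(p))   # -> NMD, ND, MD
```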
3.1.1. Sample results

First, consider the case in which all of the buildings are assumed to be captured adequately with the images available, q_n = 1 for all n ≥ 1, and pick a loss function with λ_1 = λ_2 = 0.3. With this choice of the loss function, making mistakes has a unit cost, while not deciding costs thirty percent of the mistake cost. In Fig. 12, we visualize the density of the fusion predictive probabilities corresponding to each different decision and true label, i.e., the density of decisions made at a given fusion probability. It shows six combinations of the two true labels, MD and NMD, and the three possible decisions, MD, NMD and ND. The correct decisions for the buildings with NMD (MD) true labels, depicted in red (blue), show a low-variance right (left)-skewed density with a mode close to 0 (1). However, the densities of the incorrect decisions for both MD and NMD buildings have more variance. Table 3 provides the confusion matrix, the table of true labels versus predicted, for the results of our demonstration of the end-to-end post-event stream data analysis. Out of a total of 1,121 buildings visited after hurricane Irma, the dataset includes 54 buildings with no true label, and 179 buildings with no OV images. Also, 26 buildings are not distinct, and those data are merged into one building set. Therefore, we have 914 labeled buildings with OV images. The results show that 717 buildings are correctly categorized, 110 buildings are classified incorrectly, and 87 buildings are labeled ND.
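The excerpt reports only these raw counts; the short calculation below simply restates them as rates (decision rate, accuracy among decided buildings, and overall correct rate) and is not a result reported in this work.

```python
labeled_with_ov = 914                 # labeled Irma buildings with at least one OV image
correct, incorrect, not_decided = 717, 110, 87

decided = correct + incorrect                                      # 827 buildings received an MD/NMD decision
print(f"decision rate:          {decided / labeled_with_ov:.1%}")  # ~90.5%
print(f"accuracy among decided: {correct / decided:.1%}")          # ~86.7%
print(f"overall correct rate:   {correct / labeled_with_ov:.1%}")  # ~78.4%
```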
To understand the limitations of the approach, it is informative to examine some specific building examples of correct (incorrect) decisions as well as ND. Fig. 13 shows four images of a representative case in which a building is correctly categorized as MD. In this case, the first three raw images, numbered as 1, 2, and 3, do not show any evidence of damage. However, image number 4 does show the damage clearly, and

Fig. 11. Accuracy plots of the classifiers: (a) accuracy of the classifiers for the post-event stream; (b) accuracy of the classifiers for the pre-event stream.
does not have any evidence that the building should be categorized as having major damage, and is not classified as damaged. However, image number 3, which is the top view of the building captured through aerial imagery, and which is not part of the data collected in the preliminary survey, does show the damage on the back side of the building clearly. Note that this image would have been filtered out automatically by the Overview classifier. It is included manually here to demonstrate the true building label. Investigating the case shown in Fig. 15 reveals that capturing multiple post-event images that cover all sides of the building is critical for correct building categorization; see Section 4.
Table 8
Confusion matrix of construction material using a loss function with parameters (λ_1 = λ_2 = 0.3).

4. Conclusion

After a natural disaster such as a hurricane, information about the performance of the built environment is gathered to learn lessons and to inform codes and guidelines. A preliminary survey is conducted immediately after the event to identify the most valuable sites and buildings to visit during a more detailed survey that follows. That manual process is tedious and time-consuming, but the strategic use of automation and computer vision can accelerate and even automate the process.

In this paper, a technique is developed to directly support the needs of the human engineers conducting a preliminary survey. The technique is focused on automating the data analysis steps involved in this process, achieving this goal by leveraging and adapting recent advances in deep learning research to this important problem. The input to the technique is a collection of post-event images collected from residential buildings in the affected region. The output of the technique is the building attributes and the damage classification for the buildings in that region. By formulating this data analysis problem in terms of a pre-event stream and a post-event stream, the critical information is automatically extracted from the images collected, for ready use by the human engineer. A classification schema is designed to organize the data. Robust scene classifiers are designed for specific scene classification tasks. Information fusion methods are developed to combine the results from multiple images, yielding a result that collectively considers the individual results of multiple images. Valuable lessons on how to achieve robust classification for such complex and unstructured datasets are also discussed.

The technique is demonstrated using a publicly-available, real-world dataset collected by the NSF-funded StEER teams during the 2017 and 2018 hurricanes. The technique provides the engineer in the field with automated capabilities, reducing effort, improving consistency, and accelerating decisions after a major event. Because automation has enormous potential in the analysis of these images, the collection of more data, with less subjectivity, will make this process more robust and will also reduce bias in the results. Thus, collecting more data to learn from such events is strongly encouraged.

Limitations of the proposed technique originate from the shortage of available data per building and the lack of quality in data collection. Both issues hinder extracting unique and unbiased features for training classifiers. Training models with a sufficient quantity of high-quality data allows developing more robust classifiers. We have some suggestions for data collection strategies, including a consistent data collection procedure, e.g., taking overview images of the four sides of the building from the same distance; more granular data collection, e.g., taking images of all key components of a building; and combining datasets from diverse geographical locations to include various building attributes, e.g., construction material.

Future research that builds on this technique can be categorized into two major directions. The primary direction is facilitating the collection and processing of multiple sources of data, e.g., all types of images (street-level, aerial, and satellite), engineers' recorded and written observations, and social media reports. Another direction is generalizing the techniques to fuse the available types of information properly.

Acknowledgement

the Center for Resilient Infrastructures, Systems, and Processes (CRISP) at Purdue, and the National Science Foundation under Grants No. NSF 1608762 and 1835473.

References

[1] Anguelov D, Dulong C, Filip D, Frueh C, Lafon S, Lyon R, et al. Google street view: Capturing the world at street level. Computer 2010;43:32–8.
[2] Choi J, Yeum C, Dyke S, Jahanshahi M. Computer-aided approach for rapid post-event visual evaluation of a building façade. Sensors 2018;18:3017.
[3] Chollet F. Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 1251–8.
[4] Chollet F, et al. Keras; 2015. <https://keras.io>.
[5] Comerio MC. Disaster hits home: New policy for urban housing recovery. Univ of California Press; 1998.
[6] CONVERGE Team. CONVERGE; 2019 (accessed: 04.06.2019). <https://converge.colorado.edu>.
[7] DesignSafe-CI. NHERI Five-year Science Plan; 2017 (accessed: 22.02.2019). <https://www.designsafe-ci.org/facilities/nco/science-plan>.
[8] DesignSafe-CI. DesignSafe-CI: A Comprehensive Cyberinfrastructure Environment for Research in Natural Hazards Engineering; 2018a (accessed: 22.02.2019). <https://www.designsafe-ci.org/>.
[9] DesignSafe-CI. Rapid experimental facility; 2018b (accessed: 22.02.2019). <https://rapid.designsafe-ci.org>.
[10] Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes (VOC) challenge. Int J Comput Vision 2010;88:303–38.
[11] Gao Y, Mosalam KM. Deep transfer learning for image-based structural damage recognition. Comput-Aided Civil Infrastruct Eng 2018;33:748–68.
[12] GEER Association Inc. Geotechnical Extreme Events Reconnaissance; 2018 (accessed: 22.02.2019). <http://www.geerassociation.org>.
[13] Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016.
[14] Kijewski-Correa T, Prevatt D, Musetich M, Roueche D, Mosalam K, Hu F, et al. Hurricane Florence: field assessment team 1 (FAT-1) early access reconnaissance report (EARR); 2018a.
[15] Kijewski-Correa T, Prevatt D, Robertson I, Smith D, Mosalam K, Roueche D, et al. StEER – Hurricane Michael: preliminary virtual assessment team (P-VAT) report; 2018b.
[16] Kijewski-Correa T, Taflanidis A, Kennedy A, Womble A, Liang D, Starek M, et al. Hurricane Harvey (Texas) supplement – collaborative research: Geotechnical Extreme Events Reconnaissance (GEER) association: turning disaster into knowledge; 2018c.
[17] Lenjani A, Yeum CM, Dyke S, Bilionis I. Automated building image extraction from 360° panoramas for postdisaster evaluation. Comput-Aided Civil Infrastruct Eng 2019:1–17.
[18] Pinelli JP, Roueche D, Kijewski-Correa T, Plaz F, Prevatt D, Zisis I, et al. Overview of damage observed in regional construction during the passage of hurricane Irma over the state of Florida. In: Proc., ASCE Forensic; 2018. p. 18.
[19] Pinelli JP, Simiu E, Gurley K, Subramanian C, Zhang L, Cope A, et al. Hurricane damage prediction model for residential structures. J Struct Eng 2004;130:1685–91.
[20] Rathje EM, Dawson C, Padgett JE, Pinelli JP, Stanzione D, Adair A, et al. DesignSafe: new cyberinfrastructure for natural hazards engineering. Nat Hazards Rev 2017;18:06017001.
[21] Roohi M, Hernandez EM, Rosowsky D. Nonlinear seismic response reconstruction and performance assessment of instrumented wood-frame buildings—validation using NEESWood Capstone full-scale tests. Struct Control Health Monit 2019;26:e2373.
[22] Roueche D, Cleary J, Gurley K, Marshall J, Pinelli JP, Prevatt D, et al. StEER – Hurricane Michael: field assessment team 1 (FAT-1) early access reconnaissance report (EARR); 2018.
[23] Spatial Networks, Inc. Fulcrum—Mobile Form Builder & Data Collection App; 2018 (accessed: 22.09.2018). <https://www.fulcrumapp.com>.
[24] StEER Network. Building Resilience through Reconnaissance; 2018 (accessed: 22.09.2018). <https://web.fulcrumapp.com/communities/nsf-rapid>.
[25] StEER Network. Structural Extreme Events Reconnaissance Network; 2018 (accessed: 25.04.2019). <https://www.steer.network>.
[26] Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Thirty-first AAAI conference on artificial intelligence; 2017.
[27] Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 2818–26.
[28] The Earthquake Engineering Research Institute (EERI). Virtual Earthquake Reconnaissance Team (VERT); 2019 (accessed: 04.06.2019). <http://www.learningfromearthquakes.org/activities/vert>.
[29] Yeum CM, Dyke SJ, Benes B, Hacker T, Ramirez J, Lund A, et al. Postevent reconnaissance image documentation using automated classification. J Perform Constr Facilit 2018;33:04018103.
[30] Yeum CM, Dyke SJ, Ramirez J. Visual data classification in post-event building reconnaissance. Eng Struct 2018;155:16–24.
[31] Yeum CM, Lenjani A, Dyke SJ, Bilionis I. Automated detection of pre-disaster building images from Google street view. In: Proceedings of the 7th World conference on structural control and monitoring (7WCSCM 2018); 2018.