
Post Flood Assessment Using Deep Learning Techniques

Sanket S Kulkarni 1, a) and Ansuman Mahapatra 1, b)

1) National Institute of Technology Puducherry, Karaikal, Puducherry, India
a) [email protected], b) [email protected]
Corresponding author: b) [email protected]

Abstract: Numerous factors, such as rapid urbanization, increased development and economic activity on flood plains, and climate change, are responsible for floods. Analyzing flood-affected areas from satellite images is an effective technique for flood area assessment. Identifying houses that are completely or partially surrounded by flood water helps the rescue team prioritize rescue areas. Therefore, this article evaluates two custom architectures based on convolutional neural networks (CNN), along with the pretrained VGG16 and MobileNet architectures, for classifying completely or partially surrounded houses. In this work, a new dataset comprising satellite images of flood-affected houses is prepared from the xBD dataset. The results show that the MobileNet architecture produces the best results, with an accuracy of 75%.

Keywords: CNN, VGG16, MobileNet, SAR, ReLU

INTRODUCTION

India is highly susceptible to flooding. The Rashtriya Barh Ayog (RBA) estimates the nation's overall flood-prone area at 40 million hectares (Mha) [1] of the total geographic area. Floods frequently result in significant human casualties and damage to property, infrastructure, and public services. The fact that flood-related damages are on the rise is cause for alarm: in the decade from 1996 to 2005, the average yearly flood damage was Rs. 4745 crore, compared with the corresponding average of Rs. 1805 crore for the prior 53 years. Floods, the most common kind of natural disaster, occur when an excess of water submerges a normally dry area. They are caused by prolonged periods of heavy rain, rapid snowmelt, or storm surges from tropical cyclones or tsunamis in coastal locations.

Floods can wreak havoc across a large area, causing loss of life, damage to private property, and destruction of vital public health facilities. Heavy rainfall is experienced from June to September, as a result of which many developed cities and regions, such as Kerala, Chennai, Mumbai, Odisha, and parts of northeastern India including Assam, Mizoram, and Kolkata, experience havoc because there is no proper infrastructure to manage the water. Flood damage assessment can be performed in two different ways, referred to as pre-disaster assessment and post-disaster assessment. Pre-disaster assessment has several problems, such as the expensive and time-consuming construction of roads, reservoirs, and dams to prevent the loss of lives. In the literature, there is less focus on post-flood disaster assessment.
Figure 1: Post-flood satellite images from the xBD dataset [5]

Figure 1 illustrates post-flood buildings, including regions completely surrounded by water and regions partially covered by water. The satellite images in the dataset for post-flood assessment were obtained from the xBD dataset [5].

LITERATURE REVIEW

Arvind et al. (2016) [2], in their work "Flood Assessment using Multi-Temporal MODIS Satellite Images," proposed the assessment of floods from multi-temporal MODIS satellite images using unsupervised methods. For automatic water-pixel recognition and extraction, conventional methods such as the mean-shift algorithm are contrasted with artificial neural network techniques such as self-organizing maps. A comparison of the unsupervised mean-shift and self-organizing-map techniques is performed, and the extraction results aid in identifying flooded and non-flooded areas.

In the work of Xin Jiang et al. (2021) [3], Sentinel-1 SAR images were used with an unsupervised machine learning method known as Felz-CNN to propose a segmentation technique for autonomous flood mapping in near real-time, across large areas, and in all weather situations. The algorithm is made up of three parts: super-pixel generation, convolutional neural network-based feature enhancement, and super-pixel aggregation.

Mohan Singh and Kapil Dev Tyagi (2022) [23] aimed to discover changes without additional computations and to obtain maximum learning characteristics from bi-temporal satellite images. Seven transposed characteristics from the training dataset were used to train a machine learning model, which was then applied to the bi-temporal target image dataset to produce a high-quality difference image. With a Kappa coefficient of 0.9985, the proposed machine intelligence learning model's accuracy is 99.94%. The estimated area of the expanded water and restructured land around Poyang Lake is 1749.918628 km². The increased water area provides an estimate of the damage caused by the water's abrupt rise in the Poyang Lake area.

Pallavi Jain et al. (2020) [4] presented a systematic inspection of tri-band deep convolutional neural network flood prediction using Sentinel-2 imagery, trained with a pre-trained ResNet18 and other models. The three-band combinations RB8aB11 and RB11B outperformed 33 other combinations for flood detection, achieving an F1 score of 0.96 with the correct choice of spectral bands.
Gupta et al. (2019) [5] introduced the xBD dataset, which provides building polygons, categorization labels for damage categories, ordinal labels for damage level, and corresponding satellite metadata for pre-event and post-event multi-band satellite imagery from a range of disaster occurrences. It contains more than 700,000 building annotations across more than 5,000 km² of post-disaster imagery.

The study by Heng Miao et al. (2021) [6] primarily made use of SAR remote sensing images and seismic catastrophe information following a devastating flood, covering the many rainy days and the various flood-prone climatic conditions. In this study, the Freeman polarization decomposition method is modified to account for the SAR image properties of flood-induced structural damage.

According to Cooner et al. (2016) [7], the performance of post-disaster damage assessment was improved by extracting textural and structural components from both pre-disaster and post-disaster images. Owing to the lack of labelled data, many works simplify damage assessment by classifying building objects with a binary classification approach that assigns damage and no-damage labels.

The work by Nex et al. (2019) [8] initially focused on constructing a damage map for building damage assessment, the key idea being to detect post-disaster scenarios. They utilized large, heterogeneous datasets, with the help of transfer learning, from different locations, geographical resolutions, and various platforms. For post-flood damage assessment, they utilized convolutional neural networks to detect structural damage in buildings. Table 1 summarizes various existing works related to post-disaster assessment using satellite images.

Table 1: Summary of literature review on post-disaster assessment using satellite images

Franceschini et al. (2021) [9], "Damage Estimation and Localization from Sparse Aerial Imagery"
  Dataset: Sparse aerial images captured of damaged buildings.
  Key features: Introduces a method allowing image detection and world-coordinate localization; Damage Estimation and Localization (DEL), SFM, and CAM address the issues in the algorithm.
  Results and discussion: Precision of 88%.
  Remark: The approach reduces costs and complies with current laws; the images used are frequently oblique. The model does not apply to images that are highly oblique, contain the horizon, or do not overlap with other images.

Jihye Ha and Jung Eun Kang (2022) [10], "Assessment of flood-risk areas using random forest techniques: Busan Metropolitan City"
  Dataset: Flood damage data from Busan Metropolitan City.
  Key features: A flood assessment model for the Busan metropolitan area built by combining multiple machine learning models.
  Results and discussion: The RF model was used to determine the flood risk level, producing a grid map of Busan Metropolitan City with almost 900,000 grid cells.
  Remark: The risk levels and recommended flood assessment model can be used as reliable information for catastrophe prevention.

Yu Shen et al. (2021) [11], "BDANet: Multiscale Convolutional Neural Network with Cross-directional Attention for Building Damage Assessment from Satellite Images"
  Dataset: xBD dataset.
  Key features: Prior to deploying relief efforts, it is essential to determine the extent of building damage. BDANet is a two-stage convolutional neural network for assessing building damage: building locations are extracted with UNet in the first stage, and building damage detection is performed in the second stage using shared weights.
  Results and discussion: Precision 0.895, Recall 0.995, F1-score 0.942.
  Remark: The CutMix data augmentation approach is used to lessen the difficulty of challenging classes. A robust feature representation is created using Multi-Scale Feature Fusion (MFF), which combines features learned at different image scales.

Ali Ismail and Mariette Awad [12], "BLDNet: a semi-supervised change detection building damage framework using graph convolutional networks and urban domain knowledge"
  Dataset: xBD dataset.
  Key features: BLDNet is a graph formulation that enables learning links and representations from local patterns and non-stationary neighborhoods; it is used to identify changes in building damage.
  Results and discussion: Training/testing accuracy 0.9534/0.7415; training/testing precision 0.8474/0.5930; training/testing recall 0.9804/0.6744; training/testing specificity 0.9817/0.7810; training/testing F1-score 0.9033/0.6189.
  Remark: The graph formulation connects buildings while allowing damaged data. BLDNet uses a semi-supervised GCN with a Siamese CNN backbone to extract local features and aggregate them with neighborhood features.

Isabelle Bouchard et al. (2022) [13], "On transfer learning for building damage assessment from satellite imagery in emergency contexts"
  Dataset: xBD dataset.
  Key features: Investigates how well convolutional neural networks enable the assessment of building deterioration.
  Results and discussion: Localization F1 0.846 (0.002%), classification F1 0.709 (0.003%).
  Remark: Uses a two-step model, a building detector followed by a damage classifier, to optimize post-incident activity.

Chih-Shen Cheng et al. (2021) [14], "Deep learning for post-hurricane aerial damage assessment of buildings"
  Dataset: 5 UAV videos captured in the Hurricane Dorian region of the Bahamas.
  Key features: An integrated video dataset from Hurricane Dorian is used to introduce and train a stacked convolutional neural network (CNN) architecture.
  Results and discussion: Precision 65.6%, classification accuracy 61%.
  Remark: Using a model built by stacking two CNN architectures, damaged images were identified with great accuracy.

The existing works have given more importance to detecting and classifying damaged buildings; however, identifying houses completely or partially surrounded by water has not been explored. Therefore, the contribution of this paper is to classify completely or partially surrounded houses using CNNs. In the case of a completely water-surrounded building, there may be trapped humans and livestock, so identifying such buildings will help the rescue team prioritize their rescue operation on the water-surrounded buildings.

METHODOLOGY
The post-flood evaluation comprises four key steps: data collection, data augmentation, training CNN architectures, and categorizing flooded areas as completely or partially surrounded by water, as shown in Figure 2. Images of buildings and areas that are completely or partially surrounded by water are categorized using CNNs. This work uses four different CNN architectures to categorize images for post-flood assessment.
VGG16 is trained on millions of images across 1000 categories, which makes it well suited to transfer learning; such pretrained networks perform well at extracting features from images. In addition, a custom CNN architecture with a (leaky) ReLU activation is used to address the vanishing gradient problem, and a second custom CNN architecture with L2 regularization, data augmentation, and a dropout layer is used to compute the loss and the corresponding weights. These custom CNN designs have some shortcomings, such as overfitting.

Figure 2: Steps involved in post-flood image classification: Dataset (satellite or UAV images) → Data Augmentation → Various deep learning models → Flood Image Classification


Figure 2 shows the steps involved in post-flood image classification, which comprise four major stages. The first is the collection of a flood image dataset of areas completely and partially covered by water from satellite and UAV images. The next stage is data augmentation, performed using techniques such as image flipping, rotation, and translation. In the next phase, various deep learning models, namely VGG16, custom CNN architecture-I, custom CNN architecture-II, and MobileNet, are compared for image classification. The last stage is flood image classification, i.e., classifying the flood images as areas completely surrounded by water or areas partially surrounded by flood water.
Data augmentation is used to increase the number of images in the dataset. The techniques involved in the augmentation process are image rotation, flipping, and translation, sketched in code below.
Image Flipping: Image flipping uses TensorFlow functionality to flip images top-down and left-right; each image has a 50% probability of being flipped in either direction.
Image Translation: Images can be translated along the x direction, the y direction, or both. The image is first converted to a tensor; once the labels are transformed into tensors, the shape is fixed. JPEG images read as uint16 must be converted to uint8.
Image Rotation: The image is rotated by 0, 90, 180, or 270 degrees, with each rotation having a 25% likelihood.
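The augmentation steps described above can be illustrated with TensorFlow's image utilities. This is a minimal sketch, assuming a tf.data pipeline of (image, label) pairs; the flip and rotation probabilities follow the description (50% flips, 25% per rotation angle), while the translation range is an assumption.

```python
import tensorflow as tf

def augment(image, label):
    # Random left-right and top-down flips, each applied with 50% probability.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)

    # Rotate by 0, 90, 180, or 270 degrees with equal (25%) probability.
    k = tf.random.uniform([], minval=0, maxval=4, dtype=tf.int32)
    image = tf.image.rot90(image, k=k)

    # Small random translation along x and y (range assumed, ~10% of a 128x128 image).
    dx = tf.random.uniform([], -12, 13, dtype=tf.int32)
    dy = tf.random.uniform([], -12, 13, dtype=tf.int32)
    image = tf.roll(image, shift=[dy, dx], axis=[0, 1])
    return image, label

# Example usage on a tf.data.Dataset of (image, label) pairs:
# train_ds = train_ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
```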
RESULTS AND DISCUSSION
The xBD dataset [5] is a vast collection of satellite images with a resolution of 128 × 128. A new dataset is formed from the xBD dataset to evaluate the suggested approach. It comprises two classes of houses, completely and partially surrounded by flood water, with 1000 images each. For each class, 700 images are used for training, 100 for validation, and 200 for evaluating the trained model. The dataset has been obtained from the xView2 challenge website [24].
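For context, a dataset with this split could be loaded as follows. This is only a sketch under the assumption of a per-class folder layout with hypothetical directory names; the actual organization of the prepared xBD subset is not specified in the paper.

```python
import tensorflow as tf

IMG_SIZE = (128, 128)   # image resolution reported for the prepared dataset
BATCH_SIZE = 32         # assumed; not stated in the paper

# Hypothetical layout: dataset/{train,val,test}/{completely_surrounded,partially_surrounded}/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/test", image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
```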
The dataset is trained and evaluated using four convolutional neural network (CNN) architectures: the first two are the pretrained VGG16 and MobileNet, and the remaining two are custom CNN architectures.
A. Architectures for Post Flood Assessment:
With the original VGG16 weights frozen, the last three layers of VGG16 are replaced and fine-tuned with a learning rate of 0.001; the weights are adjusted to distinguish the new object classes more clearly. Figure 3 displays a summary of the pretrained VGG16 model [21] after the final three layers have been replaced.
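A minimal sketch of this fine-tuning setup is shown below, assuming 128 × 128 RGB inputs and a binary output. Freezing all but the last three layers and the 0.001 learning rate follow the description above, while the size of the new classification head is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(128, 128, 3))
base.trainable = True
for layer in base.layers[:-3]:       # freeze the original weights except the last three layers
    layer.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),     # assumed width of the replacement head
    layers.Dense(1, activation="sigmoid"),    # completely vs. partially surrounded
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```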

Figure 3: Summary of Pretrained VGG16 Model

Figure 4: Learning curve for accuracy of the pretrained VGG16 model after fine-tuning.
Figure 5: Learning curve for loss of the pretrained VGG16 model after fine-tuning.
Figure 4 shows the accuracy learning curve of the pretrained VGG16 model after fine-tuning, and Figure 5 shows the corresponding loss curve.

Customized CNN Architecture-I:


The ReLU (rectified linear unit) activation function is primarily employed in the first customised CNN architecture. Leaky ReLU is an improved ReLU function designed to address the problem of zero gradients for inputs smaller than zero, which deactivates the neurons in that region; leaky ReLU resolves this by allowing a small linear response for negative inputs. Figure 6 shows the learning curves for accuracy and loss using customized CNN architecture-I.
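A sketch of such an architecture is given below, using leaky ReLU activations as described; the number of convolutional blocks, the filter counts, and the negative slope are assumptions, since the paper does not list them.

```python
from tensorflow.keras import layers, models

model_cnn1 = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, padding="same"),
    layers.LeakyReLU(alpha=0.1),          # small linear response for negative inputs
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same"),
    layers.LeakyReLU(alpha=0.1),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128),
    layers.LeakyReLU(alpha=0.1),
    layers.Dense(1, activation="sigmoid"),
])
model_cnn1.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```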

Figure 6: Learning curves for accuracy and loss function in customized CNN Architecture-I
Customized CNN architecture-II:
The second customized CNN architecture concentrates on several strategies to prevent overfitting in later epochs. One technique for keeping deep learning models from overfitting is the dropout layer: the outgoing edges of randomly selected neurons in the hidden layers are set to 0 during training updates. Another is L2 regularization, also known as ridge regularization, applied while training with the Adam optimizer. For a model with L2 regularization, the regularization term is included in the loss function, while the prediction function largely stays the same. Figure 7 shows the learning curve for custom CNN architecture-II with a dropout layer and L2 regularization of 0.01.
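The second architecture can be sketched in the same way, adding a dropout layer and L2 (ridge) regularization of 0.01 on the dense layers and training with the Adam optimizer; the dropout rate and layer sizes are assumptions.

```python
from tensorflow.keras import layers, models, regularizers

model_cnn2 = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),   # L2 term added to the loss
    layers.Dropout(0.5),                                      # assumed dropout rate
    layers.Dense(1, activation="sigmoid",
                 kernel_regularizer=regularizers.l2(0.01)),
])
model_cnn2.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```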
Figure 7: Learning Curve for Custom CNN architecture-II
Four architectural models, the VGG16 architecture, custom CNN architecture-I, custom CNN architecture-II, and the MobileNet architecture, are used to classify flood images of regions completely and partially surrounded by flood water. The results show the effectiveness of the MobileNet [20] architecture; these findings can be improved by increasing the number of images in the training dataset and altering the hyperparameters.
B. Result and Analysis
This section mainly discusses the performance of various architectures used for post-flood assessment. It also discusses
which model performs better.
Table 2: Comparison of deep learning architectures for flood image classification
Models            Training Accuracy (%)   Validation Accuracy (%)   Precision (%)   Recall (%)   F1 Score (%)
Pretrained VGG16         67.82                   56.56                  62.50           60           61.22
Custom CNN-I             87.07                   63.75                  50              100          67
Custom CNN-II            64.64                   66.87                  50              100          67.23
MobileNet                94.23                   75                     67.34           77.47        70
Table 2 compares the architectures for flood assessment of regions fully and partially surrounded by flood water. The outcomes demonstrate the effectiveness of the MobileNet design; these results can be enhanced by expanding the number of images in the training dataset and changing the hyperparameters. It is found that the MobileNet architecture provides the best accuracy for image classification, followed by Custom CNN-I, then Custom CNN-II, and then the pretrained VGG16.
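For reference, the precision, recall, and F1 scores in Table 2 can be computed from the held-out test images roughly as follows; `model` and `test_ds` refer to the trained network and the test split from the earlier sketches, and the use of scikit-learn here is an assumption rather than the authors' stated tooling.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true, y_pred = [], []
for images, labels in test_ds:
    probs = model.predict(images, verbose=0)          # sigmoid outputs in [0, 1]
    y_pred.extend((probs.ravel() > 0.5).astype(int))  # threshold at 0.5
    y_true.extend(labels.numpy().ravel().astype(int))

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```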
CONCLUSION AND FUTURE SCOPE
In this article, a new dataset is developed for the evaluation of the proposed work. Following fine-tuning, four models are evaluated for comparison. We analyzed the performance of each CNN model in terms of accuracy, recall, precision, and F1 score. The MobileNet architecture performs better than the other architectures. Very few images of fully and partially surrounded areas could be seen in the post-flood images, and although data augmentation increases the number of images, the outcomes are still lackluster. The training data will be improved with the addition of more images, and more CNN designs need to be evaluated and optimized. The issue with these architectures is overfitting, which can be solved by increasing the number of images in the dataset.
REFERENCES
1. NITI Aayog 2021. Report of the Committee Constituted for Formulation of Strategy for Flood Management, Government of India. Available from: http://www.niti.gov.in/sites/default/files/2021-03/Flood-Report.pdf
2. Arvind, C.S., Vanjare, A., Omkar, S.N., Senthilnath, J., Mani, V. and Diwakar, P.G., "Flood assessment using multi-temporal MODIS satellite images". (Procedia Computer Science, 2016), pp. 575-586.
3. Jiang, Xin, Shijing Liang, Xinyue He, Alan D. Ziegler, Peirong Lin, Ming Pan, Dashan Wang, Rapid and
large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with
unsupervised deep learning (ISPRS Journal of Photogrammetry and Remote Sensing,2021), pp. 36-50.
4. Jain, Pallavi, Bianca Schoen-Phelan, and Robert Ross. Tri-Band Assessment of Multi-Spectral Satellite Data
for Flood Detection (MAChine Learning for EArth ObservatioN Workshop co-located with the European
Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases,2020),
vol. 2766.
5. Gupta, Ritwik, Richard Hosfelt, Sandra Sajeev, Nirav Patel, Bryce Goodman, Jigar Doshi, Eric Heim, Howie
Choset, and Matthew Gaston, “xbd: A dataset for assessing building damage from satellite imagery” (arXiv
preprint arXiv:1911.09296, 2019) ", pp. 10–17.
6. Miao, Heng, Xiaoqing Wang, Ling Ding, and Xiang Ding, "Study on the Extraction of Building Damage Caused by Earthquake from Polarimetric SAR Image Based on Improved Freeman Decomposition". (International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, IEEE, 2021), pp. 8566-8569.
7. Cooner, Austin J., Yang Shao, and James B. Campbell. "Detection of urban damage using remote sensing and
machine learning algorithms: Revisiting the 2010 Haiti earthquake". (Remote Sensing, Switzerland ,2016),
vol. 8, p. 568.
8. Nex, Francesco, Diogo Duarte, Fabio Giulio Tonolo, and Norman Kerle, "Structural building damage
detection with deep learning: Assessment of a state-of-the-art CNN in operational conditions". (Remote
sensing Switzerland,2019), vol. 11, p. 868.
9. Franceschini, Rene Garcia, Jeffrey Liu, and Saurabh Amin. "Damage Estimation and Localization from Sparse
Aerial Imagery (International Conference on Machine Learning and Applications (ICMLA), IEEE, 2021) ",
pp. 128-134.
10. Ha, Jihye, and Jung Eun Kang. "Assessment of flood-risk areas using random forest techniques: Busan
Metropolitan City". (Natural Hazards ,2022), vol. 3, pp.2407-2429.
11. Shen, Yu, Sijie Zhu, Taojiannan Yang, Chen Chen, Delu Pan, Jianyu Chen, Liang Xiao, and Qian Du. "Bdanet:
Multiscale convolutional neural network with cross-directional attention for building damage assessment from
satellite images". (IEEE Transactions on Geoscience and Remote Sensing, Switzerland, 2021), pp. 1-14.
12. Ismail, Ali, and Mariette Awad. "BLDNet: A Semi-supervised Change Detection Building Damage Framework
using Graph Convolutional Networks and Urban Domain Knowledge ". (arXiv preprint ,2022).
13. Bouchard, Isabelle, Marie-Ève Rancourt, Daniel Aloise, and Freddie Kalaitzis. "On Transfer Learning for
Building Damage Assessment from Satellite Imagery in Emergency Contexts ". (Remote Sensing ,2022), vol.
14, p. 2532.
14. Cheng, Chih‐Shen, Amir H. Behzadan, and Arash Noshadravan. "Deep learning for post‐hurricane aerial
damage assessment of buildings" (Computer‐Aided Civil and Infrastructure Engineering ,2021), vol. 36, pp.
695-710.
15. Li, Yundong, Chen Lin, Hongguang Li, Wei Hu, Han Dong, and Yi Liu. "Unsupervised domain adaptation
with self-attention for post-disaster building damage detection." (Neurocomputing 2020): 27-39.
16. Chen, Hongruixuan, Edoardo Nemni, Sofia Vallecorsa, Xi Li, Chen Wu, and Lars Bromley. "Dual-Tasks
Siamese Transformer Framework for Building Damage Assessment." (arXiv preprint arXiv:2201.10953,
2022).
17. Shi, Lingfei, Feng Zhang, Junshi Xia, Jibo Xie, Zhe Zhang, Zhenhong Du, and Renyi Liu. "Identifying
Damaged Buildings in Aerial Images Using the Object Detection Method." (Remote Sensing ,2021) vol.13, p.
4213.
18. Miura, Hiroyuki, Tomohiro Aridome, and Masashi Matsuoka. "Deep learning-based identification of
collapsed, non-collapsed and blue tarp-covered buildings from post-disaster aerial images." (Remote Sensing,
2020), vol.12, p. 1924.
19. Ding, Jiujie, Jiahuan Zhang, Zongqian Zhan, Xiaofang Tang, and Xin Wang. "A Precision Efficient Method
for Collapsed Building Detection in Post-Earthquake UAV Images Based on the Improved NMS Algorithm
and Faster R-CNN." (Remote Sensing ,2022) vol. 13, p. 663.
20. Sandler, Mark, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. "Mobilenetv2:
Inverted residuals and linear bottlenecks." (IEEE conference on computer vision and pattern
recognition,2018), pp. 4510-4520.
21. Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image
recognition." (arXiv preprint, 2014).
22. Zou, Zhiqiang, Hongyu Gan, Qunying Huang, Tianhui Cai, and Kai Cao. "Disaster Image Classification by
Fusing Multimodal Social Media Data." (ISPRS International Journal of Geo-Information,2021) vol. 10, p.
636.
23. Singh, Mohan, and Kapil Dev Tyagi. "Detection of Expanded Reformed Geographical Area in Bi-temporal
Multispectral Satellite Images Using Machine Intelligence Neural Network ". (Journal of the Indian Society
of Remote Sensing, 2022), pp. 623-633.
24. xView2 challenge 2018. Available from: https://xview2.org/
