
AI Mapping for Rapid Disaster Assessment

L. Meenachi, Associate Professor, Information Technology, Dr. Mahalingam College, Pollachi, India ([email protected])
Vijayaragavan KTS, Student, Information Technology, Dr. Mahalingam College, Pollachi, India (vijayaragavankts@gmail.com)
Roshan Karthick T, Student, Information Technology, Dr. Mahalingam College, Pollachi, India ([email protected])
Deepak A, Student, Information Technology, Dr. Mahalingam College, Pollachi, India ([email protected])

Abstract: Disasters pose significant threats to human life, infrastructure, and socioeconomic stability. In this research, we propose a novel strategy that uses deep learning-based image analysis techniques to improve disaster management. Our methodology comprises three major components: determining whether a disaster has occurred, identifying the type of disaster, and evaluating the extent of building damage. Our approach, which makes use of Convolutional Neural Networks (CNNs), can reliably identify catastrophic events from images, allowing for rapid mitigation and response actions. MobileNet V2's ability to precisely classify different types of disasters further supports customized response plans and resource allocation. To prioritize rescue efforts and infrastructure restoration, we also use a U-Net architecture for building damage level assessment. By integrating these models into a cohesive system, we provide a thorough approach to disaster management, equipping stakeholders with useful information for effective coordination of responses. By highlighting the effectiveness of deep learning in tackling difficult social issues and boosting resilience in the event of calamities, our research advances plans for disaster planning and response.

Keywords – Disaster management, Deep learning, Convolutional Neural Networks (CNNs), MobileNet V2, U-Net, Image analysis, Disaster occurrence determination, Disaster type identification, Building damage assessment, Rapid response.

1. Introduction
Natural or man-made, disasters pose serious problems for communities all over the world, affecting infrastructure, businesses, and lives. Prompt and efficient disaster management is essential for minimizing casualties and property damage, as well as for expediting recovery and rehabilitation. Recent developments in computer vision and deep learning offer viable ways to improve disaster management procedures through automated image analysis. The objective of this project is to create a complete disaster management system utilizing deep learning techniques. In particular, we concentrate on three crucial facets of disaster response: determining the disaster occurrence, identifying the disaster type, and assessing the degree of building damage. By utilizing Convolutional Neural Networks (CNNs), MobileNet V2, and U-Net, we aim to offer practical insights to stakeholders engaged in disaster response and recovery operations. The project's primary goal is to create a CNN-based model that can reliably identify disasters from satellite and aerial imagery. By analyzing visual cues indicative of disaster events, such as smoke, debris, or water, this methodology facilitates quick and accurate identification of disaster-affected areas, enabling prompt response operations.

We then use MobileNet V2, a compact and efficient image classification model, to classify the kind of disaster shown in the images. Since different types of disasters require different mitigation and relief methods, this information is essential for properly allocating resources and customizing response tactics. Finally, we employ a U-Net architecture to evaluate the degree of damage incurred by structures in disaster-affected areas. By separating building structures from aerial imagery and evaluating structural soundness, this methodology helps prioritize rescue operations and focus infrastructure repair efforts.
Our ultimate goal is to lessen the effects of catastrophes on people and livelihoods
by optimizing resource allocation and improving situational awareness through the
integration of various models into a single disaster management system.
2. Literature Survey
Recent years have seen promising results in disaster management applications from merging deep learning methods with remote sensing data. Following the earthquake in Turkey, Xia et al. [1] presented a deep-learning application for assessing building damage using ultra-high-resolution remote sensing data. Their research demonstrates how deep learning models are useful for precisely recognizing and evaluating building damage, which is important for setting priorities for rescue missions and managing recovery activities. In a similar vein, Xu et al. [2] employed convolutional neural networks (CNNs) to detect building deterioration in satellite imagery. Their research shows how CNNs can be used for automatic damage identification, which can speed up reaction times and decision-making in emergencies. Kim et al. [3] investigated computer vision and satellite photography for disaster assessment, with a focus on water-related building damage identification. Their study emphasizes how crucial it is to use cutting-edge technologies to improve disaster response capacities, particularly in situations where conventional evaluation techniques might not be as effective.

Furthermore, the problem of building damage identification from satellite photos on highly imbalanced datasets was tackled by Wang et al. [4]. These studies emphasize how crucial it is to use cutting-edge technology such as IoT, deep learning, and machine learning to improve the precision and effectiveness of disaster management and prediction systems.

3. Proposed System

The proposed system integrates cutting-edge deep learning algorithms to enhance disaster management capabilities. Its three main parts are determining whether a disaster has occurred, identifying the type of disaster, and assessing the extent of building damage.

For disaster occurrence determination, a Convolutional Neural Network (CNN) model is developed that analyzes aerial and satellite imagery to detect symptoms of disasters such as fires, floods, or earthquakes. Next, MobileNet V2 classifies the type of disaster depicted in the imagery, informing tailored response strategies. Finally, a U-Net architecture is applied to precisely evaluate the degree of building damage in affected areas. This technique helps prioritize rescue attempts and guide infrastructure repair work by separating and analyzing building structures from imagery.

Figure 1: Flow Diagram of Rapid Disaster Assessment


3.1 Dataset

The dataset used in this research is the xBD dataset, a comprehensive collection of satellite photos capturing 19 different natural disasters. The xBD collection includes 22,068 images in total, each showing a different disaster scenario. With a combined size of 45,361.79 square kilometers, these images cover a large portion of the areas devastated by disasters. Moreover, the xBD dataset has thorough annotations for the buildings visible in the images. Across the dataset, 850,736 building polygons have been annotated, providing important ground-truth data for training and assessing models aimed at disaster management tasks such as assessing the amount of building damage. By utilizing the xBD dataset, we can create and validate reliable models for determining the occurrence of disasters, identifying the disaster type, and assessing building damage, which helps to enable efficient disaster response plans.
Disaster Level | Structure Description
No Damage      | Untouched. No evidence of water, structural damage, shingle damage, or burn marks.
Minor Damage   | The building may show partial fire damage, be surrounded by water, have a nearby volcanic flow, missing roof elements, or visible cracks.
Major Damage   | Partial collapse of walls or roofs, presence of volcanic flow nearby, or surroundings inundated by water or mud.
Destroyed      | Scorched, fully collapsed, partially or entirely submerged in water or mud, or otherwise no longer discernible.

Table 1 – Output Description
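To make the annotation structure concrete, the sketch below tallies the damage labels of Table 1 from xBD-style JSON label files. It is a minimal illustration only: the directory path is a placeholder, and the field names ("features", "xy", "properties", "subtype") reflect the commonly published xBD label layout rather than code from this project.

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical location of xBD-style post-disaster label files.
LABEL_DIR = Path("xBD/train/labels")

def count_damage_labels(label_dir: Path) -> Counter:
    """Tally building damage subtypes across xBD-style annotation files.

    Assumes each label JSON stores building polygons under features -> xy,
    with the damage class in properties -> subtype (e.g. "no-damage",
    "minor-damage", "major-damage", "destroyed").
    """
    counts = Counter()
    for label_file in label_dir.glob("*_post_disaster.json"):
        with open(label_file) as f:
            annotation = json.load(f)
        for feature in annotation.get("features", {}).get("xy", []):
            subtype = feature.get("properties", {}).get("subtype")
            if subtype:  # only building polygons carry a damage subtype
                counts[subtype] += 1
    return counts

if __name__ == "__main__":
    print(count_damage_labels(LABEL_DIR))
```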

3.2 Disaster Occurrence Determination

Disaster occurrence determination plays a pivotal role in effective disaster management, facilitating prompt response and mitigation efforts. This study adopts Convolutional Neural Network (CNN) models to analyze aerial and satellite imagery, aiming to detect potential signs of disasters. The CNN model employed for this purpose comprises multiple convolutional layers followed by pooling layers, crafted to strategically extract relevant features from input images. These features then pass through fully connected layers to generate predictions regarding the presence or absence of a disaster.

The training dataset for the CNN model encompasses a diverse collection of annotated images illustrating various types of disasters, ranging from fires and floods to earthquakes and storms. Supervised learning techniques are used for model training, wherein input images are associated with labels signifying the presence or absence of disasters. To bolster the model's robustness and generalization capability, data augmentation methods such as rotation, scaling, and flipping are integrated into the training process. Furthermore, transfer learning is leveraged by initializing the CNN model with weights pre-trained on extensive image datasets, allowing the model to harness knowledge acquired from generic image features.
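As an illustration of the occurrence-determination model described above, the following is a minimal Keras sketch of a CNN with convolution and pooling blocks, fully connected layers, a binary output, and the rotation, scaling, and flipping augmentations mentioned in the text. The input size, layer widths, and optimizer settings are illustrative assumptions, not the exact configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# Augmentation mirrors the rotation, scaling, and flipping mentioned above.
augmentation = models.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

def build_occurrence_cnn() -> models.Model:
    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = augmentation(inputs)
    x = layers.Rescaling(1.0 / 255)(x)
    # Convolution + pooling blocks extract visual cues (smoke, debris, water).
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    # Single sigmoid unit: disaster present vs. absent.
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_occurrence_cnn()
model.summary()
```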
3.3 Disaster Type Identification

Disaster-type identification plays a pivotal role in effective disaster management, since each type of disaster necessitates tailored response strategies and resource allocations. In our proposed disaster management system, we incorporate MobileNet V2, a cutting-edge image classification model, to address this critical challenge. MobileNet V2 is renowned for its lightweight architecture optimized for efficient image classification: it achieves a balance between model complexity and computational cost, making it suitable for deployment on resource-limited platforms such as mobile phones or edge devices.

This classification capability empowers emergency responders and policymakers to prioritize response efforts and allocate resources effectively, contingent upon the nature and severity of the disaster. In summary, the integration of MobileNet V2 augments the situational awareness of our disaster management system, facilitating informed decision-making and proactive response measures that mitigate the adverse impacts of disasters on affected communities.
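The sketch below shows one plausible way to set up the MobileNet V2 classifier described above, using an ImageNet-pretrained backbone with a small softmax head. The class list, input size, and training settings are assumptions for illustration, not the configuration used in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

DISASTER_TYPES = ["fire", "flood", "earthquake", "storm"]  # assumed label set
IMG_SIZE = (224, 224)

def build_type_classifier() -> models.Model:
    # ImageNet-pretrained MobileNet V2 backbone with global average pooling.
    backbone = MobileNetV2(input_shape=IMG_SIZE + (3,),
                           include_top=False,
                           weights="imagenet",
                           pooling="avg")
    backbone.trainable = False  # freeze pretrained features initially

    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = layers.Dropout(0.2)(x)
    # Softmax head over the assumed disaster-type classes.
    outputs = layers.Dense(len(DISASTER_TYPES), activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

classifier = build_type_classifier()
classifier.summary()
```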
3.4 Building Damage Level Assessment

Building damage level assessment is a critical aspect of disaster management, enabling responders to prioritize rescue efforts and allocate resources effectively. In this project, we employ the U-Net architecture, a convolutional neural network (CNN) commonly used for semantic segmentation tasks, to assess the damage levels of buildings within disaster-affected areas. The U-Net model is trained on a dataset comprising aerial or satellite images of disaster-affected areas, along with corresponding ground-truth labels indicating the extent of building damage.

The encoder part of the U-Net model extracts features from input images, gradually reducing spatial resolution through a series of convolutional and pooling layers, while the decoder restores spatial resolution to produce a per-pixel damage map. During training, the loss function typically incorporates categorical cross-entropy or the Dice coefficient, which gauge the similarity between predicted and ground-truth segmentation masks. The resulting damage maps enable responders to prioritize areas with significant structural damage for immediate attention, facilitating efficient allocation of resources and aiding in the timely restoration of infrastructure.
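For concreteness, the following is a compact U-Net-style sketch in Keras, together with a soft Dice coefficient of the kind mentioned above. The depth, filter counts, and number of damage classes are illustrative assumptions rather than the exact architecture trained in this project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # assumed: background + four damage levels

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3)) -> models.Model:
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: convolution blocks with downsampling, keeping skip features.
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D()(x)
    x = conv_block(x, 256)  # bottleneck
    # Decoder: upsample and concatenate the matching encoder features.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)
    # Per-pixel softmax over the damage classes.
    outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)
    return models.Model(inputs, outputs)

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # Soft Dice over one-hot masks: 2|A ∩ B| / (|A| + |B|).
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

model = build_unet()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[dice_coefficient])
```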

4. Results and Discussion



The evaluation of the U-Net model for building damage level assessment was conducted on an independent test dataset of aerial and satellite imagery depicting disaster-affected regions. The findings demonstrate the efficacy of the model in accurately segmenting building structures and classifying the extent of damage. Quantitative measures such as Intersection over Union (IoU) and the Dice coefficient were used to gauge the segmentation accuracy achieved by the U-Net model. Qualitative analysis of the model's predictions additionally showed precise localization of damage, distinguishing between undamaged, partially damaged, and severely damaged regions within buildings. Visual inspection of the segmentation masks corroborated these findings, with the model's predictions closely aligning with ground-truth annotations.
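For reference, the per-class IoU and Dice measures reported here can be computed directly from predicted and ground-truth class maps, as in the small sketch below (the class indices are assumed for illustration).

```python
import numpy as np

def iou_and_dice(pred_mask: np.ndarray, true_mask: np.ndarray, class_id: int):
    """Compute IoU and Dice for one class from integer-coded class maps."""
    pred = pred_mask == class_id
    true = true_mask == class_id
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = intersection / union if union else float("nan")
    denom = pred.sum() + true.sum()
    dice = 2.0 * intersection / denom if denom else float("nan")
    return iou, dice

# Toy 2x2 example (0 = background, 1 = damaged building).
pred = np.array([[1, 0], [1, 1]])
true = np.array([[1, 0], [0, 1]])
print(iou_and_dice(pred, true, class_id=1))  # -> (0.666..., 0.8)
```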

Figure 2: Accuracy of damage assessment 1

Figure 3: Accuracy of damage assessment 2



Figure 4: Accuracy Measure

Figure 5: Precision, Recall, F1 Score Bar Chart

Metric    | CNN + MobileNet + U-Net | AlexNet | CNN
Precision | 0.75                    | 0.74    | 0.75
Recall    | 0.77                    | 0.76    | 0.77
F1 Score  | 0.79                    | 0.78    | 0.72
Accuracy  | 0.89                    | 0.85    | 0.70

Table 2: Comparison with Other Existing Models


The evaluation metrics show how effectively the models determine the degree of building damage across several categories. The precision score of 0.89 for "No Damage" indicates a strong ability to categorize intact buildings. However, lower precision, recall, and F1 scores indicate that the models have difficulty detecting the minor, major, and destroyed damage categories. This suggests that further work is needed to improve the models' capacity to distinguish between different degrees of building damage, for example by expanding the dataset or adjusting the models' parameters. If successful, this would increase the models' usefulness in disaster response and recovery operations. The success of the U-Net model underscores the potential of deep learning techniques in building damage assessment for disaster management applications.

Comparing the performance metrics of our models (CNN, MobileNet, U-Net) with AlexNet, it is clear that our models outperform AlexNet across all dimensions. In terms of accuracy, our pipeline is significantly more accurate than AlexNet, indicating that it correctly classifies damaged and undamaged buildings in satellite images and yields more accurate damage assessments overall. Similarly, our models exhibit higher precision, recall, and F1 scores than AlexNet, meaning that they achieve a good balance between recall (the true positive rate) and precision, enabling reliable and consistent identification of damaged buildings. Overall, the average performance of our models across all metrics is markedly better than AlexNet's, indicating the effectiveness of the CNN, MobileNet, and U-Net pipeline in identifying damaged buildings in satellite imagery for disaster recovery efforts.

5. Conclusion
In summary, this study highlights the effectiveness of utilizing deep learning methodologies, particularly CNNs and architectures such as U-Net, to bolster various facets of disaster management. By developing and implementing models focused on disaster occurrence identification, disaster type categorization, and assessment of building damage levels, we have underscored the potential of these technological solutions in elevating situational awareness and response efficiency. Through the accurate identification of disasters, classification of their types, and evaluation of building damage severity, our proposed system empowers responders to make well-informed decisions, optimize resource allocation, and streamline recovery endeavors. Further exploration and collaborative efforts are essential for refining and scaling these methodologies for practical deployment, thus advancing overall preparedness and response capabilities in the face of impending disasters.

References
[1] Xia, Haobin, Jianjun Wu, Jiaqi Yao, Hong Zhu, Adu Gong, Jianhua Yang, Liuru Hu, and
Fan Mo. "A Deep Learning Application for Building Damage Assessment Using Ultra-High-
Resolution Remote Sensing Imagery in Turkey Earthquake." International Journal of Disaster
Risk Science 14, no. 6 (2023): 947-962.

[2] Xu, Joseph Z., Wenhan Lu, Zebo Li, Pranav Khaitan, and Valeriya Zaytseva. "Building
damage detection in satellite imagery using convolutional neural networks." arXiv preprint
arXiv:1910.06444 (2019).

[3] Kim, Danu, Jeongkyung Won, Eunji Lee, Kyung Ryul Park, Jihee Kim, Sangyoon Park,
Hyunjoo Yang, Sungwon Park, Donghyun Ahn, and Meeyoung Cha. "Disaster assessment
using computer vision and satellite imagery: Applications in water-related building damage
detection." (2023).

[4] Wang, Ying, Alvin Wei Ze Chew, and Limao Zhang. "Building damage detection from
satellite images after natural disasters on extremely imbalanced datasets." Automation in
Construction 140 (2022): 104328.

[5] Bande, Swapnil, and Virendra V. Shete. "Smart flood disaster prediction system using IoT & neural networks." In 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), pp. 189-194. IEEE, 2017.

[6] Huang, Di, Shuaian Wang, and Zhiyuan Liu. "A systematic review of prediction methods
for emergency management." International Journal of Disaster Risk Reduction 62 (2021):
102412.

[7] Song, Huaxiang, and Yong Zhou. "Simple is best: A single-CNN method for classifying remote sensing images." Networks & Heterogeneous Media 18, no. 4 (2023).

[8] Jin, Ge, Yanghe Liu, Peiliang Qin, Rongjing Hong, Tingting Xu, and Guoyu Lu. "An End-to-
End Steel Surface Classification Approach Based on EDCGAN and MobileNet
V2." Sensors 23, no. 4 (2023): 1953.
