AI Mapping For Rapid Disaster Assessment
Deepak A
Student
Information Technology
Dr. Mahalingam College
Pollachi, India
[email protected]
1. Introduction
Natural or man-made disasters pose serious problems for communities all over the
world, affecting infrastructure, businesses, and lives. Prompt and efficient disaster
management is essential for minimizing casualties and property damage and for
expediting recovery and rehabilitation.
Recent developments in computer vision and deep learning offer viable ways to
improve disaster management procedures through automated image analysis. The
objective of this project is to create a complete disaster management system using
deep learning techniques. In particular, we concentrate on three crucial facets of
disaster response: determining whether a disaster has occurred, identifying the
disaster type, and assessing the degree of building damage. By employing
Convolutional Neural Networks (CNNs), MobileNet V2, and U-Net, we aim to offer
practical insights to stakeholders engaged in disaster response and recovery
operations. The project's first goal is to create a CNN-based model that can reliably
detect disasters from satellite and aerial imagery. By analyzing visual cues
indicative of disaster events, such as smoke, debris, or water, this approach
facilitates quick and accurate identification of disaster-affected areas and enables
prompt response operations.
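As a minimal, illustrative sketch (assuming a Keras/TensorFlow implementation,
224x224 RGB input tiles, and a binary disaster/no-disaster label, none of which are
fixed by this paper), such a detection CNN could be defined as follows:

# Minimal CNN sketch for binary disaster-occurrence detection (illustrative only).
# Assumes 224x224 RGB tiles and a binary label; layer sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_occurrence_cnn(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # disaster vs. no disaster
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model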
We then use MobileNet V2, a compact and efficient image classification model, to
classify the type of disaster shown in the images. Since different types of disasters
require different mitigation and relief methods, this information is essential for
allocating resources properly and tailoring response tactics.
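A hedged sketch of how MobileNet V2 could be adapted to this classification task is
given below; the number of classes, the frozen backbone, and all hyperparameters
are illustrative assumptions rather than the exact configuration used in this work.

# MobileNetV2 transfer-learning sketch for disaster-type classification (illustrative).
# NUM_CLASSES (e.g. flood, fire, earthquake, hurricane) is an assumed placeholder.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained features for the first training stage

classifier = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])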
Finally, we employ the U-Net architecture to evaluate the degree of damage incurred
by structures in disaster-affected areas. By segmenting building structures in aerial
imagery and evaluating their structural soundness, this approach helps prioritize
rescue operations and focus infrastructure repair efforts.
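The following compact U-Net is an illustrative sketch only; the encoder depth,
channel widths, and the three damage classes (undamaged, partially damaged,
severely damaged) are assumptions based on the description above.

# Compact U-Net sketch for building-damage segmentation (illustrative only).
# Assumes 3 output classes: undamaged, partially damaged, severely damaged.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)  # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c4)
    return models.Model(inputs, outputs)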
Our ultimate goal is to lessen the effects of catastrophes on people and livelihoods
by optimizing resource allocation and improving situational awareness through the
integration of various models into a single disaster management system.
2. Literature Survey
Recent years have seen promising results in disaster management applications from
combining deep learning methods with remote sensing data. Following the
earthquake in Turkey, Xia et al. [1] presented a deep-learning application for
assessing building damage using ultra-high-resolution remote sensing imagery.
Their research demonstrates how deep learning models can precisely recognize and
evaluate building damage, which is important for setting priorities for rescue
missions and managing recovery activities. In a similar vein, Xu et al. [2] employed
convolutional neural networks (CNNs) to detect building damage in satellite
imagery. Their research shows how CNNs can be used for automatic damage
identification, which can speed up reaction times and decision-making in
emergencies. Computer vision and satellite imagery have been investigated by
Kim et al. [3] for disaster assessment, with a focus on water-related building damage
identification. Their study emphasizes how crucial it is to use cutting-edge
technologies to improve disaster response capacities, particularly in situations
where conventional evaluation techniques are less effective.
3. Proposed System
Labeled aerial and satellite imagery is used for model training, wherein input images
are associated with corresponding labels signifying the presence or absence of
disasters. To bolster the model's robustness and generalization capability, data
augmentation methods such as rotation, scaling, and flipping are integrated into the
training process. Furthermore, transfer learning is leveraged by initializing the CNN
model with weights pre-trained on extensive image datasets, allowing the model to
harness knowledge acquired from generic image features.
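A brief sketch of how such augmentation and pre-trained initialization might be
wired together (assuming TensorFlow 2.x Keras preprocessing layers; all parameter
values below are illustrative, not taken from this work) is:

# Illustrative augmentation pipeline (rotation, scaling/zoom, flipping) and
# transfer-learning initialization; all parameter values are assumed examples.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomRotation(0.1),                    # rotation
    layers.RandomZoom(0.2),                        # scaling
    layers.RandomFlip("horizontal_and_vertical"),  # flipping
])

# Initialize the backbone with weights pre-trained on a large generic image dataset.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")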
3.3 Disaster Type Identification
The evaluation of the U-Net model for building damage level assessment was
conducted on an independent test dataset comprising aerial and satellite imagery of
disaster-affected regions. The findings demonstrate the efficacy of the model in
accurately segmenting building structures and classifying the extent of damage.
Quantitative measures such as Intersection over Union (IoU) and the Dice
coefficient were used to gauge the segmentation accuracy achieved by the U-Net
model. Additionally, qualitative analysis of the model's predictions showcased
precise localization of damage, distinguishing between undamaged, partially
damaged, and severely damaged regions within buildings. Visual inspection of the
segmentation masks corroborated these findings, with the model's predictions
closely aligning with ground truth annotations.
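Both segmentation metrics can be computed directly from a predicted mask and a
ground-truth mask; a minimal NumPy sketch for the binary case is shown below.

# IoU and Dice coefficient for binary segmentation masks (minimal sketch).
import numpy as np

def iou_and_dice(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = intersection / (union + eps)
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    return iou, dice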
Comparing the performance metrics of our models (CNN, MobileNet V2, U-Net)
with AlexNet, it is clear that our models outperform AlexNet across all metrics. In
terms of accuracy, our models are significantly more accurate than AlexNet,
indicating that they correctly classify damaged and undamaged buildings in satellite
images and therefore produce accurate damage assessments overall. Similarly, our
models achieve higher precision, recall, and F1 scores than AlexNet, meaning they
strike a good balance between recall (the true positive rate) and precision, enabling
reliable and consistent identification of damaged buildings. Overall, the average
performance of our models across all metrics is markedly better than that of
AlexNet, indicating the effectiveness of the CNN, MobileNet V2, and U-Net models
in identifying building damage in satellite imagery for disaster recovery efforts.
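For reference, the reported precision, recall, and F1 scores follow their standard
definitions and can be computed, for example, with scikit-learn; the labels below
are hypothetical values used only for illustration.

# Precision, recall, and F1 with scikit-learn (illustrative sketch).
# Example labels: 1 = damaged building, 0 = undamaged (hypothetical values).
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred))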
5. Conclusion
In summary, this study highlights the effectiveness of utilizing deep learning
methodologies, particularly CNNs and architectures such as U-Net, to bolster
various facets of disaster management. By developing and implementing models
focused on disaster occurrence identification, disaster type categorization, and
assessment of building damage levels, we have underscored the potential of these
technological solutions in elevating situational awareness and response efficiency.
Through the accurate identification of disasters, the classification of their types, and
the assessment of building damage levels, the proposed system can support faster
and better-informed disaster response and recovery efforts.
References
[1] Xia, Haobin, Jianjun Wu, Jiaqi Yao, Hong Zhu, Adu Gong, Jianhua Yang, Liuru Hu, and
Fan Mo. "A Deep Learning Application for Building Damage Assessment Using Ultra-High-
Resolution Remote Sensing Imagery in Turkey Earthquake." International Journal of Disaster
Risk Science 14, no. 6 (2023): 947-962.
[2] Xu, Joseph Z., Wenhan Lu, Zebo Li, Pranav Khaitan, and Valeriya Zaytseva. "Building
damage detection in satellite imagery using convolutional neural networks." arXiv preprint
arXiv:1910.06444 (2019).
[3] Kim, Danu, Jeongkyung Won, Eunji Lee, Kyung Ryul Park, Jihee Kim, Sangyoon Park,
Hyunjoo Yang, Sungwon Park, Donghyun Ahn, and Meeyoung Cha. "Disaster assessment
using computer vision and satellite imagery: Applications in water-related building damage
detection." (2023).
[4] Wang, Ying, Alvin Wei Ze Chew, and Limao Zhang. "Building damage detection from
satellite images after natural disasters on extremely imbalanced datasets." Automation in
Construction 140 (2022): 104328.
[5] Bande, Swapnil, and Virendra V. Shete. "Smart flood disaster prediction system using IoT
& neural networks." In 2017 International Conference On Smart Technologies For Smart
Nation (SmartTechCon), pp. 189-194. IEEE, 2017.
[6] Huang, Di, Shuaian Wang, and Zhiyuan Liu. "A systematic review of prediction methods
for emergency management." International Journal of Disaster Risk Reduction 62 (2021):
102412.
[7] Song, Huaxiang, and Yong Zhou. "Simple is best: A single-CNN method for classifying
remote sensing images." Networks & Heterogeneous Media 18, no. 4 (2023).
[8] Jin, Ge, Yanghe Liu, Peiliang Qin, Rongjing Hong, Tingting Xu, and Guoyu Lu. "An End-to-
End Steel Surface Classification Approach Based on EDCGAN and MobileNet
V2." Sensors 23, no. 4 (2023): 1953.