Structural Damage Detection Using Deep Learning and Computer Vision

The project focuses on detecting efflorescence damage in buildings using a deep learning image classification model, highlighting the challenges of manual inspection. Various deep learning architectures, including EfficientNet and ResNet, were evaluated for their performance in classifying building damage. The study concludes with suggestions for improving dataset availability and optimizing model performance in future research.


Final Year Project Presentation

Efflorescence Damage Detection Using Image Classification Model

by Mohd Faris Bin Hardji (72360)
Supervisor: Dr. Kuryati Binti Kipli
Table of Contents
• Introduction
• Objectives
• Project Scope
• Literature Review
• Methodology
• Results
• Project Planning
• Conclusion
• References
• Q&A Session
Introduction
Deep Learning & Computer Vision

It is difficult and time-consuming to manually inspect buildings for superficial damage using vision.

COMPUTER VISION
Computer vision is a type of artificial intelligence that allows computers to understand and interpret visual input.

DEEP LEARNING
Deep learning involves using machine learning algorithms to analyze and interpret data through multiple layers of processing.
Problem Statement

Why Efflorescence Damage?
• It has not been studied enough in previous research.
• It is often difficult to detect and quantify, as it can be hidden by other forms of damage or by other materials such as paint, coatings, or mould.
• It is a common problem in many types of structures, caused by the leaching of salts from within the concrete surface, where they crystallize and form white powder deposits that can lead to loss of strength and durability.
Objectives

1. To propose a deep learning image classification model to classify various types of building damage.
2. To build a deep-learning image classification model to classify efflorescence damage.
3. To evaluate the performance of the image classification model for the detection of efflorescence damage.
Literature Review
Methodology

List of Deep Learning Models for Image Classification:
• ResNet50
• Inception Network
• VGGNet (VGG19)
• DenseNet
• MobileNet
• EfficientNet
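All of these architectures are available as pretrained backbones in `tf.keras.applications`. A minimal sketch of building a two-class classifier on top of one of them follows; the helper name `build_classifier`, the frozen-backbone transfer-learning setup, and the specific DenseNet/EfficientNet variants (DenseNet121, EfficientNetB0) are assumptions for illustration, not the study's exact configuration:

```python
import tensorflow as tf

def build_classifier(backbone_name="EfficientNetB0", num_classes=2,
                     weights="imagenet"):
    """Attach a small classification head to a pretrained backbone."""
    # Map the model names listed above to their Keras constructors.
    backbones = {
        "ResNet50": tf.keras.applications.ResNet50,
        "InceptionV3": tf.keras.applications.InceptionV3,
        "VGG19": tf.keras.applications.VGG19,
        "DenseNet121": tf.keras.applications.DenseNet121,
        "MobileNet": tf.keras.applications.MobileNet,
        "EfficientNetB0": tf.keras.applications.EfficientNetB0,
    }
    base = backbones[backbone_name](
        include_top=False, weights=weights, input_shape=(256, 256, 3)
    )
    base.trainable = False  # keep the pretrained features frozen
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the `backbone_name` argument is enough to compare all six architectures under an otherwise identical training setup.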
Flowchart of the Process

The flowchart illustrates the step-by-step process used to build this image classification model.
The Process

1. DATA PREPARATION
• Use 1000 images of efflorescence damage and mould damage as the dataset: 500 for efflorescence damage and 500 for mould damage.
• Data will be sourced from Google Images and Microsoft Images.

2. FORMATTING, ANNOTATION AND LABELLING
• Convert the datasets to JPG format for standardization, to be compatible with Jupyter Lab.
• The dataset is formatted into 256 × 256 pixels, grouped into batches of 32.
• 70% of the dataset is used for training, 20% for validation and 10% for testing.
• The dataset is annotated and labelled automatically by TensorFlow in Jupyter Lab.

3. TRAINING, VALIDATING AND TEST RESULTS
• Use ResNet50, VGGNet, Inception Network, MobileNet, DenseNet, and EfficientNet for structural damage detection in deep learning.
• Epochs are set to 20 for training.
• Test the model using the test set and evaluate performance using measures such as average training time, average training loss, average validation loss, average training accuracy, average validation accuracy, test loss and test accuracy.
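The preparation steps above can be sketched with TensorFlow's dataset utilities. This is a minimal sketch, not the study's exact code: the directory layout, the helper name `load_splits`, and the shuffle seed are assumptions, while the 256 × 256 image size, batch size of 32, and 70/20/10 split follow the slide.

```python
import tensorflow as tf

def load_splits(data_dir, img_size=(256, 256), batch_size=32):
    """Load labelled images and split them 70% / 20% / 10%."""
    # Labels are inferred automatically from the subdirectory names,
    # e.g. data_dir/efflorescence/*.jpg and data_dir/mould/*.jpg
    # (hypothetical layout).
    full_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, image_size=img_size, batch_size=batch_size,
        shuffle=True, seed=42,
    )
    # Split by whole batches: 70% training, 20% validation, 10% test.
    n_batches = int(tf.data.experimental.cardinality(full_ds))
    n_train = int(0.7 * n_batches)
    n_val = int(0.2 * n_batches)
    train_ds = full_ds.take(n_train)
    val_ds = full_ds.skip(n_train).take(n_val)
    test_ds = full_ds.skip(n_train + n_val)
    return train_ds, val_ds, test_ds
```

Shuffling before the split matters here: without it, the take/skip boundaries would put one class entirely into training and the other into validation and test.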
RESULTS

Performance Metrics
Let's understand the definition of every metric in this study.

AVERAGE TRAINING TIME
Average training time in deep learning refers to the typical duration required to train a deep neural network on a given dataset. It represents the time taken for the model to iteratively update its parameters based on the training data to minimize the loss function.

AVERAGE TRAINING LOSS
Represents the inconsistency between the predicted outputs of the model and the actual target values in the training data, serving as a measure of how well the model is fitting the training data.

Average Training Loss = (Sum of Individual Training Losses) / (Number of Training Instances)

AVERAGE VALIDATION LOSS
The average validation loss is a metric that measures the average inconsistency between the predicted outputs of a machine learning model during validation or evaluation and the corresponding ground truth labels or targets.

Average Validation Loss = (Sum of Individual Validation Losses) / (Number of Validation Instances)
AVERAGE TRAINING ACCURACY
Average training accuracy in deep learning refers to the average percentage of correctly classified instances in the training dataset during the training phase of a neural network.

Average Training Accuracy = (Number of Correctly Classified Instances in Training) / (Total Number of Training Instances)

AVERAGE VALIDATION ACCURACY
Average validation accuracy in deep learning refers to the average percentage of correctly classified instances in a separate validation dataset during the training process.

Average Validation Accuracy = (Number of Correctly Classified Instances in Validation) / (Total Number of Validation Instances)

TEST LOSS
Quantifies the discrepancy between the model's predictions and the true values in the test dataset, and serves as a measure of the model's performance on unseen data.

Test Loss = (Sum of Individual Test Losses) / (Number of Test Instances)
TEST ACCURACY
Measures the performance of the trained model in accurately predicting the class labels of unseen data, and provides an indication of its overall effectiveness.

Test Accuracy = (Number of Correctly Classified Test Instances) / (Total Number of Test Instances)
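The averages defined above can be computed directly from a Keras training history. This is a sketch under the assumption that the averages are taken over the 20 training epochs; `history` is the dict found in `model.fit(...).history`, and the key names are the Keras defaults.

```python
def average_metrics(history):
    """Average each per-epoch metric logged by model.fit.

    `history` maps metric names to per-epoch value lists, e.g.
    {"loss": [...], "accuracy": [...], "val_loss": [...], "val_accuracy": [...]}.
    """
    return {
        "avg_training_loss": sum(history["loss"]) / len(history["loss"]),
        "avg_training_accuracy":
            sum(history["accuracy"]) / len(history["accuracy"]),
        "avg_validation_loss":
            sum(history["val_loss"]) / len(history["val_loss"]),
        "avg_validation_accuracy":
            sum(history["val_accuracy"]) / len(history["val_accuracy"]),
    }
```

Test loss and test accuracy are not in the training history; they come from a separate `model.evaluate(test_ds)` call on the held-out 10% test split.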
Objective 1:
To propose a deep learning image classification model to classify various types of building damage.

AVERAGE TRAINING TIME
VGGNet (VGG19) has the longest average training time at 881.5 seconds, followed by DenseNet with 222.7 seconds, ResNet50 with 127.85 seconds, EfficientNet with 119.1 seconds, and Inception Network with 67.05 seconds; the shortest average training time is MobileNet's at 52.25 seconds.

AVERAGE TRAINING LOSS
Inception Network has the highest average training loss at 82%, followed by VGGNet (VGG19) with 64%, DenseNet with 57%, MobileNet with 30%, and ResNet50 with 26%; the lowest average training loss is EfficientNet's at 17%.

AVERAGE VALIDATION LOSS
MobileNet has the highest average validation loss at 256%, followed by Inception Network with 81%, VGGNet (VGG19) with 65%, DenseNet with 64%, and ResNet50 with 57%; the lowest average validation loss is EfficientNet's at 36%.

AVERAGE TRAINING ACCURACY
EfficientNet has the highest average training accuracy at 96%, followed by ResNet50 with 95%, MobileNet with 91%, DenseNet with 83%, and VGGNet (VGG19) with 74%; the lowest average training accuracy is Inception Network's at 70%.

AVERAGE VALIDATION ACCURACY
EfficientNet has the highest average validation accuracy at 92%, followed by ResNet50 with 91%, DenseNet with 80%, VGGNet (VGG19) with 74%, and Inception Network with 70%; the lowest average validation accuracy is MobileNet's at 65%.

TEST LOSS
MobileNet has the highest test loss at 333%, followed by VGGNet (VGG19) and Inception Network, which both share 70%, then ResNet50 with 64% and DenseNet with 61%; the lowest test loss is EfficientNet's at 41%.

TEST ACCURACY
EfficientNet has the highest test accuracy at 93%, followed by ResNet50 with 91%, DenseNet with 84%, VGGNet (VGG19) with 81%, and Inception Network with 76%; the lowest test accuracy is MobileNet's at 68%.
Objective 2:
To build a deep-learning image classification model to classify efflorescence damage.

EfficientNet
Objective 3:
To evaluate the performance of the image classification model for the detection of efflorescence damage.

Predicted images of efflorescence and mould damage by EfficientNet.

As can be seen from the figure, as expected, the EfficientNet image classification model accurately detects efflorescence and mould damage when predicting 9 test images. The mould and the efflorescence can be classified and differentiated by EfficientNet, which shows that EfficientNet is a reliable tool, because the two kinds of damage are near identical to each other.
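The prediction step described above can be sketched as follows. This is a hedged sketch: the helper name `predict_batch` and the class-name order are assumptions for illustration; in practice the label order comes from the data loader.

```python
import numpy as np

# Assumed alphabetical label order, matching how TensorFlow's
# directory-based loader sorts class subfolders.
CLASS_NAMES = ["efflorescence", "mould"]

def predict_batch(model, images):
    """Return the predicted class name for each image in a batch.

    `model` is any trained Keras classifier whose softmax output has
    one column per entry in CLASS_NAMES.
    """
    probs = model.predict(images, verbose=0)  # shape (n, num_classes)
    return [CLASS_NAMES[i] for i in np.argmax(probs, axis=1)]
```

Running this over a batch of 9 test images yields the per-image labels shown in the prediction figure.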
Project Planning

Project Planning Gantt Chart
Conclusion

Limitations
• Limited datasets for efflorescence damage, as very few studies classify this kind of image.
• No usage of NVIDIA's CUDA and cuDNN to accelerate GPU performance for training on the datasets: at the time of this study, both libraries had not yet been updated to be compatible with TensorFlow.

Suggestions
• More datasets of efflorescence damage and other kinds of damage will become available to be studied and researched.
• More options for optimization of EfficientNet in the future.
• Computational power and cost can be decreased as image classification becomes more compact and mobile.
References
[18] N. Wang, X. Zhao, P. Zhao, Y. Zhang, Z. Zou, and J. Ou, “Automatic damage detection of historic masonry
buildings based on mobile deep learning,” Autom Constr, vol. 103, pp. 53–66, Jul. 2019, doi:
10.1016/j.autcon.2019.03.003.
[19] S. Li, X. Zhao, and G. Zhou, “Automatic pixel-level multiple damage detection of concrete structure using
fully convolutional network,” Computer-Aided Civil and Infrastructure Engineering, vol. 34, no. 7, pp. 616–634,
Jul. 2019, doi: 10.1111/mice.12433.
[20] B. Kim and S. Cho, “Automated multiple concrete damage detection using instance segmentation deep
learning model,” Applied Sciences (Switzerland), vol. 10, no. 22, pp. 1–17, Nov. 2020, doi:
10.3390/app10228008.
[21] Y. Yao, Y. Yang, Y. Wang, and X. Zhao, “Artificial intelligence-based hull structural plate corrosion damage
detection and recognition using convolutional neural network,” Applied Ocean Research, vol. 90, Sep. 2019, doi:
10.1016/j.apor.2019.05.008.
[22] Y. J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous Structural Visual
Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types,” Computer-Aided Civil and
Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, Sep. 2018, doi: 10.1111/mice.12334.
[23] J. Guo, C. Liu, J. Cao, and D. Jiang, “Damage identification of wind turbine blades with deep convolutional
neural networks,” Renew Energy, vol. 174, pp. 122–133, Aug. 2021, doi: 10.1016/j.renene.2021.04.040.
[24] H. Maeda, Y. Sekimoto, T. Seto, T. Kashiyama, and H. Omata, “Road Damage Detection Using Deep Neural
Networks with Images Captured Through a Smartphone,” Jan. 2018, doi: 10.1111/mice.12387.
[25] H. K. Shin, Y. H. Ahn, S. H. Lee, and H. Y. Kim, “Automatic concrete damage recognition using multi-level
attention convolutional neural network,” Materials, vol. 13, no. 23, pp. 1–13, Dec. 2020, doi:
10.3390/ma13235549.
Any questions?
I would appreciate it so much!

“The advance of technology is based on making it fit in so that you don't really even notice it, so it's part of everyday life.”
— Bill Gates
