
MRI BRAIN IMAGE CLASSIFICATION USING VARIOUS DEEP LEARNING ARCHITECTURES

20MCS1009-ROHIT ARYA
GUIDE NAME-DR A K TYAGI
ABSTRACT

Implementing various neural network architectures for the classification of MRI brain images, such as VGG-16, ResNet and Inception networks, and proposing a new architecture that will:
1. Improve the accuracy
2. Reduce the training time
3. Solve the vanishing-gradient problem so that deeper neural networks can be implemented
PROPOSED METHODOLOGY
• Given an MRI brain scan image, we first perform image preprocessing: most of the images contain a large black background region, so we remove it and keep only our region of interest (ROI).

• Then we perform data augmentation. Since the dataset contains only 520 images of healthy and unhealthy brain scans, we increase its size by generating new images through horizontal shifts, vertical shifts, rotation, zoom, etc.

• This not only adds new data but also makes the model robust to rotation, scale and cropping.

• After image preprocessing and data augmentation, we pass the images through a deep learning architecture to extract features, and then use a classifier at the end to decide whether a tumor is present. The data pipeline is shown in the figure, and a code sketch follows below.
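A minimal sketch of this preprocessing and augmentation step, assuming a Keras/TensorFlow plus OpenCV setup (the slides do not name a framework); the threshold value and the augmentation ranges are illustrative assumptions.

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def crop_brain_roi(image, threshold=10):
    """Drop the black background and keep the brain region of interest (ROI)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    coords = np.argwhere(gray > threshold)        # foreground (non-black) pixels
    if coords.size == 0:                          # blank image: return unchanged
        return image
    y0, x0 = coords.min(axis=0)
    y1, x1 = coords.max(axis=0) + 1
    return image[y0:y1, x0:x1]

# Augmentation with horizontal/vertical shifts, rotation and zoom, as listed above.
augmenter = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255,
)
```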
ARCHITECTURE-1 CNN MODEL

This architecture takes an input image and performs convolution with padding and batch normalization, followed by max pooling. At the end, the feature maps are flattened and the classification is done with a sigmoid activation, since our task is binary classification (a sketch follows below).
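A minimal Keras sketch of this baseline CNN, assuming a 7x7 convolution (matching the accuracy slide that follows), 32 filters and a 224x224 RGB input; the filter count and the input size are assumptions, not taken from the slides.

```python
from tensorflow.keras import layers, models

def build_baseline_cnn(input_shape=(224, 224, 3)):
    """Conv(7x7) with same padding + batch normalization, max pooling,
    flattening, and a single sigmoid unit for binary classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (7, 7), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((4, 4)),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```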
ACCURACY - CONV(7,7)

After running for multiple epochs, both the training loss and the validation loss decrease.
The training accuracy of the model is quite high at 92.3%, while the validation accuracy is 85%.
The accuracy on the test data is 81%.
VGG-16 ARCHITECTURE

• VGG-16 is a 16-layer architecture built from stacked convolution layers and pooling layers, with fully connected layers at the end.
• The idea behind the VGG network is to build much deeper networks with much smaller filters. VGGNet increased the depth from the eight layers of AlexNet to variants with 16 to 19 layers.
• These models keep very small 3x3 convolution filters all the way through, which gives a very simple structure with periodic pooling throughout the network (see the transfer-learning sketch below).
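The slides do not say whether ImageNet-pretrained weights were used; one common way to apply VGG-16 to a small MRI dataset is transfer learning with a frozen convolutional base, sketched below (the 256-unit dense layer is an assumption).

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained convolutional base frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # binary output: tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```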
RESULTS OF VGG-16 MODEL AFTER IMPLEMENTING
ON MRI DATASET

After running for 50 epochs, VGG-16 gives an accuracy of 70% on the test data.
RESNET-50

• Ideally, as we increase the number of layers, the error should decrease, but in deep neural networks it was found that the training accuracy of a 56-layer network is worse than that of a 20-layer network.
• Hence, the idea of the "skip connection" was introduced.
• With a skip connection, if the intermediate layers are learning useless features, we can skip over them easily, just like an identity function, without hurting the performance (a sketch of such a block follows below).
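A minimal Keras sketch of such a skip (residual) block, using two 3x3 convolutions and a 1x1 projection when the channel counts differ; ResNet-50 itself uses a deeper bottleneck variant, so this is only an illustration of the idea.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    """If the two conv layers learn nothing useful, the skip connection
    lets the input pass through almost unchanged (identity mapping)."""
    shortcut = x
    y = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    if shortcut.shape[-1] != filters:        # match channel count when needed
        shortcut = layers.Conv2D(filters, (1, 1), padding="same")(shortcut)
    y = layers.Add()([y, shortcut])          # skip connection
    return layers.Activation("relu")(y)
```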
RESULTS OF RESNET MODEL AFTER
IMPLEMENTING ON MRI DATASET

After running for 50 epochs, ResNet gives an accuracy of 81% on the test data.
Moreover, the training time is reduced due to the skip connections.
INCEPTION_V3

• In earlier architectures such as AlexNet, different kernel sizes (11x11, 5x5, 3x3) were used in different layers. Instead of deciding which kernel size to use and when to apply pooling, why not use a stack of kernels and a pooling branch together in the same layer?
• Disadvantage: the cost of computation increases (see the module sketch below).
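A simplified ("naive") inception module, sketched in Keras to illustrate running several kernel sizes and a pooling branch in parallel and concatenating the results; Inception v3 itself factorizes the larger kernels, so this is an illustration of the idea rather than the exact module.

```python
from tensorflow.keras import layers

def inception_module(x, f1, f3, f5, fpool):
    """Apply 1x1, 3x3 and 5x5 convolutions plus max pooling in parallel
    and concatenate, instead of committing to a single kernel size."""
    b1 = layers.Conv2D(f1, (1, 1), padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, (3, 3), padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, (5, 5), padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D((3, 3), strides=(1, 1), padding="same")(x)
    bp = layers.Conv2D(fpool, (1, 1), padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])
```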
RESULTS OF INCEPTION MODEL AFTER
IMPLEMENTING ON MRI DATASET

After running for 50 epochs, the Inception model gives an accuracy of 81% on the test data.
PROPOSED ARCHITECTURE

• A convolution of size 1x1 at the beginning, and convolutions of size 3x3 with same padding in every subsequent layer.
• Dropout layers.
• Max pooling layers of size 4x4.
• ReLU is used as the activation function in the subsequent layers.
• Residual (skip) connections are added in the subsequent layers.
• A fully connected output layer (Dense) with a single neuron and a sigmoid activation, since this is a binary classification task (a sketch of this stack follows below).
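A minimal Keras sketch of this stack; the filter counts (32 and 64), the dropout rate (0.25) and the number of residual stages (two) are not given in the slides and are illustrative assumptions.

```python
from tensorflow.keras import layers, models

def build_proposed_model(input_shape=(224, 224, 3)):
    """1x1 convolution first, then 3x3 residual blocks with same padding,
    dropout, 4x4 max pooling, and a single sigmoid output unit."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, (1, 1), padding="same", activation="relu")(inputs)

    for filters in (32, 64):                          # assumed depth/filter counts
        shortcut = x
        y = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        y = layers.Conv2D(filters, (3, 3), padding="same")(y)
        if shortcut.shape[-1] != filters:             # 1x1 projection on the skip path
            shortcut = layers.Conv2D(filters, (1, 1), padding="same")(shortcut)
        x = layers.Activation("relu")(layers.Add()([y, shortcut]))
        x = layers.Dropout(0.25)(x)
        x = layers.MaxPooling2D((4, 4))(x)

    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```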
ADVANTAGES OF PROPOSED ARCHITECTURE
• Since a 1x1 filter is used at the beginning, the number of operations needed to perform the convolutions is reduced.
• Since residual connections are used, if our intermediate layers are learning useless features, we can skip over them easily, just like an identity function, without hurting the performance.
• We can increase the number of layers without hurting the performance.
• The time required to train the model is reduced, thanks to the skip connections and the smaller number of operations in the convolutions.
• Since residual connections are used, problems like vanishing gradients do not arise.
OPTIMIZATION TECHNIQUE IN PROPOSED
ARCHITECTURE

• The 1x1 filter used in the initial step acts as an optimization, because the number of operations performed in the above architecture becomes much smaller (a rough count follows below).
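To see why a 1x1 filter reduces the work, here is a rough multiply count for a single layer; the channel sizes (192 in, 16 after the 1x1 bottleneck, 32 out) and the 28x28 feature map are illustrative numbers, not taken from the slides.

```python
# Multiplies for a 5x5 convolution applied directly vs. after a 1x1 reduction.
H, W = 28, 28
direct  = H * W * 32 * (5 * 5 * 192)                              # 192 -> 32 channels
reduced = H * W * 16 * (1 * 1 * 192) + H * W * 32 * (5 * 5 * 16)  # 192 -> 16 -> 32
print(f"direct: {direct:,}  with 1x1 bottleneck: {reduced:,}")    # ~10x fewer multiplies
```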
OPTIMIZATION TECHNIQUE IN PROPOSED
ARCHITECTURE

• If some part of the model is not learning anything useful, we can skip over it easily using the skip connection, and hence the performance is not hurt.
PROPOSED ARCHITECTURE ON MRI IMAGES

• When applied to the MRI image dataset, the above architecture gives an accuracy of 86.8%.
THANK YOU
