MRI Brain Image Classification Using Various Deep Learning Architectures
20MCS1009 - ROHIT ARYA
GUIDE: DR. A. K. TYAGI
ABSTRACT
Implementing various neural network architectures for the classification of MRI brain images, such as VGG-16, ResNet, and Inception, and proposing a new architecture that will:
1. Improve accuracy
2. Reduce training time
3. Solve the vanishing gradient problem, enabling deeper neural networks
PROPOSED METHODOLOGY
• Given an MRI brain scan, we first perform image preprocessing: most images contain a large black background, which we remove in order to extract the region of interest (ROI).
• We then apply data augmentation, which not only adds new training data but also makes the model robust to rotation, scaling, and cropping.
• After preprocessing and augmentation, the image is passed to a deep learning architecture that extracts features, and a classifier at the end predicts whether a tumor is present or not. The data pipeline is shown in fig.
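As a minimal sketch of the ROI-extraction step (assuming a grayscale NumPy array and a simple intensity threshold, both illustrative choices), the black background can be cropped away by taking the bounding box of the above-threshold pixels:

```python
import numpy as np

def crop_roi(img, threshold=10):
    """Crop away the black background: keep the bounding box of bright pixels."""
    mask = img > threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return img  # nothing above threshold; return the image unchanged
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

# toy example: 6x6 "scan" with a bright 2x3 region on a black background
scan = np.zeros((6, 6), dtype=np.uint8)
scan[2:4, 1:4] = 200
roi = crop_roi(scan)
print(roi.shape)  # (2, 3)
```

In practice the same idea is usually applied with a smoothed threshold mask (e.g. Otsu thresholding) rather than a fixed cutoff.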
ARCHITECTURE-1 CNN MODEL
This architecture takes an input image and performs convolution with padding and batch normalization, followed by max pooling. At the end, the feature maps are flattened and the classification task is performed with a sigmoid activation, since our task is binary classification.
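The pipeline just described (padded convolution, batch normalization, max pooling, flattening, sigmoid classifier) can be sketched in plain NumPy; the toy 8×8 input, the averaging kernel, and the untrained zero classifier weights are all illustrative:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution (single channel, stride 1)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize activations to zero mean and unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def max_pool2(x):
    """2x2 max pooling with stride 2 (assumes even dimensions)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy 8x8 "image" pushed through the pipeline
img = np.random.default_rng(0).random((8, 8))
feat = max_pool2(batch_norm(conv2d_same(img, np.ones((7, 7)) / 49)))
flat = feat.reshape(-1)                   # flatten
w = np.zeros_like(flat); b = 0.0          # untrained classifier weights
prob = sigmoid(flat @ w + b)              # "tumor" probability
print(feat.shape, float(prob))  # (4, 4) 0.5
```

A real implementation would of course use a framework such as Keras or PyTorch with learned kernels; the sketch only shows how the stated operations compose.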
ACCURACY - CONV(7×7)
ARCHITECTURE-2 VGG-16 MODEL
• VGG-16 is a 16-layer architecture with pairs of convolution layers, pooling layers, and fully connected layers at the end.
• The idea behind the VGG network is much deeper networks with much smaller filters. VGGNet increased the depth from the eight layers of AlexNet; its variants have 16 to 19 layers.
• These models keep very small 3×3 convolution filters all the way through, which gives a very simple structure with periodic pooling throughout the network.
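The small-filter idea can be made concrete with a quick parameter count: a stack of three 3×3 convolutions covers the same effective receptive field as one 7×7 convolution but uses fewer weights (the channel count C below is illustrative):

```python
C = 64  # input and output channels (illustrative)

# three stacked 3x3 conv layers vs. one 7x7 layer, same receptive field
params_three_3x3 = 3 * (3 * 3 * C * C)
params_one_7x7 = 7 * 7 * C * C

print(params_three_3x3, params_one_7x7)  # 110592 200704
```

The stacked version also inserts a non-linearity after each layer, which is a second reason VGG prefers small filters.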
RESULTS OF VGG-16 MODEL AFTER IMPLEMENTING
ON MRI DATASET
• Ideally, error should decrease as we increase the number of layers, but in deep neural networks it was found that the training accuracy of a 56-layer network is worse than that of a 20-layer network.
• Hence, the idea of the "skip connection" was introduced.
• With skip connections, if intermediate layers are learning useless features, the network can skip over them easily, just like an identity function, without hurting performance.
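A minimal sketch of the skip connection (a two-layer fully connected residual branch with ReLU; the vector size and the zero weights are illustrative) shows the identity behavior: when the weight layers contribute nothing, the block passes a non-negative input through unchanged.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(F(x) + x): the skip path carries x past the weight layers."""
    return relu(relu(x @ W1) @ W2 + x)

d = 4
x = np.random.default_rng(1).random(d)  # non-negative input

# if the weight layers learn nothing useful (all-zero weights),
# the branch F(x) is zero and the block reduces to the identity
W_zero = np.zeros((d, d))
y = residual_block(x, W_zero, W_zero)
print(np.allclose(y, x))  # True
```

This is why adding residual blocks cannot make the network worse than the shallower network it wraps: the worst case is the identity mapping.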
RESULTS OF RESNET MODEL AFTER
IMPLEMENTING ON MRI DATASET
• In earlier architectures such as AlexNet, different kernel sizes (11×11, 5×5, 3×3) were used in different layers. Instead of deciding which kernel size to use and when to apply pooling, why not apply a pool of kernels together in parallel?
• Disadvantage: the cost of computation increases.
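The "pool of kernels" idea can be sketched at the shape level: parallel branches with different kernel sizes (all 'same'-padded, so the spatial size is preserved) produce outputs that are concatenated along the channel axis. The branch channel counts below are illustrative:

```python
import numpy as np

H, W = 32, 32
# outputs of parallel branches applied to the same input, all 'same'-padded
b1 = np.zeros((H, W, 16))   # 1x1 conv branch
b3 = np.zeros((H, W, 32))   # 3x3 conv branch
b5 = np.zeros((H, W, 8))    # 5x5 conv branch
bp = np.zeros((H, W, 8))    # max-pool branch

# the inception module stacks all branch outputs channel-wise
out = np.concatenate([b1, b3, b5, bp], axis=-1)
print(out.shape)  # (32, 32, 64)
```

Because every branch runs on the full input, the channel count (and hence the computation) grows quickly, which is the disadvantage noted above.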
RESULTS OF INCEPTION MODEL AFTER
IMPLEMENTING ON MRI DATASET
PROPOSED ARCHITECTURE
• A convolution of size 1×1 at the beginning and of size 3×3 in every subsequent layer, with 'same' padding.
• A dropout layer.
• A max pooling layer of size 4×4.
• ReLU activation in the subsequent layers.
• Residual connections added in the subsequent layers.
• A dense output layer with one neuron and a sigmoid activation (since this is a binary classification task).
ADVANTAGES OF PROPOSED ARCHITECTURE
• Since a 1×1 filter is used at the beginning, the number of operations needed to perform the convolutions is reduced.
• Since residual connections are used, if intermediate layers learn useless features the network can skip over them easily, just like an identity function, without hurting performance.
• The number of layers can be increased without hurting performance.
• Training time is reduced, thanks to the skip connections and the smaller number of convolution operations.
• Because residual connections are used, problems like vanishing gradients do not arise.
OPTIMIZATION TECHNIQUES IN PROPOSED ARCHITECTURE
• The 1×1 filter used in the initial step acts as an optimization, because the number of operations performed in the architecture becomes much smaller.
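The saving from the initial 1×1 convolution can be illustrated with multiply counts, using hypothetical channel sizes (typical Inception-style bottleneck numbers, not values measured from this project):

```python
H, W = 28, 28                     # feature-map size (illustrative)
C_in, C_mid, C_out = 192, 16, 32  # channel counts (illustrative)

# 5x5 convolution applied directly to all input channels
direct = H * W * C_out * (5 * 5 * C_in)

# 1x1 convolution down to C_mid channels first, then the 5x5 convolution
bottleneck = H * W * C_mid * (1 * 1 * C_in) + H * W * C_out * (5 * 5 * C_mid)

print(direct, bottleneck)  # 120422400 12443648  (~10x fewer multiplies)
```

The 1×1 layer shrinks the channel dimension before the expensive spatial convolution, which is where the roughly tenfold reduction comes from.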
• If the model is not learning something useful, the skip connection allows those layers to be bypassed easily, like an identity function, so performance is not hurt.
PROPOSED ARCHITECTURE ON MRI IMAGES