ITPDL03
1. Abstract:
Machine learning models trained on laboratory data achieve an AUC of 0.79, but
models trained on remotely collected data initially performed poorly, with an AUC of 0.51.
INTRODUCTION:
Multiple sclerosis (MS) is a chronic autoimmune disease that affects the central
nervous system, particularly the brain and spinal cord, leading to a range of
neurological impairments. Magnetic Resonance Imaging (MRI) has been widely
used to identify and track the progression of MS lesions, offering detailed views
of the brain's structural changes. Recent advancements in deep learning,
especially Convolutional Neural Networks (CNNs), have shown promise in
automating the detection and classification of MS lesions from MRI data. These
CNN-based models provide accurate, efficient, and scalable solutions for
diagnosing multiple sclerosis, improving patient outcomes through early
detection and better treatment planning.
PROPOSED SYSTEM:
Year : 2024
Multiple Sclerosis (MS) is a brain disease that causes visual, sensory, and motor
problems and has a detrimental effect on the functioning of the nervous system.
Multiple screening methods have been proposed to diagnose MS; among them,
magnetic resonance imaging (MRI) has received considerable attention from
physicians. MRI modalities provide physicians with fundamental information
about the structure and function of the brain, which is crucial for the rapid
diagnosis of MS lesions. However, diagnosing MS from MRI is time-consuming,
tedious, and prone to manual errors. Research on computer-aided diagnosis
systems (CADS) based on artificial intelligence (AI) for MS covers both
conventional machine learning and deep learning (DL) methods. In conventional
machine learning, the feature extraction, feature selection, and classification
steps are carried out by trial and error; in DL, by contrast, these steps are
performed by deep layers whose values are learned automatically. This paper
provides a complete review of automated MS diagnosis methods based on DL
techniques and MRI neuroimaging modalities. First, the steps involved in the
various CADS proposed using MRI modalities and DL techniques for MS
diagnosis are investigated, and the important preprocessing techniques employed
in these works are analyzed. Most of the published papers on MS diagnosis using
MRI modalities and DL are presented. The most significant challenges and future
directions of automated MS diagnosis using MRI modalities and DL techniques
are also discussed.
8.1 Aim:
The aim of this study is to develop a deep learning-based model using CNN
architectures to automatically detect and classify multiple sclerosis lesions from
MRI data, enhancing diagnostic accuracy and reducing the time taken for
manual assessment.
8.2 Objectives:
Scope
Goal
The goal of this project is to design a CNN-based deep learning model that can
detect and classify multiple sclerosis lesions from MRI data with high accuracy,
contributing to more efficient and reliable diagnosis in medical practice.
DESIGN ARCHITECTURE
General
[Block diagram: source MRI images → dataset preprocessing → training and testing dataset split → CNN training → prediction and classification of sclerosis]
Use case diagrams are used for high-level requirement analysis of a system:
when the requirements of a system are analyzed, its functionalities are captured
as use cases. In other words, use cases are nothing but the system functionalities
written down in an organized manner.
11.4 CLASS DIAGRAM:
MODULE DESCRIPTION
The dataset is imported using the Keras image data generator preprocessing
function, in which the image size, rescaling, zoom range, and horizontal flip are
configured. The image dataset is then read from its folders through the data
generator, with separate train, test, and validation sets, and with the target size,
batch size, and class mode set in this function. The resulting data are used to
train our own network, built by adding CNN layers, as sketched below.
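A minimal sketch of this data import step, assuming hypothetical "dataset/train" and "dataset/test" folders organised into one sub-folder per class; the image size and batch size are illustrative values, not necessarily the settings used in this project.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values and apply light augmentation (zoom, horizontal flip).
train_datagen = ImageDataGenerator(rescale=1.0 / 255,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Read images from class sub-folders; target_size, batch_size and class_mode
# are the settings named in the text above.
train_generator = train_datagen.flow_from_directory("dataset/train",
                                                    target_size=(128, 128),
                                                    batch_size=32,
                                                    class_mode="categorical")
test_generator = test_datagen.flow_from_directory("dataset/test",
                                                  target_size=(128, 128),
                                                  batch_size=32,
                                                  class_mode="categorical")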
The dataset is then trained using the classifier with the fit generator function,
specifying the training steps per epoch, the total number of epochs, the
validation data, and the validation steps.
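A minimal training sketch for this step, assuming a compiled Keras model (built from the CNN layers described in the methodology section below) and the train_generator/test_generator objects from the sketch above; the epoch count is illustrative. In current Keras, model.fit plays the role of the fit generator function mentioned here.

# Train the CNN on the generator output; validation data is evaluated each epoch.
model.fit(train_generator,
          steps_per_epoch=len(train_generator),
          epochs=25,
          validation_data=test_generator,
          validation_steps=len(test_generator))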
DATA PREPROCESSING:
Input Layer:
The input layer of a CNN contains the image data, which is represented as a
three-dimensional matrix. For a fully connected input it must be reshaped into a
single column: an image of dimension 28 x 28 (784 values) is converted into a
784 x 1 column before being fed to the input.
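A small illustration of this reshaping, using NumPy on a dummy 28 x 28 image.

import numpy as np

image = np.random.rand(28, 28)      # dummy grayscale image
column = image.reshape(784, 1)      # flattened into a single column
print(column.shape)                 # (784, 1)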
Convo Layer:
The convolution layer is sometimes called the feature extractor layer because
the features of the image are extracted within this layer. First, a part of the
image is connected to the convolution layer to perform the convolution
operation, calculating the dot product between the receptive field (a local region
of the input image that has the same size as the filter) and the filter. The result
of this operation is a single value of the output volume. The filter then slides to
the next receptive field of the same input image by a stride, and the same
operation is repeated until the whole image has been covered. The output
becomes the input of the next layer.
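A minimal NumPy sketch of this operation: the filter slides over the image by a stride, and each receptive field is combined with the filter by an element-wise product and sum (a dot product). The convolve2d helper and the averaging filter are illustrative, not part of this project's code.

import numpy as np

def convolve2d(image, kernel, stride=1):
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Receptive field: a local region of the input the same size as the filter.
            field = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(field * kernel)   # dot product -> one output value
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                   # simple averaging filter
print(convolve2d(image, kernel, stride=1).shape)  # (3, 3)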
Pooling Layer:
The pooling layer is used to reduce the spatial volume of the input after
convolution and is placed between two convolution layers. Applying a fully
connected (FC) layer directly after a convolution layer without pooling would be
computationally expensive, so max pooling is used to reduce the spatial volume
of the input. For example, applying max pooling on a single depth slice with a
stride of 2 reduces a 4 x 4 input to 2 x 2.
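A small NumPy sketch of 2 x 2 max pooling with a stride of 2, reducing the 4 x 4 input mentioned above to 2 x 2.

import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 8],
              [3, 1, 4, 6]], dtype=float)

# Group the input into 2 x 2 blocks and take the maximum of each block.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[6. 4.]
                #  [7. 9.]]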
Fully Connected Layer:
The fully connected layer involves weights, biases, and neurons. It connects the
neurons in one layer to the neurons in another layer and is used, through
training, to classify images into their different categories.
Softmax or Logistic Layer:
The softmax or logistic layer is the last layer of the CNN and resides at the end
of the FC layer. A logistic (sigmoid) unit is used for binary classification, while
softmax is used for multi-class classification.
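A minimal Keras sketch of such a fully connected classification head, with illustrative feature-map and layer sizes: softmax for multi-class output, and a sigmoid (logistic) alternative shown as a comment for binary tasks.

from tensorflow.keras import layers, models

num_classes = 2   # illustrative; use the number of sclerosis classes in the dataset

head = models.Sequential([
    layers.Flatten(input_shape=(4, 4, 64)),          # flattened feature maps
    layers.Dense(128, activation="relu"),            # fully connected layer
    layers.Dense(num_classes, activation="softmax")  # softmax for multi-class
    # layers.Dense(1, activation="sigmoid")          # logistic unit for binary tasks
])
head.summary()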
Output Layer:
The output layer contains the label, which is represented in one-hot encoded
form.
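A small sketch of one-hot encoded labels using the Keras to_categorical utility; the label values and number of classes are illustrative.

from tensorflow.keras.utils import to_categorical

labels = [0, 2, 1, 2]                         # illustrative integer class labels
print(to_categorical(labels, num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]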
12 METHODOLOGY
CNN Weights
The training dataset is used to train the CNN model so that it can identify a test
image and the condition it shows. The CNN is built from several layer types:
Dense, Dropout, Activation, Flatten, Convolution2D, and MaxPooling2D. After
the model is trained successfully, the software can identify the sclerosis
classification of the images contained in the dataset. After successful training
and preprocessing, the test image is compared against the trained model to
predict the sclerosis classification.
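A minimal sketch of a CNN assembled from the layer types named above (Convolution2D, MaxPooling2D, Flatten, Dense, Dropout, Activation); the filter counts, input size, and number of classes are illustrative assumptions, not the exact network used in this project.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), input_shape=(128, 128, 3)),
    layers.Activation("relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, (3, 3)),
    layers.Activation("relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),   # sclerosis classes (illustrative)
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])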
TYPES OF CNN:
GoogleNet
LeNet
GOOGLE NET:
The main innovation in the GoogleNet architecture is the use of the "Inception
module," which incorporates multiple parallel convolutional operations of
different filter sizes to capture features at various scales. This allows the
network to learn both fine-grained and high-level abstract features efficiently.
Global Average Pooling: Instead of using fully connected layers at the end of
the network, GoogleNet employs global average pooling. This operation
computes the average value for each feature map in the last convolutional layer,
resulting in a single vector that is used as input to the final softmax layer for
classification. Global average pooling reduces the number of parameters and
helps prevent overfitting.
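A minimal sketch of an Inception-style module followed by global average pooling, illustrating the two ideas described above; the filter counts and input size are illustrative and far smaller than in the full GoogleNet.

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(128, 128, 3))

# Parallel convolutions of different filter sizes, concatenated along channels.
branch1 = layers.Conv2D(16, (1, 1), padding="same", activation="relu")(inputs)
branch3 = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)
branch5 = layers.Conv2D(16, (5, 5), padding="same", activation="relu")(inputs)
pool = layers.MaxPooling2D((3, 3), strides=1, padding="same")(inputs)
pool = layers.Conv2D(16, (1, 1), padding="same", activation="relu")(pool)
x = layers.Concatenate()([branch1, branch3, branch5, pool])

# Global average pooling replaces large fully connected layers before softmax.
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.summary()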
GoogleNet was one of the early deep learning architectures to demonstrate the
effectiveness of deep networks in image classification tasks. It achieved top
accuracy in the ILSVRC 2014 competition while being more computationally
efficient compared to other models available at the time. Since its introduction,
several improved versions of the Inception architecture, such as Inception-v2,
Inception-v3, and Inception-v4, have been developed by the research
community.
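GoogleNet itself is not bundled with Keras, but the Inception-v3 successor mentioned above is available in keras.applications; the following is a hedged illustration of reusing it for a two-class task, not necessarily the configuration used in this report.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception-v3 pretrained on ImageNet, without its original classifier head.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(2, activation="softmax")(x)   # illustrative two-class head
model = models.Model(base.input, outputs)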
ACCURACY:
LENET:
Input Layer: This layer accepts the input image, which is typically a grayscale
image of a handwritten digit. The input images are usually of size 32x32 pixels.
Fully Connected Layers: The subsampled feature maps are then flattened and
passed through fully connected layers, which are traditional neural network
layers. These layers are responsible for making classification decisions based on
the extracted features.
Output Layer: The final fully connected layer produces the output of the
network, which represents the predicted class probabilities. For handwritten
digit recognition, this would typically involve 10 output nodes, each
corresponding to a digit class (0 to 9).
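A minimal LeNet-5 style sketch matching the description above: a 32 x 32 grayscale input, convolution and subsampling (pooling) layers, fully connected layers, and a 10-way output; the layer sizes follow the classic design and are illustrative here.

from tensorflow.keras import layers, models

lenet = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="tanh", input_shape=(32, 32, 1)),
    layers.AveragePooling2D((2, 2)),            # subsampling layer
    layers.Conv2D(16, (5, 5), activation="tanh"),
    layers.AveragePooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),       # fully connected layers
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="softmax"),     # 10 digit classes (0 to 9)
])
lenet.summary()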
LeNet was designed before the deep learning era as we know it today, so it has
fewer layers compared to modern architectures like ResNet, VGG, or Inception.
However, its fundamental design principles, including the use of convolutional
and pooling layers, inspired the development of more complex CNN
architectures. LeNet demonstrated the potential of neural networks in handling
image recognition tasks and played a pivotal role in the resurgence of interest in
neural networks during the early 2000s.
ACCURACY:
LIST OF MODULES:
1. ManualNet
2. GoogleNet
3. LeNet
4. Deploy
CLASSIFICATIONS:
MANUALNET:
DEPLOY
OUTPUT SCREENSHOT:
Conclusion:
Future Work
Future work will involve enhancing the CNN model by incorporating more
advanced techniques like transfer learning and attention mechanisms to improve
performance further. Additionally, expanding the model's application to include
other neurological disorders, such as Alzheimer's or Parkinson's disease, will
increase its clinical utility. Further research can also explore real-time
deployment in clinical settings and integration with other diagnostic tools for a
comprehensive approach to neurological disease detection.