Multi-Model Fusion For Prediction and Segmentation of Brain Tumor Using Convolutional Neural Network For Streamlined Healthcare
Abstract:- Our research focuses on advancing brain tumor analysis through a sophisticated approach that integrates MRI and CT data within a user-friendly Flask-based web application. Landmark-based registration ensures precise alignment of diverse patient images, establishing a standardized coordinate system for meticulous anatomical comparisons. To enhance the VGG-19 CNN architecture's analytical capabilities, we employ transfer learning, enabling nuanced analysis. The subsequent Image Fusion process optimizes tumor segmentation accuracy by leveraging the complementary strengths of CT and MRI data. The Watershed transformation isolates regions of interest, facilitating a more refined segmentation process. Additionally, a CNN predicts the presence of brain tumors, streamlining detection and prognosis and ultimately contributing to a healthcare paradigm that is both efficient and patient-centered. These advancements not only streamline the intricate examination of brain tumors but also enhance accessibility and accuracy in healthcare practices.

Keywords:- CNN, Flask, VGG-19, Image Fusion, Watershed Transformation.

I. INTRODUCTION

The integration of medical imaging, specifically Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), has significantly advanced our understanding of brain anatomy within the healthcare domain. Our initiative addresses the crucial need for accurate and effective brain tumor analysis. To initiate this analysis, users upload CT and MRI scans through our Flask-based web application. A pivotal component of our methodology is landmark-based registration, which ensures precise alignment of images for thorough comparison across diverse patient datasets. To further enhance the accuracy of tumor segmentation, we employ transfer learning with a Convolutional Neural Network (CNN). This approach allows our system to leverage pre-existing knowledge, improving the nuanced analysis of brain images. The culmination of our methodology is the novel Image Fusion method, a process that merges information from both CT and MRI scans. This fusion enhances the precision of tumor segmentation, providing more detailed insights for improved diagnostic outcomes in healthcare practices.

II. RELATED WORKS

Numerous studies have been conducted in the field of disease prediction using different machine learning techniques and algorithms that can potentially be used by medical and healthcare institutions. This paper reviews some of those studies along with the techniques and results they propose.

Deep Multi-Modal Fusion for Brain Tumor Segmentation by Smith, J., et al.
This work explores a deep multi-modal fusion approach for brain tumor segmentation, similar to our project. The authors leverage a combination of CT and MRI data, employing advanced fusion techniques and deep learning methodologies. Their study provides valuable insights into the challenges and advantages of integrating multi-modal information for enhanced segmentation accuracy.

Convolutional Neural Networks for Brain Tumour Segmentation by Abhishta Bhandari, Jarrad Koppen and Marc Agzarian
The advent of quantitative image analysis, particularly in the form of radiomics, has become pivotal in predicting clinical outcomes for brain tumors such as glioblastoma.
Architecture Diagram
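The architecture chains together the stages outlined above: landmark-based registration of the uploaded scans, transfer learning on a VGG-19 backbone, CT-MRI image fusion, watershed-based segmentation, and CNN-based tumor prediction. The sketch below illustrates only the transfer-learning classifier and a simple weighted-average fusion step, assuming TensorFlow/Keras and NumPy; the helper names (build_tumor_classifier, fuse_ct_mri), layer sizes, and fusion weight are illustrative placeholders rather than our exact configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19

def build_tumor_classifier(input_shape=(224, 224, 3)):
    # VGG-19 pre-trained on ImageNet, without its dense head (transfer learning).
    base = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional feature extractor
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # tumor present / absent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def fuse_ct_mri(ct_slice, mri_slice, alpha=0.5):
    # Placeholder fusion: a weighted average of co-registered, normalized slices.
    return alpha * np.asarray(ct_slice, dtype=np.float32) + \
        (1.0 - alpha) * np.asarray(mri_slice, dtype=np.float32)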
Convolution Layer
A CNN's convolution layer applies convolution operations to extract features from the input data. Learnable filters slide over the input to capture local patterns, allowing the network to automatically learn hierarchical representations. This layer is essential for identifying elements such as edges and textures, making it the primary mechanism for extracting features from visual input.
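As a minimal illustration (not our exact layer configuration), the following Keras snippet applies a single convolution layer with 32 learnable 3x3 filters to one grayscale slice, producing one feature map per filter.

import tensorflow as tf

conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same")
x = tf.random.normal([1, 224, 224, 1])  # one 224x224 grayscale slice (batch of 1)
feature_maps = conv(x)
print(feature_maps.shape)  # (1, 224, 224, 32): one feature map per filter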
Fig 4 Dataset Sample: Negative CT and MRI Scan

ReLU Layer
The Rectified Linear Unit (ReLU) layer applies the ReLU activation function, which introduces non-linearity into the network. By improving the network's ability to learn intricate patterns and propagate gradients, it accelerates convergence during training. ReLU thus helps the network represent complex, non-linear relationships in the data.
IV. RESULTS