
International Journal of Innovative Science and Research Technology, Volume 9, Issue 1, January 2024 (ISSN No: 2456-2165)

Multi-Model Fusion for Prediction and Segmentation of Brain Tumor using Convolutional Neural Network for Streamlined Healthcare

Aditya Suyash, Ritik Raj, Dr. R. Thilagavathy
Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India

Abstract:- Our research focuses on advancing brain tumour analysis through a sophisticated approach that integrates MRI and CT data within a user-friendly Flask-based web application. The landmark-based registration ensures precise alignment of diverse patient images, establishing a standardized coordinate system for meticulous anatomical comparisons. To enhance the VGG-19 CNN architecture's analytical capabilities, we employ transfer learning, enabling nuanced analysis. The subsequent Image Fusion process optimizes tumor segmentation accuracy by leveraging the complementary strengths of CT and MRI data. The Watershed transformation isolates regions of interest, facilitating a more refined segmentation process. Additionally, a CNN predicts the presence of brain tumors, streamlining detection and prognosis, ultimately contributing to a healthcare paradigm that is both efficient and patient-centered. These advancements not only streamline the intricate examination of brain tumors but also enhance accessibility and accuracy in healthcare practices.

Keywords:- CNN, Flask, VGG-19, Image Fusion, Watershed Transformation.

I. INTRODUCTION

The integration of medical imaging, specifically Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), has significantly advanced our understanding of brain anatomy within the healthcare domain. Our initiative addresses the crucial need for accurate and effective brain tumor analysis. To initiate this advanced analysis, users seamlessly upload CT and MRI scans through our Flask-based web application. A pivotal component of our methodology is the landmark-based registration, ensuring precise alignment of images for thorough comparison across diverse patient datasets. To further enhance the accuracy of tumor segmentation, we employ transfer learning with a Convolutional Neural Network (CNN). This sophisticated approach allows our system to leverage pre-existing knowledge, improving the nuanced analysis of brain images. The culmination of our methodology is the novel Image Fusion method, a process that merges information from both CT and MRI scans. This fusion enhances the precision of tumor segmentation, providing more detailed insights for improved diagnostic outcomes in healthcare practices.

II. RELATED WORKS

Numerous studies have been done in the field of disease prediction using different machine learning techniques and algorithms which can potentially be used by various medical and healthcare institutions. This section reviews some of those studies and the techniques and results proposed in them.

Deep Multi-Modal Fusion for Brain Tumor Segmentation by Smith, J., et al.

- This work explores a deep multi-modal fusion approach for brain tumor segmentation, similar to our project. The authors leverage a combination of CT and MRI data, employing advanced fusion techniques and deep learning methodologies. Their study provides valuable insights into the challenges and advantages of integrating multi-modal information for enhanced segmentation accuracy.

Convolutional Neural Networks for Brain Tumour Segmentation by Abhishta Bhandari, Jarrad Koppen and Marc Agzarian

- The advent of quantitative image analysis, particularly in the form of radiomics, has become pivotal in predicting clinical outcomes for brain tumors like glioblastoma multiforme (GBM). This involves assessing various quantitative features, including shape, texture, and signal intensity, providing a comprehensive understanding of the pathology.
- The study emphasizes the role of Convolutional Neural Networks (CNNs) in addressing inconsistent manual segmentation in brain tumor analysis. It further explores the innovative field of radiomics, aiming to extract quantitative features for predicting critical clinical outcomes such as survival and therapy response. This dual approach highlights the potential of advanced technologies in automating and enhancing the accuracy of brain tumor segmentation.

Deep Learning for Brain Tumor Segmentation: A Survey of State-of-the-Art by Tirivangani Magadza and Serestina Viriri

- The paper reviews state-of-the-art deep learning methods for brain tumor segmentation, emphasizing their effectiveness in overcoming challenges. Deep learning, with its remarkable performance, emerges as a solution for quantitative analysis. The discussion concludes with a critical examination of ongoing challenges in the realm of medical image analysis.
- Notable architectures like ensemble methods and UNet models show promise but require careful consideration of pre-processing, weight initialization, and class imbalance issues.

A Deep Multi-Task Learning Framework for Brain Tumor Segmentation by He Huang, Guang Yang, Wenbo Zhang, Xiaomei Xu, Weiji Yang, Weiwei Jiang and Xiaobo Lai

- Glioma, the most common CNS tumor, poses challenges in manual segmentation from MRI due to time constraints and the potential confusion with strokes. Deep learning offers automation, but class imbalances make brain tumor segmentation in MRI one of the most complex tasks.
- To address these challenges, a deep multi-task learning framework is proposed, integrating a multi-depth fusion module and a distance transform decoder based on V-Net. The model, evaluated on BraTS datasets, achieves high-quality segmentation with an average Dice score of 78%, showcasing its potential for accurate and automatic brain tumor segmentation.

Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks by Dinthisrang Daimary, Mayur Bhargab Bora, Khwairakpam Amitab, Debdatta Kandar

- Proposed hybrid models, including U-SegNet, Seg-UNet, and Res-SegNet, blend features from popular architectures like SegNet, U-Net, and ResNet18. The depth variations and skip connections in these models are designed for enhanced accuracy.
- Evaluation using the BraTS dataset reveals that the hybrid architectures consistently achieve higher accuracy in brain tumor segmentation, showcasing their potential in improving segmentation techniques.

III. PROPOSED APPROACH

Our suggested method for analyzing brain tumors makes use of the complementary strengths of magnetic resonance imaging (MRI) and computed tomography (CT) in a Flask web application. Users can easily submit their MRI and CT images, which starts a registration process based on landmarks for accurate alignment across various patient datasets. We use transfer learning with a Convolutional Neural Network (CNN) to increase accuracy by utilizing prior information. Our methodology culminates in the novel Image Fusion procedure that combines CT and MRI data to improve the accuracy of tumor segmentation. By streamlining and improving brain tumor analysis, this all-encompassing strategy aims to enhance patient-centered healthcare procedures and diagnostic results.

Following are the steps involved in our proposed methodology (an illustrative upload-and-registration sketch follows this list):

- Landmark-Based Registration operates by identifying and precisely matching distinct points or 'landmarks' on both types of scans, effectively overlaying them onto a shared anatomical reference.
- Integration of the VGG-19 CNN, which is equipped with pre-trained parameters, into our methodology ensures an additional layer of accuracy in the registration process.
- DWT effectively splits image details into varying frequency bands, leading to an optimized fusion process in the next stage.
- Wavelet Decomposition breaks down each image into a series of coefficients that capture both the broad, overarching features and the fine-grained, detailed aspects.
- The VGG-19 CNN acts on the decomposed coefficients, individually evaluating each set (LL, LH, LV, and LD) from the CT and MRI scans. By discerning and assimilating the best features from each modality, the VGG-19 network facilitates the creation of a richer, more comprehensive representation.
- Inverse Wavelet Transformation effectively re-layers the coefficients, weaving them back together into a coherent whole, resulting in a fused image.
- The segmentation stage is next, where the fused image is acted upon by the model by individually evaluating each set (LL, LH, LV and LD).
- The Watershed algorithm is used to mimic a water-filling simulation, eventually defining distinct catchment basins.
- The resultant segmented image presents a clear and segmented view of the brain, making the task of tumour identification and prognosis straightforward.
- The prediction module makes use of the dataset and the resultant image to predict the presence or absence of a possible brain tumour in the patient.
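As an illustration of the upload-and-register entry point described above, the sketch below shows a Flask route that accepts a CT and an MRI scan and aligns the CT onto the MRI from a few matched landmark points. It is a minimal sketch under our own assumptions (route name, form field names, hard-coded landmark coordinates, and an affine transform model); the paper does not publish its implementation.

```python
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def register_with_landmarks(moving, fixed, moving_pts, fixed_pts):
    """Warp the moving image onto the fixed image using matched landmarks.

    moving_pts / fixed_pts are corresponding (x, y) landmark coordinates;
    an affine transform estimated from them is a modelling assumption.
    """
    matrix, _ = cv2.estimateAffine2D(np.float32(moving_pts), np.float32(fixed_pts))
    h, w = fixed.shape[:2]
    return cv2.warpAffine(moving, matrix, (w, h))

@app.route("/upload", methods=["POST"])   # illustrative endpoint name
def upload():
    # Decode the two uploaded scans (the field names are assumptions).
    ct = cv2.imdecode(np.frombuffer(request.files["ct"].read(), np.uint8),
                      cv2.IMREAD_GRAYSCALE)
    mri = cv2.imdecode(np.frombuffer(request.files["mri"].read(), np.uint8),
                       cv2.IMREAD_GRAYSCALE)

    # Placeholder landmark coordinates; in practice these would come from the
    # landmark-selection step rather than being hard-coded.
    ct_pts = [(30, 40), (200, 45), (115, 210)]
    mri_pts = [(32, 38), (205, 44), (118, 215)]

    registered_ct = register_with_landmarks(ct, mri, ct_pts, mri_pts)
    cv2.imwrite("registered_ct.png", registered_ct)
    return jsonify({"status": "registered",
                    "shape": [int(s) for s in registered_ct.shape]})

if __name__ == "__main__":
    app.run(debug=True)
```

With the server running, such an endpoint could be exercised with, for example, `curl -F ct=@ct.png -F mri=@mri.png http://localhost:5000/upload`.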



 The Pipeline Diagram given below Depicts the same Process in a Sequential Manner:

Fig 1 Pipeline Diagram: Brain Tumor Prediction

 Architecture Diagram

Fig 2 Architecture Diagram: Brain Tumor Prediction
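The VGG-19 block that appears in the architecture above is reused with pre-trained weights rather than trained from scratch. Below is a minimal sketch of loading a frozen, pre-trained VGG-19 as a feature extractor; the framework (Keras), input size, grayscale-to-three-channel replication, and the classification head are our illustrative assumptions, since the paper does not specify these details.

```python
import tensorflow as tf

# Load VGG-19 with ImageNet weights as a frozen feature extractor.
# Input size and the small classification head are illustrative assumptions;
# the paper only states that pre-trained VGG-19 features are reused.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                                   # transfer learning: keep pre-trained filters

inputs = tf.keras.Input(shape=(224, 224, 1))             # grayscale slice, pixel values in 0-255
x = tf.keras.layers.Concatenate()([inputs, inputs, inputs])   # replicate to 3 channels
x = tf.keras.applications.vgg19.preprocess_input(x)      # VGG-19 preprocessing
features = base(x)                                       # deep feature maps
x = tf.keras.layers.GlobalAveragePooling2D()(features)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # e.g. tumor present / absent
model = tf.keras.Model(inputs, outputs)
model.summary()
```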



Motivation
This project is driven by the urgent need for precise tools in neuroimaging, specifically in the realm of brain tumor analysis. Existing methods often lack the accuracy required for effective diagnosis and treatment planning. The integration of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) provides a foundation, but challenges persist in aligning and interpreting multimodal data. Motivated by the critical importance of accurate tumor analysis, our project employs advanced techniques such as landmark-based registration and Convolutional Neural Networks (CNNs). The goal is to enhance the efficiency and precision of brain tumor detection, addressing a significant gap in current healthcare practices. Beyond immediate clinical applications, our project aspires to contribute to the broader landscape of healthcare by leveraging cutting-edge technologies. The integration of advanced imaging and deep learning holds the potential not only to improve diagnostics but also to usher in a new era of personalized and effective patient care. Ultimately, the motivation lies in making a substantial impact on brain tumor diagnosis and prognosis, with the overarching goal of enhancing the quality of life for individuals affected by these challenging medical conditions.

Convolutional Neural Network
The Convolutional Neural Network (CNN) is an especially effective deep learning technique for image recognition and processing tasks. It is made up of convolutional, pooling, and fully connected layers.

The essential part of a CNN is its convolutional layers, where elements like edges, textures, and shapes are extracted from the input image by applying filters. The convolutional layers' output is then processed through pooling layers, which down-sample the feature maps to reduce the spatial dimensions while keeping the most crucial information. One or more fully connected layers are then applied to the output of the pooling layers in order to classify the image or make a prediction.

Working of CNN
The working of the CNN algorithm can be broken down into the following layers; a minimal code sketch of this layer stack is given after the Dataset subsection below.

Convolution Layer
A CNN's convolution layer uses convolution operations to extract features from the input data. Filters are applied to capture local patterns, allowing the network to spontaneously learn hierarchical representations. This layer is essential for identifying elements like edges and textures, making it central to feature extraction from visual input.

ReLU Layer
The Rectified Linear Unit (ReLU) layer applies the ReLU activation function, which introduces non-linearity. It accelerates convergence during training and improves the network's ability to capture intricate patterns and gradients. By substituting zero for negative values, ReLU helps the network represent complex relationships in the data efficiently.

Pooling Layer
Pooling improves computational efficiency, increases the network's translational invariance, and focuses on the most salient features. It reduces spatial dimensions while retaining essential information. A common technique is max pooling, which chooses the maximum value from a group of neighboring pixels, effectively downsampling the data.

Fully Connected Layer
For the final classification or regression, high-level features from the earlier layers are integrated by the fully connected layer. Because every neuron in this layer is linked to every neuron in the layer before it, the network can learn complex associations across the whole input space. This layer completes the network's processing pipeline by converting the retrieved features into predictions or decisions.

Dataset
The dataset used in this research is a combination of fused images, segmentation images, and CT/MRI scans of 50 patients with both positive and negative conditions of brain tumor. The dataset provides useful data for training the prediction model.

Fig 3 Dataset Sample: Positive CT and MRI Scan

Fig 4 Dataset Sample: Negative CT and MRI Scan
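As a concrete companion to the layer walk-through above, the following minimal sketch stacks convolution, ReLU, pooling, and fully connected layers into a small binary classifier. It is illustrative only, not the exact network used in this work; the input size, filter counts, and training settings are assumptions.

```python
import tensorflow as tf

# Minimal illustrative CNN: Conv -> ReLU -> Pool -> FC, ending in a single
# sigmoid unit for tumor present / absent classification.
# Input size and filter counts are assumptions, not the paper's values.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),           # grayscale fused slice
    tf.keras.layers.Conv2D(32, 3, activation="relu"),      # convolution + ReLU
    tf.keras.layers.MaxPooling2D(pool_size=2),             # max pooling
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),          # fully connected layer
    tf.keras.layers.Dense(1, activation="sigmoid"),        # prediction head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```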



Training

Image Registration

Landmark-Based Registration
Achieving accurate alignment is a difficulty when combining several medical imaging modalities, such as CT and MRI images. To overcome this, we have selected Landmark-Based Registration, which finds and matches specific points on both scans, or "landmarks." By matching the images to a single anatomical reference and removing differences for a cohesive depiction, this approach guarantees correct overlay.

Transfer Learning with VGG-19 CNN
In image processing tasks, contemporary neural networks, particularly Convolutional Neural Networks (CNNs), have demonstrated notable proficiency. We guarantee an extra degree of accuracy in the registration process by incorporating the VGG-19 CNN, which has pre-trained parameters, into our technique. By utilizing information learned from large datasets, this integration enables the system to achieve precise image alignment.

YCbCr Color Format Conversion
Though widely used, traditional RGB formats may not be the best at capturing and maintaining finely detailed brightness subtleties. Our system converts the images to the YCbCr format in order to get around this. This conversion accounts for the subtleties of chrominance (Cb and Cr) and highlights the luminance component (Y). A lossless and high-quality depiction of the diagnostic scans depends on this phase.

Discrete Wavelet Transform Application
Our technique uses the Discrete Wavelet Transform (DWT) to extract information about the frequency and spatial distribution of the images, allowing us to examine their finer aspects. By efficiently dividing image features into distinct frequency bands, DWT prepares the groundwork for the following module's optimal fusion process.

Image Fusion

Wavelet Decomposition
A specific mathematical method for image processing called Wavelet Decomposition is used to handle the complex task of combining CT and MRI data. Using this technique, every image is broken down into detail coefficients that show finer details and approximation coefficients that capture broad aspects. It is similar to removing layers to reveal a more complex picture. These coefficients combine macro-structure and fine features to enable well-informed image fusion.

Fusion Leveraging VGG-19
We have improved upon existing fusion techniques by including the well-known VGG-19 CNN model. VGG-19 analyzes each set of decomposed coefficients (LL, LH, LV, LD) separately after processing them from the CT and MRI images. VGG-19 creates a richer representation by utilizing its depth and complexity to identify and include the best elements from both modalities. High diagnostic value is ensured by deep learning, which also improves fusion accuracy.

Fig 5 Network Architecture of VGG-19 model: Conv means convolution, FC means fully connected

Inverse Wavelet Transformation
The Inverse Wavelet Transformation brings the fusion process to a close. It reassembles features from the CT and MRI scans into a single, smoothly composed picture by carefully re-layering the coefficients. Both modalities merge harmoniously as a consequence of this procedure, which maintains data integrity.

Image Segmentation

Watershed Algorithm
The Watershed Algorithm views an image as a topographic landscape with pixel intensities representing heights, drawing inspiration from geographical notions. It creates borders, dividing images into discrete catchment basins, much like natural watersheds. These basins represent possible tumor locations in our research; therefore, this approach is essential for precise delineation and segmentation of points of interest.

Marker Selection
Our fusion approach combines the well-known VGG-19 Convolutional Neural Network (CNN) model with deep learning capabilities. Using the VGG-19 adds a whole new level of complexity, whereas conventional fusion methods may only use straightforward integration or simple averaging approaches. The decomposed coefficients are acted upon by the model, which evaluates each set (LL, LH, LV, and LD) from the CT and MRI images separately. The VGG-19 network makes it easier to create a richer, more comprehensive representation by identifying and incorporating the best elements from each modality. Because of its depth and complexity, it can comprehend and record subtleties that may be missed by more basic fusion techniques.
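To make the watershed and marker steps above more tangible, here is a minimal marker-based watershed sketch using OpenCV. It approximates the approach described in this section rather than reproducing the authors' implementation; the Otsu thresholding, morphology parameters, and file names are assumptions.

```python
import cv2
import numpy as np

# Illustrative marker-based watershed segmentation of a fused brain slice.
# Thresholds, kernel sizes and the input path are assumptions for the sketch.
fused = cv2.imread("fused_slice.png")                       # fused CT/MRI image (BGR)
gray = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)          # definite background
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)  # definite foreground
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)                    # uncertain region

# Markers: each connected foreground component seeds its own catchment basin.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0                                 # 0 = let watershed decide

markers = cv2.watershed(fused, markers)                     # basins grow until they meet
fused[markers == -1] = [0, 0, 255]                          # basin boundaries drawn in red
cv2.imwrite("segmented_slice.png", fused)
```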



Transformation and Growth Based on Marked Points
When the markers are placed, the Watershed Algorithm starts working to segment the picture. Regions surrounding each marker expand, thus establishing unique catchment basins, by simulating a water-filling scenario. The boundaries of these basins harden where they intersect as they get closer to one another due to growth. All of the image's pixels are accounted for and assigned to distinct regions thanks to this painstaking processing procedure. With each distinct zone representing a distinct anatomical or pathological component, the resulting segmented picture resembles a tapestry.

IV. RESULTS

The process showcased in our research starts with landmark-based registration that ensures the pictures are aligned in a common coordinate system before proceeding with the image fusion process, which is necessary for MRI and CT scans of the same patient. Therefore, the identical brain anatomical regions are depicted by the equivalent pixels in both images. Picture alignment is important for image fusion because it allows complementary information from multiple modalities, such as contrast, texture, and structure, to be fused together. Image alignment also reduces errors and artefacts including distortion, ghosting, and blurring that can arise from image misalignment. For the merging of brain images, landmark-based registration can achieve great accuracy and durability.

Fig 6 Landmark-based Registration

In the next stage, we divide the registered pictures into four separate sub-bands using the Discrete Wavelet Transform (DWT): LL (low-low frequencies), LH (low-high frequencies), HL (high-low frequencies), and HH (high-high frequencies). Then, the matching sub-bands from both photos are fed into the VGG-19 neural network by methodically pairing them. A fused sub-band is produced as the last convolutional layer's output. The merged image is then rebuilt using the inverse Discrete Wavelet Transform. The information from the four sub-bands is combined during this reconstruction procedure to create a logical whole. The successful creation of the fused image from the registered images is graphically shown in Figure 7, which highlights the usefulness of the image fusion method.

Fig 7 Fused Image

We then use the Watershed Algorithm, a region-based method based on image morphology, in the segmentation step. The Watershed Algorithm is well known for its capacity to separate touching or overlapping objects in an image, which is an extremely useful feature when it comes to brain tumor segmentation. This algorithm is applied to the fused image, and the result is the segmented image, as shown graphically in Figure 8.

Fig 8 Segmented Image

The last figure, which graphically depicts the crucial stage of brain tumor detection, is the conclusion of our research endeavour. Our classification algorithm, which is based on the Regional Convolutional Neural Network (RCNN), takes on the important duty of identifying if brain tumors are present in the divided areas in this visually engaging picture. The graphic illustrates how deep learning, image segmentation, and diagnosis accuracy interact, and it is a testimonial to the efficacy of our suggested methodology. The result presented highlights the capacity of our technology to deliver accurate and rapid evaluations, providing priceless insights into the healthcare industry.

Fig 9 Prediction
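As a concrete reference for the DWT-based fusion flow described in this Results section, the sketch below decomposes two registered slices into LL, LH, HL, and HH sub-bands, fuses the sub-bands, and reconstructs the result with the inverse DWT. The per-sub-band maximum-absolute-coefficient rule is a deliberately simple stand-in for the VGG-19-driven fusion used in the paper, and the 'haar' wavelet and file names are assumptions.

```python
import numpy as np
import pywt
import cv2

# Illustrative DWT fusion of registered CT and MRI slices.
# The max-abs rule below is a simple placeholder for the paper's VGG-19 fusion;
# file paths and the 'haar' wavelet are assumptions for this sketch.
ct = cv2.imread("ct_registered.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mri = cv2.imread("mri_registered.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Decompose each image into LL, LH, HL, HH sub-bands.
ct_LL, (ct_LH, ct_HL, ct_HH) = pywt.dwt2(ct, "haar")
mri_LL, (mri_LH, mri_HL, mri_HH) = pywt.dwt2(mri, "haar")

def fuse(a, b):
    # Keep, coefficient by coefficient, whichever value has the larger magnitude.
    return np.where(np.abs(a) >= np.abs(b), a, b)

fused_LL = 0.5 * (ct_LL + mri_LL)                  # average the approximation band
fused_detail = (fuse(ct_LH, mri_LH), fuse(ct_HL, mri_HL), fuse(ct_HH, mri_HH))

# Inverse DWT reassembles the four fused sub-bands into one image.
fused = pywt.idwt2((fused_LL, fused_detail), "haar")
cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))
```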



V. CONCLUSION

By utilizing state-of-the-art algorithms to combine the powers of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), our novel project represents a major advancement in the field of healthcare. This project offers a strong methodology intended to improve brain tumor diagnosis and prognosis. Our method's main component is a Flask-based web application that gives users a smooth way to combine and harmonize MRI and CT scans. This integration improves the overall effectiveness of diagnostic processes by enabling a more thorough and extensive study of possible brain tumor locations. The VGG-19 Convolutional Neural Network, a potent instrument that uses deep learning for nuanced analysis, is the foundation of our system. When combined with the Watershed transformation, our approach highlights the accuracy of image analysis and guarantees that regions that may have tumors are detected with a high degree of precision. Our method represents the possibility of fusing cutting-edge technology with medical procedures, highlighting a dedication to improved patient care results, and goes beyond being a simple diagnostic tool. Our research is a testament to the revolutionary potential that emerges when innovation converges with the necessity of increasing patient well-being, as we negotiate the junction of technology and healthcare.