
Brain Tumor Classification using

Convolutional Neural Networks

A Project Report Submitted in
Partial Fulfillment of the Requirements
for the Award of the Degree of

BACHELOR OF TECHNOLOGY

IN

ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by

Seelam Anudeesh Reddy 19881A04G8
Nukala Sai Sandeep 19881A04F7
Pyapili Ramkumar 19881A04G3

SUPERVISOR
Dr. D. Krishna
Associate Professor, Dept. of ECE

Department of Electronics and Communication Engineering

March, 2023
Department of Electronics and Communication Engineering

CERTIFICATE

This is to certify that the project titled Brain Tumor Classification using
Convolutional Neural Networks is carried out by

Seelam Anudeesh Reddy 19881A04G8
Nukala Sai Sandeep 19881A04F7
Pyapili Ramkumar 19881A04G3

in partial fulfillment of the requirements for the award of the degree of


Bachelor of Technology in Electronics and Communication Engineering
during the year 2022-23.

Signature of the Supervisor Signature of the HOD


Dr. D. Krishna Dr. G.A.E. Satish Kumar
Associate Professor, Dept. of ECE Professor and Head, ECE

Project Viva-Voce held on

Examiner
Acknowledgement

The satisfaction that accompanies the successful completion of this task
would be incomplete without mentioning the people who made it possible,
whose constant guidance and encouragement crowned all our efforts with
success.

We wish to express our deep sense of gratitude to Dr. D. Krishna, Associate
Professor and Project Supervisor, Department of Electronics and
Communication Engineering, Vardhaman College of Engineering, for his able
guidance and useful suggestions, which helped us complete the project on
time.

We are particularly thankful to Dr. G.A.E. Satish Kumar, Head of the
Department of Electronics and Communication Engineering, for his guidance,
intense support and encouragement, which helped us mould our project into
a successful one.

We show gratitude to our honorable Principal, Dr. J.V.R. Ravindra, for
providing all facilities and support.

We avail this opportunity to express our deep sense of gratitude and
heartfelt thanks to Dr. Teegala Vijender Reddy, Chairman, and Sri Teegala
Upender Reddy, Secretary of VCE, for providing a congenial atmosphere to
complete this project successfully.

We also thank all the staff members of the Electronics and Communication
Engineering department for their valuable support and generous advice.
Finally, we thank all our friends and family members for their continuous
support and enthusiastic help.

Seelam Anudeesh Reddy
Nukala Sai Sandeep
Pyapili Ramkumar
Abstract

In order to treat brain tumours effectively, a novel method of diagnosis is
proposed in this study. The technique classifies the type of tumour from
brain MRI data using a Convolutional Neural Network (CNN). The CNN design
includes several convolutional and pooling layers along with fully
connected layers. An extensive dataset of MRI images annotated by tumour
type was used to build and train the model. The effectiveness of the model
was evaluated using a variety of metrics, including precision, accuracy,
recall, and F1 score. The results demonstrated that the CNN performed more
effectively than traditional machine learning techniques and correctly
diagnosed brain cancers. This technique may therefore be helpful in the
early detection and treatment of tumours.

Keywords: Pooling; Diagnosis; Precision
Table of Contents

Title Page No.


Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
CHAPTER 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Brain Tumor detection . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 MRI Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Brain Anatomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Brain Tumors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
CHAPTER 2 Literature Survey . . . . . . . . . . . . . . . . . . . . . 6
2.1 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
CHAPTER 3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 Introduction to Python . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.1 NumPy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.2 OpenCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.3 Pillow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1.4 Scikit-image . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Introduction to GUI . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3 Convolution Neural Networks . . . . . . . . . . . . . . . . . . . . . 27
3.4 Proposed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4.1 Design of a Neural Network . . . . . . . . . . . . . . . . . 30
3.4.2 Stroke Generating . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.3 Algorithm for CNN based classification . . . . . . . . . . . 32
3.4.4 Model Building . . . . . . . . . . . . . . . . . . . . . . . . . 32
CHAPTER 4 Architecture . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1 Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2 Module Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.3 Image Preprocessing and Image Enhancement . . . . . . . . . . . 36
4.3.1 Image Pre-Processing . . . . . . . . . . . . . . . . . . . . . 36
4.3.2 Image Acquisition from dataset . . . . . . . . . . . . . . . 36
4.3.3 Convert the image from one color space to another . . . . 37
4.3.4 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Image Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 Sobel Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.5 Image Segmentation using Binary Threshold . . . . . . . . . . . . 40
4.5.1 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.5.2 Morphological Operations . . . . . . . . . . . . . . . . . . . 43
4.6 Tumor classification using CNN . . . . . . . . . . . . . . . . . . . 44
4.6.1 Sequential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.6.2 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
CHAPTER 5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1 Generated Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1.1 Plotting Losses . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.2 Raw Results . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.1.3 Output Visualization . . . . . . . . . . . . . . . . . . . . . . 49
CHAPTER 6 Conclusions and Future Scope . . . . . . . . . . . . . 51
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
List of Figures

1.1 Internal structures of Brain . . . . . . . . . . . . . . . . . . . . . . 2


1.2 Basic representation of a Tumor . . . . . . . . . . . . . . . . . . . 3

3.1 Methods involved in Numpy library . . . . . . . . . . . . . . . . . 21


3.2 Applications of Open CV . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Features involved in Scikit-image library . . . . . . . . . . . . . . 25
3.4 Sample GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.5 Convolutional Neural Network architecture . . . . . . . . . . . . . 28
3.6 Example of Vehicle Identification . . . . . . . . . . . . . . . . . . 29
3.7 CNN for Medical purpose . . . . . . . . . . . . . . . . . . . . . . 29
3.8 Agriculture usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.9 Design of a Brain tumor classification model . . . . . . . . . . . . 31
3.10 Caption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.11 Model Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.1 Module Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5.1 Accuracy of brain tumor classification . . . . . . . . . . . . . . . . 46


5.2 Epoch vs Loss Plot . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.3 Image with Tumor . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.4 Image with no Tumor . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5 Malignant Tumor Identified . . . . . . . . . . . . . . . . . . . . . . 50
5.6 No Tumor Identified . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Abbreviations

Abbreviation Description

CNN Convolutional Neural Network

MRI Magnetic Resonance Imaging

SRC System Resource Controller

NLP Natural Language Processing

GUI Graphical User Interface

ROC Receiver Operating Characteristic

FVF Fluid Vector Flow


CHAPTER 1

Introduction

1.1 Brain Tumor detection


Brain tumor detection refers to the identification of abnormal growths or
masses in the brain and their characteristics. These tumors can be benign
or malignant and can occur in different parts of the brain, such as the
cerebral hemispheres, cerebellum, brain stem, or spinal cord.

To detect brain tumors, medical professionals typically use a combination
of imaging techniques such as MRI or CT scans, together with a clinical
assessment of the patient by a neurologist or neurosurgeon. The imaging
scans can help identify important information such as the size, location,
and type of tumor, as well as any associated changes in brain tissue or
blood vessels. These assessments are crucial for accurate diagnosis and
treatment planning.

Besides imaging techniques, a biopsy may be required to accurately diagnose
the type of brain tumor and decide on the most effective treatment options.
The treatment plan may include a combination of surgery, radiation therapy,
chemotherapy, or other treatments based on the tumor's distinctive
characteristics and the patient's overall health and prognosis. Early
detection and timely treatment are essential for improving outcomes and
increasing the chances of successful treatment for individuals with brain
tumors.

1.2 MRI Scan


Magnetic Resonance Imaging (MRI) uses a powerful magnetic field, radio
waves, and a computer to create accurate pictures of the body's internal
organs without any physical contact. It is often employed for diagnosis and
can be used on a variety of body parts, including the brain, spine, joints,
and other organs, to deliver extremely accurate and exact diagnostic
information.

During an MRI scan, the patient is placed on a movable bed that glides into
a large, capsule-shaped machine. The apparatus aligns the body's hydrogen
atoms using a strong magnetic field. When radio waves are focused on the
body, these aligned atoms are excited and release signals, which the MRI
machine's receiving coils pick up. A computer then analyses the signals to
produce incredibly detailed images of the internal body structures.

MRI is a highly sensitive imaging technology that can provide detailed
information about soft tissues such as the brain, muscles, and organs. It
is particularly useful for diagnosing and monitoring conditions such as
brain tumors, stroke, multiple sclerosis, joint injuries, and spinal cord
disorders. Unlike other imaging modalities such as X-rays or CT scans, MRI
does not expose patients to ionizing radiation, which makes it a safer
option for repeated imaging studies over time.

1.3 Brain Anatomy

Figure 1.1: Internal structures of Brain

Our brain is a sophisticated organ that manages a variety of bodily
processes. The brainstem, cerebellum, and cerebrum make up its three
primary sections. The cerebrum, the largest region of the brain, is split
into two hemispheres. The cerebrum's outermost structure, the cerebral
cortex, is in charge of conscious cognition and volitional behaviour. The
cerebellum, found near the bottom of the brain, controls balance and
movement. Basic bodily processes including breathing, heart rhythm, and
digestion are controlled by the brainstem. Neurons are highly specialised
cells that make up the brain and are in constant communication with one
another via electrical and chemical impulses. The brain contains four
fluid-filled chambers called ventricles, and its tissue is divided into
grey matter and white matter. The blood-brain barrier is a specialised
network of blood vessels and cells that protects the brain from harmful
substances in the blood.

1.4 Brain Tumors

Figure 1.2: Basic representation of a Tumor

The brain can develop abnormal growths called tumours. Primary and
metastatic brain tumours are the two major subtypes. Primary brain tumours
begin within the brain, whereas metastatic cancers begin elsewhere in the
body and spread to the brain.
The symptoms of brain tumours can vary based on the size and location of
the tumour, but they might include headaches, seizures, trouble hearing,
seeing, or speaking, as well as balance and coordination issues.



Surgery, radiation therapy, chemotherapy, or a combination of these
treatments may be used to treat brain tumours. The type and stage of the
tumour, along with the patient's general condition, influence the choice of
therapy.

Brain tumors can be a serious medical condition, and prompt diagnosis and
treatment are important for the best possible outcome.

1.5 Objective
The objective of a brain tumor classification project is to develop a model
that can accurately classify brain tumors into different categories based on
their characteristics. The aim is to provide a reliable and automated method
for medical professionals to diagnose brain tumors, which can aid in treatment
planning and prognosis determination.

The project may involve analyzing medical images, such as MRI or CT scans,
to identify patterns and features that are indicative of different types of
brain tumors. Machine learning models can be trained on these images and
associated clinical data to create a predictive model that can classify new
brain tumor images accurately.

The ultimate goal of the project is to improve the accuracy and speed of
brain tumor diagnosis, which can lead to better patient outcomes and more
effective treatment planning.
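As an aside, the kind of operations such a CNN applies to an image can be illustrated with a NumPy-only toy forward pass. The array sizes and filter values below are hypothetical, and this sketch is not the project's actual network, which is described in the later chapters:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (implemented as cross-correlation) of one channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling that shrinks each spatial dimension by `size`."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size        # crop to a multiple of the pool size
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))

# Hypothetical 8x8 "MRI slice" and a random 3x3 filter
rng = np.random.default_rng(0)
slice_ = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))

features = max_pool(relu(conv2d(slice_, kernel)))
print(features.shape)  # -> (3, 3)
```

A real CNN stacks many such filter and pooling stages and ends with fully connected layers that map the pooled features to class scores.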

1.6 Problem Statement


The problem statement for a brain tumor classification project is that
accurate diagnosis and classification of brain tumors is critical for
effective treatment planning and for improving patient outcomes. However,
manual classification by medical professionals can be time-consuming,
subjective, and prone to errors. Additionally, the increasing incidence of
brain tumors highlights the need for a reliable and automated
classification system to aid medical professionals in diagnosis and
treatment planning.

1.7 Motivation
Brain tumors can be classified as malignant or benign, and their causes
and symptoms can vary. Early detection and treatment are crucial for a
better prognosis. Brain tumors can develop when cells grow abnormally and
form a solid mass in the brain. There are two types of brain tumors:
primary and metastatic. Symptoms may differ depending on the size,
location, and type of tumor and can include headaches, nausea, vomiting,
and difficulty walking. CT and MRI scans are used to detect brain tumors,
with MRI being preferred because it is non-invasive, non-ionizing, and
produces high-definition images. MRI has different image sequences, such as
FLAIR, T1-weighted, and T2-weighted images. Brain tumour analysis and
identification can be aided by image processing techniques such as
pre-processing, segmentation, image enhancement, feature extraction, and
classification.
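As a minimal illustration of the thresholding step in the pipeline above, the sketch below builds a binary mask from a toy image. The intensity values and threshold are invented for the example; it is not the report's actual pipeline:

```python
import numpy as np

def binary_threshold(image, thresh=0.5):
    """Return a binary mask: 1 where the pixel intensity exceeds thresh, else 0."""
    return (image > thresh).astype(np.uint8)

# Hypothetical 4x4 grayscale slice with intensities scaled to [0, 1]
img = np.array([[0.1, 0.2, 0.8, 0.9],
                [0.1, 0.7, 0.9, 0.8],
                [0.0, 0.1, 0.2, 0.1],
                [0.0, 0.0, 0.1, 0.0]])

mask = binary_threshold(img, thresh=0.5)
print(int(mask.sum()))  # -> 5 pixels in the bright candidate region
```

In practice the threshold is usually chosen from the image histogram (for example with Otsu's method), and morphological operations then clean up the resulting mask.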



CHAPTER 2

Literature Survey

2.1 Literature Survey


As the outcomes of medical diagnosis are significant for the care of
patients, it is essential to develop prediction models that are both
reliable and accurate. Some of the most well-known classification and
clustering techniques are employed in medical diagnostics to make
predictions. Medical images are clustered to make them easier to interpret
and evaluate. A number of clustering and classification methods are
intended to improve the accuracy of predictions made during the diagnostic
process, especially in the identification of anomalies.

Our literature survey involved reviewing 25 papers that presented different
methods for clustering. Each paper proposed a unique approach to
segmentation based on specific parameters. Here are brief summaries of each
paper we examined.

• [1] In their study, K. M. Iftekharuddin, W. Jia, and R. March suggest a stochastic model based on fractal concepts to describe the texture of tumours in brain MRI images. The authors show that their approach works well for extracting patient-independent aspects of brain tumour texture and for MRI tumour segmentation. Due to the intricacy of a brain tumor's appearance in an MRI, the researchers use a multiresolution-fractal method named multifractional Brownian motion (mBm) to model the tumour texture. To assess the efficacy of their strategy, the authors create a multifractal-feature-based brain tumour segmentation system and contrast it with Gabor-like multiscale texton features. The work presents a unique approach to extract spatially varying multifractal characteristics from brain tumour pictures, along with a comprehensive mathematical explanation of the mBm model.

• C. H. Lee, M. Schmidt, A. Murtha, A. Bistritz, J. Sander, and R. Greiner [2] trained several classification models on ADC histogram-based features to differentiate between responders and non-responders to treatment. They found that the combination of standard histogram-based features and GMM features achieved the highest classification accuracy (86.5%), while GMM features alone achieved an accuracy of 85.5%. They concluded that using ADC histogram-based features can be an effective method to assess treatment response in brain tumor patients.

• J. J. Corso et al. [3] proposed a technique for automatic brain tumor and edema segmentation using texture features such as PTPSA, mBm, and Gabor-like textons, as well as regular intensity and intensity-difference features. They used a Random Forest classifier to classify these features in multi-modal MRIs. The authors evaluated their technique using the BRATS 2012 dataset and found that it outperformed other state-of-the-art methods in both training and challenge cases. The evaluation was conducted quantitatively using an online tool from the Kitware/MIDAS website.

• D. Cobzas, N. Birkbeck, M. Schmidt, M. Jagersand, and A. Murtha [4] suggest a variational segmentation algorithm for brain tumors that improves on existing texture-based approaches by using a high-dimensional feature set derived from MRI data and registered atlases. The authors use manual segmentation data to develop a statistical model for normal and tumor tissue and show that using a conditional model to differentiate between normal and abnormal regions improves the segmentation outcomes over traditional generative models. The authors evaluate the algorithm's effectiveness by applying it to multiple MRI scans of patients with brain cancer.

• M. Wels, G. Carneiro, A. Aplas, M. Huber, J. Hornegger, and D. Comaniciu propose the segmentation of paediatric brain tumours in multi-spectral 3-D magnetic resonance images in [5]. They introduce a Markov random field (MRF) model-based top-down segmentation method that integrates probabilistic boosting trees (PBT) with graph cuts for segmenting data at lower levels. The PBT method provides a discriminative observation model to categorise tumour appearance, while a spatial prior takes into consideration pairwise homogeneity of classification labels and multi-spectral voxel intensities. The difficult problem of identifying and delineating juvenile brain tumours, which exhibit significant irregularity in both the pathology and adjacent non-pathologic brain tissue, is tackled with this technique.

• The authors of this paper [6] investigate the impact of noise on the fractal dimension of digital images. They add three types of noise (Gaussian, salt and pepper, and speckle) to the images and estimate the fractal dimension of both the noisy and non-noisy images. Their results show that noise can affect the fractal dimension, causing an increase in its value. The authors also report the corresponding error in terms of RMSE and estimate the average percentage error in fractal dimension as an offset for determining the true fractal dimension from noisy images.
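Both this paper and Pentland's work surveyed later rely on box-counting estimates of the fractal dimension. A minimal NumPy sketch of the estimator, assuming a square binary image, may help; it is an illustration, not the authors' implementation:

```python
import numpy as np

def fractal_dimension(mask, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a square binary image.

    For each box size s, count the s x s boxes containing any foreground
    pixel, then fit log(count) against log(1/s); the slope is the dimension.
    """
    n = mask.shape[0]
    counts = []
    for s in box_sizes:
        cropped = mask[:n - n % s, :n - n % s]       # crop to a multiple of s
        boxed = cropped.reshape(cropped.shape[0] // s, s, -1, s)
        counts.append(boxed.any(axis=(1, 3)).sum())  # non-empty boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# A completely filled image behaves as a 2-D object
filled = np.ones((32, 32), dtype=bool)
print(round(float(fractal_dimension(filled)), 2))  # -> 2.0
```

In the setting of [6], the same estimator would be run on noisy and noise-free versions of an image and the two estimates compared.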

• A. Islam and colleagues in their study suggested an automated segmentation approach [7] for detecting posterior fossa (PF) tumours in child brain MRI data. The authors employed a multi-fractal method called multifractional Brownian motion (mBm) to model the various tumour appearances seen in MRI. In addition to proposing a technique that uses wavelet coefficients to assess the multi-fractal pattern of tissue roughness in brain MRI, they created a mathematical foundation for mBm in two dimensions. The authors used a wavelet-based multi-fractal feature, MR image intensity, and a regular fractal feature derived using the piecewise-triangular-prism-surface-area (PTPSA) approach to separate PF tumour and non-tumour areas in T1, T2, and FLAIR MR images.

• T. Wang, I. Cheng, and A. Basu [8] introduce the fluid vector flow (FVF) active contour model, which addresses issues with insufficient capture range and poor convergence when dealing with concave shapes. FVF is designed to extract concave shapes and captures a large range, outperforming other techniques such as gradient vector flow, boundary vector flow, and magnetostatic active contour. Synthetic images, pediatric head MRI images, and brain tumor MRI images from the Internet brain segmentation repository are used in three sets of experiments to demonstrate the effectiveness of FVF. The results suggest that FVF is a promising new approach to active contour modeling that can handle a wider range of shapes and achieve better results.

• The ATM SVC method was created by S. Warfield, M. Kaus, F. Jolesz, and R. Kikinis [9] to automatically segment normal and aberrant anatomy in medical images. In this algorithm, an anatomical template moderates the spatially varying statistical classification. An adaptable, template-moderated, spatially varying statistical classification is produced by sequentially combining non-stationary classification and nonlinear registration techniques. The approach was evaluated on many segmentation problems involving various image contrast mechanisms and body locations.

• A technique for automatically segmenting brain tumors was created by M. R. Kaus, S. K. Warfield, A. Nabavi, P. M. Black, F. A. Jolesz, and R. Kikinis [10] and compared to the manual method using three-dimensional magnetic resonance images from 20 patients with low-grade gliomas and meningiomas. The automated method took 5-10 minutes to identify brain and tumor tissue with accuracy and reproducibility similar to the manual method, which took 3-5 hours. This shows that automated segmentation is a viable option for low-grade gliomas and meningiomas.

• The paper [11] outlines a new approach for segmenting brain tumors in MRI scans that differs from traditional methods. Instead of training on pathology, this approach uses a fitness map to identify deviations from normalcy in healthy tissue, which can then be used by conventional image segmentation techniques to define tumor boundaries. The proposed framework incorporates context at multiple levels and utilizes bi-directional information flow between these levels through multi-level Markov random fields or iterated Bayesian classification. The approach has been tested on synthetic and MRI data and has shown promising results. The paper also highlights the importance of understanding context in recognizing deviations from normalcy, demonstrated through the use of diagonalized nearest neighbor pattern recognition.

• The paper describes a framework that aims to predict anatomical deformations for surgical planning and tumor growth. The framework includes two methods: a shape-based approach that utilizes statistical analysis to model deformability, and a biomechanical approach that incorporates forces to predict deformation. Both methods use the principal modes of co-variation between anatomy and deformation to make predictions about how anatomy will change. The framework was tested using simulated images, and the results showed that it accurately predicts systematic deformations caused by changes in position or tumor growth.

• The publication presents a generative probabilistic model for segmenting tumors in multi-dimensional medical images, proposed by Menze, Leemput, Lashkari, Weber, Ayache, and Golland [12]. This model addresses differences in tumor appearance across modalities by allowing for different tumor boundaries in each channel. The authors augment a probabilistic atlas of healthy tissue priors with a latent atlas of the lesion to extract tumor boundaries and the latent atlas from the image data. The paper reports the results of experiments conducted on 25 glioma patient data sets, which demonstrate that the proposed model performs significantly better than traditional multivariate tumor segmentation methods.

• The paper discusses a study by T. Leung and J. Malik on recognizing different surface materials based on their textural appearance. The authors [13] propose a model that addresses the spatial variation of two surface attributes, reflectance and surface normal, to represent natural textures. The model involves constructing a vocabulary of tiny surface patches, called 3D textons, along with associated local geometric and photometric properties. Examples of textons include ridges, grooves, spots, stripes, or combinations thereof. Each texton is associated with an appearance vector that characterizes the local irradiance distribution under different lighting and viewing conditions, represented as a set of linear Gaussian derivative filter outputs. The proposed model aims to provide a unified framework that captures the rich variety of textures observed in natural scenes.

• Bauer and colleagues [14] have developed an automated brain tumor segmentation method that utilizes both random forest classification and hierarchical conditional random field regularization. Their method uses an energy minimization approach and was tested on the BRATS2012 dataset containing both low- and high-grade gliomas from both real-patient and simulated images. The testing showed that the method achieved an average Dice coefficient of 0.73 for tumor and 0.59 for edema. Additionally, the method was found to have a fast computation time.

• The paper by E. Geremia, B. H. Menze [15], and N. Ayache describes a method for glioma segmentation in multi-channel MR images using a spatial decision forest. The proposed method incorporates spatial information to improve segmentation accuracy and is evaluated on the BRATS 2012 dataset. The paper provides a detailed explanation of the method and an evaluation of its performance, which showed promising results. The paper was published in the proceedings of the 2012 MICCAI-BRATS conference.

• In this paper, A. Hamamci and G. Unal [16] propose a method for multimodal brain tumor segmentation on the BraTS dataset. The proposed method is based on the Tumor-Cut algorithm, a graph-cut-based segmentation method that utilizes information from multiple imaging modalities. The method uses a combination of features such as intensity, texture, and spatial information from the MRI modalities (T1, T1c, T2, FLAIR) to generate an initial tumor segmentation. A graph-cut algorithm is then applied to refine the initial segmentation by incorporating spatial constraints to ensure that the segmented regions are contiguous and have a smooth boundary. The method is evaluated on the BraTS dataset, and the results show that it achieves competitive performance compared to other state-of-the-art methods.

• The authors of the paper [17] "Multi-Modal Brain Tumor Segmentation Using Latent Atlases" propose a novel approach for segmenting brain tumors in multi-modal MRI scans. Their approach utilizes a set of atlases, one for each imaging modality, and a latent atlas that captures tumor-specific information. This model is trained on a training set of multi-modal brain scans and is used to segment new scans. Their approach achieves the best results compared to other state-of-the-art segmentation methods on the BraTS 2012 dataset, demonstrating the effectiveness of using latent atlases for multi-modal brain tumor segmentation.

• The article discusses an automated system for detecting brain tumors in MRI scans by utilizing mathematical morphology, clustering, and statistical validation [18]. The process starts with pre-processing to improve the image quality, followed by applying morphological operators to identify the tumor region. Clustering is employed to distinguish the tumor from the normal brain tissue, and statistical validation determines the accuracy of the detection. The proposed approach is tested on a dataset comprising 20 MRI scans with simulated and real tumor images, and the results reveal high accuracy in detecting tumors with good sensitivity and specificity.

• In their paper, Ahmed, Iftekharuddin, and Vossough present an approach [19] for segmenting posterior-fossa brain tumors in MRI images using a combination of intensity, shape, and texture features. They preprocess the images through histogram equalization and noise removal. They extract features from various regions of interest, including texture features using gray-level co-occurrence matrices, shape features using contour analysis, and intensity features using statistical measures such as the mean and standard deviation. They then combine the features using a weighted fusion approach and train the method with a support vector machine classifier. They test their approach on a dataset of 20 patients with posterior-fossa tumors and find that their feature fusion method outperforms methods that use individual features.

• The paper by Y. Freund and R. E. Schapire [20] introduced a method


called ”boosting” for on-line learning, which combines weak learners to
form a strong learner. The algorithm works by iteratively applying a
base learning algorithm to re-weighted training examples. It increases the
weights of examples that were misclassified by the current ensemble of
weak learners and decreases the weights of correctly classified examples.
This way, the algorithm focuses on difficult-to-classify examples and al-
lows the ensemble to learn a more accurate decision boundary. Boosting
has been widely used in machine learning and has shown impressive
performance on a wide range of tasks.

The authors provide theoretical guarantees on the performance of the boosting algorithm, showing that it can achieve arbitrarily small error
rates on any training set if the number of iterations is sufficiently large.
They also demonstrate empirically that the algorithm is robust to noise
and outliers in the data. The effectiveness of the boosting algorithm is
demonstrated on several benchmark classification tasks, such as hand-
written digit recognition and face detection, where it outperforms other
machine learning algorithms.
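The re-weighting scheme described above can be sketched in a few lines of Python. This is a minimal illustration using one-dimensional threshold stumps as the weak learners, not the authors' original implementation; the toy data and function names are our own.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=20):
    """Minimal AdaBoost with threshold stumps on 1-D data; y in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)          # uniform example weights
    learners = []                     # (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # choose the stump with the lowest weighted error
        for thr in X:
            for pol in (+1, -1):
                pred = pol * np.where(X > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak learner
        # increase weights of misclassified examples, decrease the rest
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append((thr, pol, alpha))

    def predict(Xq):
        score = sum(a * p * np.where(Xq > t, 1, -1) for t, p, a in learners)
        return np.sign(score)
    return predict

# toy usage: separable 1-D data
X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([-1, -1, -1, 1, 1, 1])
predict = adaboost_stumps(X, y)
print((predict(X) == y).mean())  # → 1.0
```

The key line is the weight update `w *= np.exp(-alpha * y * pred)`: misclassified examples (where `y * pred` is negative) have their weights multiplied by a factor greater than one, so subsequent rounds focus on them.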

• Pentland's method involves estimating the fractal dimension of an image [21], which is a measure of the complexity of its geometric structure. The fractal dimension is estimated using a box-counting algorithm,
which involves partitioning the image into smaller and smaller squares



and counting the number of squares that contain image information.
The author demonstrates the effectiveness of the method by applying it
to several natural scenes, such as forests and mountains, and showing
that the estimated fractal dimension corresponds well with the perceived
complexity of the scene. Pentland argues that fractal analysis can be
used to develop more effective image retrieval and recognition systems,
as well as provide insights into the structure of natural scenes.

To elaborate further, Pentland's method for describing natural scenes using fractal geometry involves partitioning an image into blocks, applying
a fractal compression algorithm to each block, and extracting a set of
statistics that describes the texture and structure of the image. The
fractal compression algorithm works by iteratively transforming a small
patch of an image to resemble a larger patch, using affine transforma-
tions and a set of contraction mappings. The resulting fractal code is a
compact representation of the block, which can be used to reconstruct
the block with a certain level of accuracy.

The extracted statistics, such as the fractal dimension and lacunarity, capture the self-similarity and scale-invariance of the image, which are
important properties of natural scenes. These statistics can then be used
as input to a classifier for various computer vision tasks such as object
recognition and scene classification. The method has been shown to be
effective in describing the texture and structure of natural scenes, and
has been applied in various applications such as texture analysis and
medical image analysis.

The paper presents experimental results demonstrating the effectiveness of the fractal-based approach for a variety of tasks, including texture
classification and object recognition. The author also discusses the
potential applications of the method in areas such as computer vision,
remote sensing, and image analysis.
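The box-counting estimate described above can be sketched as follows (a minimal NumPy illustration on a synthetic binary image; the function name and box sizes are our own choices, not Pentland's code):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image.

    For each box size s, count how many s-by-s boxes contain any
    foreground pixel, then fit log(count) against log(1/s); the slope
    of the fit is the box-counting dimension.
    """
    counts = []
    n = img.shape[0]
    for s in sizes:
        # partition into s x s boxes; a box "contains image information"
        # when the sum of its pixels is non-zero
        boxed = img[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        occupied = boxed.sum(axis=(1, 3)) > 0
        counts.append(occupied.sum())
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]  # slope = estimated fractal dimension

# sanity check: a filled square is a plain 2-D object, so the
# estimate should be very close to 2
square = np.ones((64, 64))
print(round(box_counting_dimension(square), 2))  # → 2.0
```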



• The paper by J. M. Zook and K. M. Iftekharuddin [22] presents a
statistical analysis of various fractal-based algorithms for brain tumor
detection using magnetic resonance imaging (MRI). The paper compares
the performance of different algorithms based on metrics such as sensi-
tivity, specificity, accuracy, and receiver operating characteristic (ROC)
curves. The authors conclude that fractal-based algorithms can be use-
ful for detecting brain tumors in MRI and can offer a cost-effective
alternative to more complex and expensive methods.

• The paper [23] also presented a novel approach to modeling images using Markov random fields, which was different from the traditional approaches
that used Fourier transforms and other frequency domain methods. This
new approach allowed the authors to model complex interactions between
neighboring pixels, and capture the spatial coherence of the image. The
paper was highly influential in the field of computer vision and image
processing, and provided a theoretical foundation for many subsequent
works in the area of probabilistic modeling and Bayesian inference for
image restoration and modeling.

• The paper [24] also introduced a novel method for initializing the re-
gion competition algorithm using an initial over-segmentation obtained
through a hierarchical clustering approach. This initialization method
helped to improve the accuracy and speed of the region competition
algorithm by reducing the number of iterations required to converge to
a good segmentation.

Furthermore, the authors proposed an extension of the region competition framework for interactive segmentation, which involves incorporating user
feedback into the segmentation process. The proposed method allows the
user to modify the segmentation interactively by selecting and adjusting
regions based on their visual appearance.



Overall, the paper "Region Competition" by Zhu and Yuille has had a
significant impact on the field of image segmentation and has inspired
many subsequent works that build upon the region competition frame-
work.

• The paper "Fast Multiscale Image Segmentation" by Sharon, Brandt, and Basri [25] presents a novel approach for efficient multiscale image
segmentation. The authors highlight the importance of multiscale seg-
mentation in capturing different levels of detail in an image and the
limitations of traditional single-scale segmentation methods. To address
these limitations, they propose a hierarchical clustering algorithm based
on grouping pixels into regions at multiple scales using a min-tree data
structure. The approach is demonstrated on various images, and the
results show improved accuracy and consistency in segmentation at mul-
tiple scales with significantly faster processing times compared to other
popular segmentation techniques.

• The paper "Automatic Tumor Segmentation Using Knowledge-Based Techniques" by Clark et al. [26] presents a novel approach to medical image analysis, specifically for automatic tumor segmentation. The
authors identified the limitations of traditional segmentation techniques
and proposed a hybrid approach that combined image processing al-
gorithms with a knowledge-based system. The knowledge-based system
incorporated expert knowledge about tumor characteristics and used a
probabilistic model to estimate the probability of a pixel belonging to a
tumor. The authors evaluated their approach on MRI brain scans and
showed that it achieved high accuracy and consistency, outperforming
other segmentation techniques in terms of accuracy and computational
efficiency.

• The paper "Multilevel Segmentation and Integrated Bayesian Model Classification with an Application to Brain Tumor Segmentation" by Corso et al. [27] is a significant contribution to the field of medical image analysis. The paper introduced a novel method for automatic brain tumor
segmentation that combines multilevel segmentation with an integrated
Bayesian model classification. The authors emphasized the importance of
brain tumor segmentation in medical image analysis and the limitations
of traditional segmentation methods. Their proposed method involves
segmenting the brain image at multiple levels of resolution using a hi-
erarchical image segmentation algorithm and classifying the segmented
regions as either tumor or non-tumor based on a set of image features
using an integrated Bayesian model. The authors demonstrated the effec-
tiveness of their approach on a set of magnetic resonance imaging (MRI)
scans of the brain, showing high sensitivity and specificity and outper-
forming other popular segmentation techniques in terms of accuracy and
computational efficiency.

• The paper "Automatic Segmentation of Non-Enhancing Brain Tumors in Magnetic Resonance Images" by Fletcher-Heath et al., published in 2001 [28], is an important contribution to the field of medical image
analysis. The authors proposed a novel approach to automatically segment
non-enhancing brain tumors in magnetic resonance imaging (MRI) scans,
which are typically harder to detect than enhancing tumors. They
combined region growing and clustering algorithms with a set of image
features such as intensity, texture, and shape to segment the tumors.
The authors demonstrated the effectiveness of their approach on a set
of MRI scans of the brain, showing that their method was able to
accurately and consistently segment non-enhancing tumors in the images
with high sensitivity and specificity. They compared their method to
other popular segmentation techniques and demonstrated its superiority
in terms of accuracy and computational efficiency. The paper highlights
the importance of automatic segmentation in medical image analysis,
where traditional approaches may be limited by inter- and intra-observer
variability and require manual interaction.

• The paper "A Brain Tumor Segmentation Framework Based on Outlier Detection" by Prastawa et al. [29] proposes a new approach to automatic
brain tumor segmentation in magnetic resonance imaging (MRI) scans.
The method is based on the detection of outliers in a statistical model of
healthy brain tissue. The authors explain the challenges associated with
accurately identifying tumor boundaries and present a framework that
involves constructing a statistical model of healthy brain tissue using a
training set of MRI scans. The model is used to identify outliers in a new
test image, which are assumed to correspond to tumor regions. These
outliers are then refined using a region growing algorithm to produce
a final tumor segmentation. The authors demonstrate the effectiveness
of their approach on a set of MRI scans with various types of tumors
and show that their method outperforms other popular segmentation
techniques in terms of accuracy and computational efficiency.

• The paper "Integrated Segmentation and Classification Approach Applied to Multiple Sclerosis Analysis" [30] by Akselrod-Ballin et al., presented at
the IEEE Conference on Computer Vision and Pattern Recognition in
2006, is a significant contribution to the field of medical image analysis.
The paper proposed a novel approach for the segmentation and classifi-
cation of multiple sclerosis (MS) lesions in magnetic resonance imaging
(MRI) scans of the brain. The authors began by highlighting the impor-
tance of accurate segmentation and classification of MS lesions in the
diagnosis and monitoring of the disease. They noted that traditional seg-
mentation approaches often require manual interaction and are subject to
inter- and intra-observer variability, which can limit their effectiveness. To
address these limitations, the authors proposed an integrated approach
that involved combining segmentation and classification algorithms into a
single framework. The approach used a region-growing algorithm to seg-
ment MS lesions in the MRI scans, followed by a classification algorithm
that identified the type of lesion based on its shape and location in the
brain. The authors demonstrated the effectiveness of their approach on a
set of MRI scans of MS patients, showing that their method was able
to accurately and consistently segment and classify lesions in the images,



with high sensitivity and specificity. They also compared their method to
other popular segmentation and classification techniques and showed that
it outperformed them in terms of accuracy and computational efficiency.

• Leemput et al. [31] proposed a new automated method for segmenting multiple sclerosis (MS) lesions in magnetic resonance imaging (MRI)
data using a probabilistic model that described the intensity distribution
of healthy brain tissue and MS lesions. They constructed the model
using a training set of MRI scans from patients with MS and segmented
the lesions by identifying voxels that were outliers with respect to the
model. The authors demonstrated the effectiveness of their method on
a set of MRI scans from 14 MS patients, showing that it achieved good
segmentation accuracy compared to manual segmentation. Additionally,
they compared their method to other popular segmentation techniques
and demonstrated that it outperformed them in terms of accuracy and
computational efficiency. The paper’s contribution was the use of model
outlier detection, which enabled a more robust and automated approach
compared to traditional threshold-based methods and has since been
widely used in the segmentation of MS lesions and other types of brain
abnormalities.

• The paper "Hierarchical Segmentation of Multiple Sclerosis Lesions in Multi-Sequence MRI" [32] by Dugas-Phocion et al. describes a novel approach
for segmenting multiple sclerosis (MS) lesions in multi-sequence magnetic
resonance imaging (MRI) data. The method utilizes a hierarchical seg-
mentation approach that starts with coarse segmentations of the entire
brain and then gradually refines the segmentation to focus on the MS
lesions, using information from multiple MRI sequences. The authors
evaluated their method on a dataset of MRI scans from 12 MS patients
and showed that it achieved good segmentation accuracy, particularly
for small and irregularly shaped lesions that were difficult to segment
using traditional methods. One of the main contributions of the paper
was the use of a hierarchical approach, which allowed for more efficient



and accurate segmentation by leveraging information at multiple levels
of detail.



CHAPTER 3

Methodology

3.1 Introduction to Python


Python is a popular programming language that is widely used for a variety
of applications, including image processing. Python 3 is the current major version of
the language, and it includes many powerful features and libraries that make
it a great choice for image processing.
There are several libraries in Python3 that are commonly used for image
processing, including:

3.1.1 NumPy
NumPy is a library for numerical computing in Python. It provides
powerful array operations and functions, which are essential for processing and
manipulating images.

Figure 3.1: Methods involved in Numpy library

• NumPy is a powerful library for numerical computing in Python, which is widely used for scientific computing and data analysis. It provides a powerful array object and a range of functions for working with arrays.
Here are some broad features of NumPy:

• Arrays: NumPy provides a powerful array object that can handle large,
multidimensional arrays of data. It provides functions for creating,
manipulating, and accessing arrays. Arrays in NumPy are much more
efficient than regular Python lists for numerical calculations.

• Mathematical functions: NumPy provides a wide range of mathematical functions for numerical computations, including basic arithmetic operations, trigonometric functions, logarithmic functions, and more. These
functions can operate on arrays or individual elements.

• Linear algebra: NumPy provides a range of linear algebra functions, including matrix operations, eigenvalues and eigenvectors, and matrix
decompositions.

• Fourier transforms: NumPy provides functions for computing fast Fourier transforms, which are commonly used in signal processing and image
analysis.

• Random number generation: NumPy includes a random number generation module, which can generate arrays of random numbers with various
distributions.

• Integration with other libraries: NumPy integrates well with other scien-
tific computing libraries in Python, including SciPy, pandas, and scikit-
learn.

NumPy is an essential library for many scientific and data analysis applications
in Python. Its powerful array object and mathematical functions make it ideal
for handling large amounts of data and performing complex computations
efficiently.
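As a small illustration of why NumPy arrays are convenient for image work, the sketch below scales a grayscale "image" to the [0, 1] range, a common preprocessing step before feeding pixels to a network (the data here is random, standing in for an actual MRI slice):

```python
import numpy as np

# A grayscale image can be represented as a 2-D uint8 array of
# pixel intensities in [0, 255]. (Synthetic data for illustration.)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Element-wise math over the whole array, no explicit loops:
# normalize the pixel intensities to the [0, 1] range.
norm = img.astype(np.float32) / 255.0

print(norm.min() >= 0.0 and norm.max() <= 1.0)  # → True
print(img.shape, norm.dtype)
```

The same one-line expression works unchanged on a full-resolution scan, which is what makes NumPy arrays far faster than nested Python lists for this kind of computation.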

3.1.2 OpenCV
OpenCV is an open-source computer vision library that provides a wide
range of image processing functions, including image filtering, feature detection,



and object recognition.

Figure 3.2: Applications of OpenCV

Here are some broad features of OpenCV:

• Image and video input/output: OpenCV provides functions for reading and writing image and video files in various formats, including popular
formats like JPEG, PNG, and MPEG.

• Image and video processing: OpenCV provides a wide range of image and
video processing functions, including filtering, feature detection, object
detection, segmentation, and more.

• Machine learning: OpenCV provides functions for machine learning tasks such as classification, clustering, and regression. It also includes pre-
trained models for object detection, face recognition, and more.

• Real-time computer vision: OpenCV provides functions for real-time computer vision, including object tracking, gesture recognition, and camera
calibration.

• User interface: OpenCV includes a user interface module for creating graphical user interfaces (GUIs) to display images and video, and interact
with user input.

• Integration with other libraries: OpenCV integrates well with other Python libraries, including NumPy and scikit-learn.

OpenCV is a powerful library for image and video processing, and is used in a wide range of applications, including robotics, augmented reality,



and medical imaging. Its powerful functions for image and video processing,
combined with its support for machine learning, make it an essential tool for
many computer vision and machine learning applications in Python.

3.1.3 Pillow
Pillow is a fork of the Python Imaging Library (PIL), which provides basic
image processing functions like image cropping, resizing, and filtering.
Here are some broad features of Pillow:

• Image input/output: Pillow provides functions for opening and saving image files in various formats, including JPEG, PNG, BMP, and more.

• Image manipulation: Pillow provides a range of image manipulation functions, including resizing, cropping, rotating, and flipping images. It
also includes functions for applying various filters to images, including
blurring, sharpening, and edge detection.

• Image enhancements: Pillow includes functions for enhancing image quality, such as adjusting brightness, contrast, and color balance.

• Text and graphics: Pillow provides functions for drawing text and graph-
ics on images, including shapes like lines, rectangles, and circles.

• Integration with other libraries: Pillow integrates well with other Python
libraries, including NumPy and OpenCV.

Pillow is a simple and easy-to-use image processing library, which is ideal for basic image processing tasks like opening, manipulating, and saving images.
Its broad support for various image formats, combined with its easy-to-use
functions, make it a great choice for a wide range of image processing
applications in Python.
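A short sketch of typical Pillow operations is shown below; the image is created in memory rather than loaded from disk, purely for illustration:

```python
from PIL import Image, ImageFilter

# Create a small RGB image in memory (stand-in for a loaded file).
img = Image.new("RGB", (64, 64), color=(0, 0, 0))

# Typical basic operations: resize, rotate, convert to grayscale, blur.
resized = img.resize((32, 32))
rotated = resized.rotate(90)
gray = rotated.convert("L")          # "L" = single-channel grayscale
smooth = gray.filter(ImageFilter.GaussianBlur(radius=1))

print(smooth.size, smooth.mode)  # → (32, 32) L
```

In practice the first line would be `Image.open("scan.png")`, and `smooth.save(...)` would write the result back out in any supported format.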

3.1.4 Scikit-image
Scikit-image is a library for image processing and computer vision in
Python. It provides a range of functions for image segmentation, feature
extraction, and object detection.



Figure 3.3: Features involved in Scikit-image library
Below are some features of scikit-image:

• Image input/output: scikit-image provides functions for reading and writing image files in various formats, including popular formats like
JPEG and PNG.

• Image manipulation: scikit-image includes a range of image manipulation functions, including resizing, cropping, and flipping images. It
also includes functions for applying filters to images, such as blurring,
sharpening, and edge detection.

• Image segmentation: scikit-image provides functions for segmenting images into different regions based on various features, such as intensity or
color.

• Feature extraction: scikit-image includes functions for extracting features from images, such as corners, edges, and texture features.

• Image restoration: scikit-image includes functions for restoring degraded images, such as removing noise and correcting for motion blur.

• Integration with other libraries: scikit-image integrates well with other scientific computing and data analysis libraries in Python, such as NumPy, SciPy, and scikit-learn.



Scikit-image is a powerful library for image processing, and is used in a wide
range of applications, including medical imaging, robotics, and remote sensing.
Its range of functions for image manipulation, segmentation, and feature
extraction make it an essential tool for many image processing applications in
Python.
Using these libraries, it is possible to perform a wide range of image pro-
cessing techniques in Python3. This includes tasks such as image enhancement,
filtering, segmentation, object detection, and more. With the help of Python3
and its libraries, it is easy to write custom image processing algorithms and
perform complex operations on large sets of images.

3.2 Introduction to GUI


GUI stands for Graphical User Interface. It is a type of interface that
allows users to interact with software and computer systems using graphical
elements such as icons, windows, buttons, and menus, rather than typing text
commands.

Figure 3.4: Sample GUI

GUIs provide a more intuitive and user-friendly way to interact with computers compared to command-line interfaces, which require users to enter



text commands to perform tasks. With GUIs, users can perform tasks by
clicking on icons and buttons, selecting options from menus, and dragging and
dropping elements using a mouse or other pointing device.
GUIs are used in a wide range of software applications, including operating
systems, office productivity software, web browsers, and media players, among
others. They are designed to be easy to use and to provide a consistent user
experience across different applications.
GUIs are typically created using programming languages and frameworks
that provide graphical user interface components and tools. Popular GUI
frameworks include Java Swing, Windows Forms, Qt, and GTK+. In addition
to programming, there are also software tools and platforms that allow users
to create GUIs without writing code, such as App Inventor and Bubble.
Overall, GUIs have made it easier for people to interact with computers
and software, and have played a significant role in the widespread adoption
of computing technology in everyday life.

3.3 Convolutional Neural Networks


Convolutional Neural Networks (CNNs) are a type of artificial neural
network that are particularly well-suited to analyzing images and other types
of multi-dimensional data. They were first introduced in the 1980s, but their
popularity exploded in the 2010s, as computer vision applications began to
gain traction.
The basic idea behind CNNs is to use a series of convolutional layers
to extract features from an input image or other multi-dimensional data. A
convolutional layer works by applying a set of filters, or kernels, to the input
data, which perform a series of small, localized operations that are intended
to detect specific patterns or features. These filters are learned during the
training process, so the network can adapt to the specific features of the input
data.
After the convolutional layers, a CNN typically includes one or more fully
connected layers, which are more similar to the layers in a traditional neural



network. These layers use the features extracted by the convolutional layers
to perform classification or other tasks.

Figure 3.5: Convolutional Neural Network architecture
One of the key advantages of CNNs is their ability to handle large input
data, such as high-resolution images. By using convolutional layers to extract
features from the data, CNNs are able to reduce the dimensionality of the
input, which makes it easier to process with the fully connected layers.
CNNs have proven to be very effective at a wide range of tasks, including
image classification, object detection, segmentation, and more. They have
been used in many real-world applications, including self-driving cars, medical
imaging, and security systems.
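The filter (kernel) operation described above can be sketched as a naive "valid" 2-D convolution in NumPy — strictly, cross-correlation, as is conventional in deep-learning libraries. This is an illustrative implementation, not the one used in this project:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D cross-correlation (no padding, stride 1),
    the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # each output value is a weighted sum over a local patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where intensity changes left to right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])
print(conv2d(image, edge_kernel))
```

The output is large only at the column where the dark region meets the bright region, which is exactly the "pattern detector" behaviour a learned CNN filter exhibits; during training, the kernel values themselves are learned.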
Convolutional Neural Networks (CNNs) have numerous applications in var-
ious domains. Some of the most notable applications of CNNs are:

• Image and Video Recognition: CNNs are widely used for image and
video recognition tasks, such as object detection, face recognition, and
scene recognition. For example, CNNs are used in security systems to
detect and recognize people and objects in real-time.

• Autonomous Vehicles: CNNs play a critical role in autonomous vehicle technology, enabling vehicles to detect and classify objects in the environment, such as other vehicles, pedestrians, traffic signs, and road markings.

Figure 3.6: Example of Vehicle Identification

• Medical Imaging: CNNs are used in medical imaging applications, such as MRI and CT scans, to detect and classify abnormalities in images. They
are also used in the analysis of histology images for cancer diagnosis.

Figure 3.7: CNN for medical applications

• Natural Language Processing (NLP): CNNs are used in NLP applications such as text classification, sentiment analysis, and language translation.
They are also used in speech recognition applications to identify phonemes
and predict the next word in a sentence.

• Gaming: CNNs are used in gaming applications to develop AI agents that can play games at a human-like level. They are also used in game
development for realistic object recognition and image processing.

• Agriculture: CNNs are used in precision agriculture to detect plant diseases, monitor crop growth, and optimize irrigation.



Figure 3.8: Agriculture usage
• Finance: CNNs are used in finance for stock price prediction and fraud
detection.

3.4 Proposed Model


In a neural network, the input layer receives the input data, the hidden
layer processes it, and the output layer produces the result. The neurons in the
hidden layer apply non-linear activation functions to the input data, which en-
ables the network to learn complex relationships between the input and output.
The weights and biases of the neurons are updated during the training process,
using optimization algorithms such as gradient descent. The accuracy of the
network is measured using a loss function, and the network is trained until the
loss is minimized. The trained neural network can then be used for various
tasks, such as image classification, natural language processing, and predictions.

3.4.1 Design of a Neural Network


In the feature extraction phase, convolution, activation and pooling opera-
tions are applied to the image to extract important features. These features
are then used in the classification phase to train the neural network using a
loss function. In the testing phase, the preprocessed image is fed into the
trained neural network, and the network outputs a prediction about whether
the image contains a brain tumor or not. This prediction is compared to the



actual label of the image to evaluate the accuracy of the network.

Figure 3.9: Design of a Brain tumor classification model

This reduces the time consumption and improves the performance of the
automatic brain tumor classification scheme. In the testing phase, the pre-
processed testing image is passed through the trained CNN model. Then, the
predicted label is compared with the actual label to calculate the accuracy
of the system. The overall accuracy of the system is evaluated in terms
of confusion matrix, precision, recall and F1-score. The performance of the
proposed CNN based brain tumor classification system is evaluated with the
help of different evaluation metrics such as accuracy, precision, recall, F1-score,
confusion matrix and ROC curve.

3.4.2 Gradient Descent


The gradient descent algorithm updates the parameters of the model in
order to minimize the loss function. The algorithm calculates the gradient
of the loss function with respect to the model parameters, and then up-
dates the parameters in the direction that reduces the loss. This process is
repeated until the loss function reaches a minimum or convergence criteria
are met. The goal of the gradient descent algorithm is to find the best set



of parameters that minimize the loss function and produce accurate predictions.
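The update rule described above can be illustrated on a one-parameter toy loss (a deliberately simple sketch, not the network's actual loss function):

```python
# Minimize the toy loss L(w) = (w - 3)^2 with gradient descent.
# Its gradient is dL/dw = 2 * (w - 3), so the minimum is at w = 3.
w = 0.0        # initial parameter value
lr = 0.1       # learning rate (step size)
for step in range(100):
    grad = 2 * (w - 3)   # gradient of the loss at the current w
    w -= lr * grad       # step against the gradient to reduce the loss
print(round(w, 4))  # → 3.0
```

Each iteration moves the parameter in the direction that decreases the loss, and the process stops once the loss stops improving — exactly the behaviour described above, just with one parameter instead of millions of CNN weights.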

3.4.3 Algorithm for CNN based classification


Finally, backpropagation is applied to update the weights and biases of
the network. This helps in minimizing the loss and improving the accuracy
of the model. The process of training a neural network involves repeating the
following steps:

1. Feeding the input data through the network to get predictions

2. Comparing the predictions with the actual output labels and computing
the loss

3. Using backpropagation to update the weights and biases of the network

Repeat the above steps until the loss reaches a minimum or a certain number
of iterations is reached.
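The three training steps above can be sketched with a simple logistic-regression model on synthetic data. The real system trains a CNN, but the predict / compute-loss / update cycle is the same; all data and hyperparameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic binary labels

w = np.zeros(2)
b = 0.0
lr = 0.5
for epoch in range(200):
    # 1. feed the input data through the model to get predictions
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # 2. compare predictions with the labels: binary cross-entropy loss
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # 3. backpropagation: gradients of the loss w.r.t. the parameters,
    #    then a gradient-descent update of the weights and bias
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(acc > 0.9)  # → True
```

The loop terminates after a fixed number of epochs; in practice one would also stop early once the loss plateaus, as the text notes.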
The dataset is a mix of both real-case MRI images of patients and images
from benchmarking datasets. The combination of real-case and benchmark
images is used to train the convolutional neural network for automatic brain
tumor classification. The tumor images are collected from Radiopaedia and
non-tumor images are collected from the Brain Tumor Image Segmentation
Benchmark (BRATS) 2015 testing dataset.

Figure 3.10: Sample MRI images from the dataset

3.4.4 Model Building


Batch normalization helps to speed up training by reducing the internal
covariate shift, which is a phenomenon where the distribution of the inputs



to a layer in a neural network changes during training, causing hidden layer
activations to change in unpredictable ways that slow down training.
By normalizing the inputs to a layer for each mini-batch, batch normal-
ization helps to prevent internal covariate shift, allowing for faster and more
stable training.
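The normalization described above can be sketched as a batch-normalization forward pass in NumPy. This shows training-time statistics only; a real layer also learns the scale (gamma) and shift (beta) parameters and tracks running averages for use at inference time:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalization forward pass for a mini-batch of activations.

    Normalizes each feature to zero mean / unit variance over the batch,
    then applies a scale (gamma) and shift (beta).
    """
    mean = x.mean(axis=0)            # per-feature mean over the batch
    var = x.var(axis=0)              # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# two features on very different scales, as raw activations often are
batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch)
print(np.allclose(out.mean(axis=0), 0.0, atol=1e-6))  # → True
```

After normalization, both features have comparable scale regardless of their raw ranges, which is what keeps the input distribution to the next layer stable across mini-batches.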

Figure 3.11: Model Summary



CHAPTER 4

Architecture

4.1 Network Architecture


The architecture used for brain tumor identification typically involves a
deep learning model based on convolutional neural networks (CNNs). CNNs
are a type of neural network that is well-suited for image recognition tasks,
making them an ideal choice for identifying brain tumors from medical images
such as MRIs.

The CNN model is made up of several layers of interconnected neurons, or nodes, arranged in different kinds of layers. Typically, the first layer of a CNN is a convolutional layer that uses convolution operations to extract features from the input image. This layer produces a set of feature maps that indicate the presence of different visual patterns in the input image.

After the convolutional layer, the CNN typically includes pooling layers,
which reduce the spatial dimension of the feature maps by down-sampling
them. This helps to reduce the number of parameters in the model and
improve its computational efficiency.

The CNN then contains one or more fully connected layers that perform
classification using the features extracted by the earlier layers. The output
of the fully connected layers is typically fed into a softmax layer, which
produces a probability distribution over the brain tumour classes the model
has been trained to recognise.
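The softmax step at the end of the network can be sketched in plain NumPy; the two-class logits below are hypothetical values standing in for the fully connected layer's output, not numbers from this project's model.

```python
import numpy as np

def softmax(logits):
    """Turn raw fully connected outputs into a probability distribution."""
    z = logits - np.max(logits)   # shift by the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Hypothetical two-class logits (tumour vs. non-tumour): the larger logit
# maps to the larger probability, and the probabilities sum to one.
probs = softmax(np.array([2.0, 0.5]))
```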

To train the model, a large dataset of labeled brain MRI images is required.
The model is then trained to predict the correct label for each image in the

dataset, with the goal of minimizing the difference between the predicted and
true labels. The model can be fine-tuned or retrained on new data to improve
its accuracy and adapt it to new datasets.

Once trained, the model can be used to detect brain tumours in new
MRI scans. When an input image is passed through the trained CNN, the
model produces a probability distribution over the tumour classes. The
predicted label for the image is then the class with the highest probability.

4.2 Module Division


This outlines the proposed system architecture that we plan to develop. It
consists of six sequential steps, starting with taking an input image from the
dataset. The image then undergoes pre-processing and enhancement before
being segmented using binary thresholding. A convolutional neural network
(CNN) is then used to classify the segmented image for the presence of a
brain tumour. Once all of these steps are complete, the final output is
displayed.

Figure 4.1: Module Division

Each module in the proposed architecture serves a unique purpose and is



essential to the overall system’s functionality. The architecture also includes
a designated testing and training dataset, consisting of approximately 2000
images sourced from Kaggle. The input image undergoes pre-processing, which
includes the application of noise filters such as the median and bilateral filters.
The image is then enhanced with edge detection using the Sobel filter.
The next step segments the image using binary thresholding, followed by
morphological operations to further refine the segmentation. Finally, the
image is classified using a convolutional neural network to predict the
presence or absence of a brain tumor.

4.3 Image Preprocessing and Image Enhancement

4.3.1 Image Pre-Processing


The approximately 1900 brain MRI images used in this study were obtained
from Kaggle and include healthy, benign, and malignant cases. The first
stage of the proposed system design uses these MRI images as input.
Pre-processing, which involves reducing impulse noise and resizing the
image, is an important step in improving the quality of the brain MRI scan.
The brain MRI image is first converted to grayscale. After that, the noise
in the image is removed using an adaptive bilateral filtering approach. This
stage is essential for raising the overall diagnostic and classification accuracy.

4.3.2 Image Acquisition from dataset


The process of acquiring an image from the collection and processing it
further is known as image acquisition. Since processing cannot be done without
an input image, it is usually the first stage in the workflow. Before
further analysis, the acquired image may need to go through pre-processing
and enhancement procedures. In this project, the file path of an image
saved locally on the computer is supplied, and the image is then processed
further in accordance with the proposed system architecture.

4.3.3 Convert the image from one color space to another


OpenCV provides over 150 color-space conversion methods. To perform
color conversion, the function cv2.cvtColor(input_image, flag) is used, where
the flag parameter specifies the type of conversion to be performed. In
this project, the input image is converted to a grayscale image, which is a
commonly used representation in medical image analysis. This conversion is
achieved using the appropriate flag value passed to the cv2.cvtColor() function.

4.3.4 Filters
Filters are mostly employed in image processing to suppress high-frequency
noise in the image.

• Median Filter: The median filter is a non-linear smoothing technique
frequently used in image processing to reduce noise in pictures. It sorts
the pixel values within a specified window in numerical order and replaces
the pixel under consideration with the median of those values. Unlike
linear filtering techniques, median filtering is resistant to impulse noise
such as salt-and-pepper noise, which appears as isolated white and dark
spots in the image, because a single outlier pixel cannot shift the median.

• Bilateral Filter: Bilateral filtering is a noise-reducing smoothing filter
that replaces each pixel's intensity with a weighted average of the intensity
values of neighbouring pixels. The filter smooths pictures while maintaining
edges by employing a non-linear combination of nearby pixel values, with
weights that follow a Gaussian distribution in both the spatial and intensity
domains.

4.4 Image Enhancement


Image enhancement techniques can be used to improve the overall contrast,
sharpness, and brightness of an image, making it easier to interpret and
analyze. Spatial domain techniques include methods
such as histogram equalization, contrast stretching, and gamma correction.
Transform domain techniques involve transforming the image into a different
domain, such as Fourier or wavelet domain, applying enhancements, and then
transforming back to the spatial domain.

Edge detection is a crucial step in image segmentation and object recog-


nition. It helps to identify the boundaries of objects or regions in an image
and can be performed using techniques such as the Sobel operator, Canny
edge detector, and Laplacian of Gaussian. These techniques can detect edges
of varying strengths and sizes, allowing for accurate segmentation of objects.

4.4.1 Sobel Filter


A common edge detection approach is the Sobel filter, which estimates
the gradient of the image intensity at every pixel. It does this by
convolving the image with two kernels, one sensitive to horizontal changes
and one to vertical changes. These kernels are designed to emphasise the
image's edges and make them easier to detect. Once the gradient has been
computed, its magnitude and direction give the strength and orientation of
the edges. This information can be used for a number of tasks, including
object identification and image segmentation.

1. We calculate two derivatives:

• Horizontal changes: computed by convolving I with an odd-sized
kernel Gx. For a kernel size of 3, Gx is:

           [ -1  0  +1 ]
      Gx = [ -2  0  +2 ]
           [ -1  0  +1 ]

• Vertical changes: computed by convolving I with an odd-sized
kernel Gy. For a kernel size of 3, Gy is:

           [ -1  -2  -1 ]
      Gy = [  0   0   0 ]
           [ +1  +2  +1 ]

2. We estimate the gradient magnitude at each pixel location in the picture
by combining the Sobel filter results in the x and y directions:

      G = (Gx^2 + Gy^2)^(1/2)                              (4.1)

3. Although sometimes the following simpler approximation is used:

      G = |Gx| + |Gy|                                      (4.2)

It is important to note that the Sobel filter is a type of linear filter, mean-
ing it applies a linear transformation to the image intensities. It operates by
convolving the image with a small kernel matrix, which consists of coefficients
that are used to compute the approximation of the image gradient at each
pixel. The filter is designed to be sensitive to vertical and horizontal edges in
the image, which makes it useful for detecting features such as edges, corners,
and contours.

Overall, the application of the Sobel filter can improve the visual quality
of the image, making it easier for human observers or machine learning algo-
rithms to detect and classify objects in the image.



4.5 Image Segmentation using Binary Threshold
A crucial stage in many image processing and computer vision applications
is image segmentation. It is the process of breaking an image up
into various sections or segments, each of which represents a distinct object
or aspect of the image. Image segmentation aims to streamline and transform
an image’s representation into something more relevant and understandable.

Several methods, including thresholding, feature extraction, clustering,
region growing, and machine learning-based approaches, can be used to
segment images. The choice of segmentation approach depends on the
characteristics of the image and the particular application; each technique
has its own benefits and limits.

Image segmentation has many applications in various fields, including med-


ical imaging, surveillance, robotics, and remote sensing. In medical imaging,
image segmentation is used to locate and delineate organs and tissues, identify
tumors and lesions, and assist in diagnosis and treatment planning. In surveil-
lance and robotics, image segmentation is used to detect and track objects and
people, and in remote sensing, it is used to extract land cover information,
monitor vegetation, and detect changes in the environment.

Segmentation methods are important because they can discover or identify
the abnormal region in an image, which allows the size, volume, position,
texture, and shape of the extracted region to be analysed. By maintaining
the threshold information during MR image segmentation, the damaged
regions can be detected more precisely. Segmentation also builds on the
long-standing idea that pixels located close together tend to have
comparable properties and features.



4.5.1 Thresholding
In OpenCV, the cv2.threshold() function is used for thresholding. It takes
the input image, threshold value, and maximum value as input parameters
and returns the thresholded image. There are different types of thresholding
techniques available such as binary thresholding, adaptive thresholding, and
Otsu’s thresholding, and the appropriate method can be chosen based on the
image characteristics and the desired outcome.

cv2.threshold(src, thresh, maxval, type[, dst])

The function supports a variety of thresholding techniques, including:

1. Binary thresholding: This is the simplest type of thresholding where


all the pixel values above a certain threshold value are set to 255 (white)
and all the pixel values below the threshold value are set to 0 (black).

2. Inverted binary thresholding: This is similar to binary thresholding,


but the roles of white and black colors are reversed.

3. Threshold truncation: In this type of thresholding, all pixel values


above the threshold value are set to the threshold value.

4. Threshold to zero: In this type of thresholding, all pixel values below


the threshold value are set to 0.

5. Inverted threshold to zero: This is similar to threshold to zero, but


the roles of white and black colors are reversed.

The function returns the computed threshold value and the thresholded image.

1. src - refers to the input image (i.e., the image that is to be thresholded).
The thresholded image is the output image that is generated by the
thresholding process, where each pixel is assigned a binary value (either
0 or 255) based on whether its intensity value is below or above the
threshold value.



2. thresh - refers to the threshold value that is used to convert a grayscale
image into a binary image. The threshold value determines which pixels
in the grayscale image are considered as part of the object of interest
(foreground) and which pixels are considered as background. Pixels with
intensities higher than the threshold value are usually set to white (255)
and pixels with intensities lower than the threshold value are set to
black (0) in the resulting binary image. The threshold value can be set
manually or computed automatically using various algorithms such as
Otsu’s method.

3. maxval - refers to the maximum value that a pixel can take in the
output image. When a pixel value in the input image is greater than the
threshold value, it is assigned the maximum value specified by maxval.
For example, if maxval is set to 255, the
output pixel will have a value of 255. This is typically used to highlight
the areas of an image that are of interest, such as the edges or the
objects present in the image.

4. type - The ”type” parameter in the threshold function specifies the type
of thresholding to be performed. There are different types of thresh-
olding that can be used depending on the requirements of the application.

The possible values for ”type” parameter are:

• cv2.THRESH_BINARY: If the pixel intensity value is greater than the thresh-


old, it is assigned the maximum value (specified by the ”maxval” pa-
rameter), otherwise it is assigned 0.

• cv2.THRESH_BINARY_INV: If the pixel intensity value is greater than the


threshold, it is assigned 0, otherwise it is assigned the maximum value.

• cv2.THRESH_TRUNC: If the pixel intensity value is greater than the thresh-


old, it is assigned the threshold value, otherwise it is assigned its original
value.



• cv2.THRESH_TOZERO: If the pixel intensity value is greater than the thresh-
old, it is assigned its original value, otherwise it is assigned 0.

• cv2.THRESH_TOZERO_INV: If the pixel intensity value is greater than the
threshold, it is assigned 0, otherwise it is assigned its original value.

4.5.2 Morphological Operations


Morphological operations use a structuring element to produce an output
image of the same size as the input image. Each pixel value in the output
image is determined based on a comparison of the corresponding pixel in the
input image with its neighbors. Morphological techniques are often used in
conjunction with segmentation techniques, and typically process binary images.
Erosion and dilation are the two basic morphological operations and are
often used together. Erosion removes pixels from object boundaries, while
dilation adds pixels to them. Opening applies erosion first, separating
touching objects and removing small clusters of noise pixels, and then
dilation, which restores the remaining objects to roughly their original
size. The closing operation works in the opposite order and is used to fill
small holes in an image region while maintaining the original object sizes.

Watershed Method: This approach views an image’s gradient magni-


tude as a topographic surface, where high gradients represent peaks and low
gradients represent valleys. The process begins by filling each isolated valley
with differently coloured water. As the water level rises, water from different
valleys will merge. To prevent this, barriers are built where the water merges.
The process of filling water and building barriers continues until all the peaks
are submerged. The barriers created during this process represent the image
segmentation result.



4.6 Tumor classification using CNN
Classification is a useful method for identifying images, especially medical
imaging. It involves predicting which class an image belongs to based on its
features. Convolutional Neural Networks (CNNs) are a type of Deep Learning
algorithm that can perform image classification tasks effectively. They can take
in an input image and differentiate between different objects in the image by
assigning importance to various aspects. Unlike other classification algorithms,
CNNs require less preprocessing. They can learn filters and characteristics on
their own through training. CNNs can capture spatial and temporal depen-
dencies in images and reduce the number of parameters involved, which makes
it easier to process the images while maintaining critical features. Overall,
the role of a CNN is to reduce the complexity of images while preserving
important features to make accurate predictions.

For this step we need to import Keras and other packages that we’re going
to use in building the CNN. Import the following packages:

• Sequential is used to initialize the neural network.

• Convolution2D is used to add the convolutional layers that operate on the images.

• MaxPooling2D layer is used to add the pooling layers.

4.6.1 Sequential
• To initialize the neural network, we create an object of the Sequential
class.

• classifier = Sequential()

4.6.2 Pooling
• The Pooling layer in a Convolutional Neural Network is designed to
shrink the size of the convolved features, leading to a reduction in the



amount of computation required to process the data. This reduction
in dimensionality helps to extract important features that are invariant
to rotation and position, which facilitates the effective training of the
model. In other words, the Pooling layer helps to identify and extract
the most dominant and relevant features from the convolved features,
which can then be used for accurate classification and prediction.

• Max pooling is a common technique used in convolutional neural net-


works (CNNs) to reduce the spatial dimensions (size) of the feature maps
produced by convolutional layers. It is achieved by dividing the feature
map into non-overlapping rectangular regions and selecting the maximum
value within each region to represent that region in the next layer. The
typical pool size is 2x2, which reduces the size of the feature map by
a factor of 2 in each dimension, while preserving the most important
information in the feature map. This process of reducing the size of
the feature map helps in reducing the computation required to pro-
cess the data in the subsequent layers of the network, and also helps in
avoiding overfitting by reducing the number of parameters in the network.

• classifier.add(MaxPooling2D(pool_size=(2, 2)))
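To make concrete what the 2x2 max pooling layer computes, here is the same operation written in plain NumPy; the Keras layer applies this independently to every channel of every image in the batch.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2: keep the largest value per block."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]   # drop any odd remainder
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A 4x4 feature map shrinks to 2x2; each output is the max of one block.
fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 9, 2],
                 [3, 2, 4, 8]])
pooled = max_pool_2x2(fmap)
```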



CHAPTER 5

Results

5.1 Generated Results


In the proposed work, a convolutional neural network is used
for automatic brain tumor classification. A pre-trained model is used to
reduce the computation time and to increase the accuracy. The loss function
measures the quality of the parameters, and it is minimized using the
gradient descent algorithm. The sensitivity of the filter is
reduced by subsampling, and the activation layer controls the signal transfer
from one layer to another. The training process is sped up by using the
rectified linear unit (ReLU) activation function. Every neuron in a fully
connected layer is connected to every neuron in the subsequent layer. Finally, the loss
layer is added at the end to give a feedback to the neural network during
training. The accuracy of the proposed method is compared with the existing
SVM based method and results show that the proposed method has a higher
accuracy and lower computation time compared to the SVM based method.

Figure 5.1: Accuracy of brain tumor classification

In conclusion, the proposed method of using Convolution Neural Network


for automatic brain tumor classification outperforms the existing techniques

such as SVM. The accuracy of the proposed method is higher and the compu-
tation time is lower compared to SVM. The proposed method doesn’t require
separate feature extraction and the feature values are obtained from the CNN
itself, making the process simpler and more efficient. The final classification
results are given as Tumor or Non-Tumor brain based on the probability score
value.

5.1.1 Plotting Losses


In the context of brain tumor identification, plotting losses is a way to
visualize the performance of a machine learning model during the training
process. The loss is a measure of how well the model is able to fit the
training data, and plotting it allows researchers and practitioners to observe
trends and make informed decisions about the training process.
There are several types of losses that can be used for brain tumor identi-
fication, including binary cross-entropy loss, dice loss, and Jaccard loss. The
choice of loss function depends on the specific problem and the data being
used.
For example, binary cross-entropy loss is a common choice for binary clas-
sification problems, such as distinguishing between healthy and tumor tissues.
Dice loss and Jaccard loss, on the other hand, are commonly used for seg-
mentation tasks, where the goal is to accurately label each voxel in the 3D
MRI scan as healthy or tumor tissue. When plotting losses, it is important
to keep an eye out for signs of overfitting or underfitting. Overfitting occurs
when the model is too complex and starts to memorize the training data
instead of learning general patterns. This is indicated by a decrease in loss on
the training set and an increase in loss on the validation set. On the other
hand, underfitting occurs when the model is too simple and is unable to fit
the training data well. This is indicated by an increase in loss on both the
training and validation sets.
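The rule of thumb above can be turned into a small helper that inspects the tails of the two loss curves; the curves and the three-epoch window below are illustrative, not values from this project.

```python
def looks_overfit(train_losses, val_losses, window=3):
    """Flag overfitting: training loss still falling while validation
    loss has been rising over the last `window` epochs."""
    if len(train_losses) < window + 1:
        return False                       # not enough history to judge
    train_falling = train_losses[-1] < train_losses[-window - 1]
    val_rising = val_losses[-1] > val_losses[-window - 1]
    return train_falling and val_rising

# Hypothetical curves: training keeps improving, validation turns around.
train = [0.9, 0.6, 0.4, 0.3, 0.25, 0.2]
val = [0.95, 0.7, 0.5, 0.55, 0.6, 0.7]
```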



Figure 5.2: Epoch vs Loss Plot

5.1.2 Raw Results


The output process in brain tumor classification involves a series of steps
that take the prediction made by the machine learning model and present it
in a form that is easy to understand and interpret for medical professionals
and patients. The steps involved in the output process are:

1. Thresholding: In the case of a binary classification model, the output is


usually a continuous value that indicates the probability of the presence
of a tumor. This value is then thresholded to produce a binary label,
where values above a certain threshold are considered positive (tumor
present) and values below the threshold are considered negative (tumor
not present).

2. Post-processing: The raw predictions made by the model may not always
perfectly align with the ground-truth labels, especially in the case of
segmentation maps. To improve the accuracy of the predictions, post-
processing steps such as morphological operations, connected component
analysis, and false positive reduction techniques can be applied to refine
the output.

3. Visualization: The final output is then visualized, typically as a binary


label or as a segmentation map overlaid on the original MRI image.
This allows medical professionals and patients to easily understand the
results and make informed decisions about the diagnosis and treatment
of the brain tumor.



4. Evaluation: Finally, the performance of the model is evaluated using
metrics such as accuracy, sensitivity, specificity, precision, recall, and F1-
score, to assess its ability to accurately detect and classify brain tumors
in MRI images. These metrics provide valuable insights into the strengths
and limitations of the model and can guide future improvements to the
model design and training process.
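Step 1 above can be sketched as a one-line decision rule; the 0.5 cut-off is the conventional default rather than a value tuned for this project.

```python
def classify(prob_tumor, threshold=0.5):
    """Map the model's tumour probability to a human-readable label.
    The 0.5 cut-off is the conventional default, not a tuned value."""
    return "Tumor" if prob_tumor >= threshold else "Non-Tumor"
```

Raising the threshold trades sensitivity for specificity: fewer false alarms, but more missed tumours.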

Figure 5.3: Image with Tumor

Figure 5.4: Image with no Tumor

5.1.3 Output Visualization


The output is classified into two types: benign and malignant. A malignant
growth or tumor is cancerous, meaning it can invade and damage nearby
tissues and can spread to other parts of the body (metastasize) through the
bloodstream or lymphatic system. Malignant tumors are usually more serious
and require more intensive treatment, such as surgery, chemotherapy, and
radiation therapy.



Figure 5.5: Malignant Tumor Identified

Figure 5.6: No Tumor Identified


It is important to note that while benign tumors are not cancer, they can
still cause health problems and may need to be removed if they are causing
symptoms or affecting normal bodily functions.

A benign growth or tumor is noncancerous, meaning it does not spread


to other parts of the body or invade nearby tissues. It is usually not life-
threatening and can often be removed without any further treatment being
required.



CHAPTER 6

Conclusions and Future Scope

6.1 Conclusion
This work aims to design a high accuracy, efficient, and low complexity
automatic brain tumor classification system using Convolutional Neural Net-
work (CNN). The traditional methods such as Fuzzy C Means (FCM) based
segmentation and texture and shape feature extraction with SVM and DNN
based classification were not efficient due to high computation time and low
accuracy. The proposed CNN-based classification reduces computation time
and increases accuracy. The system is implemented in Python, using a
network pre-trained on the ImageNet database, with training performed only
on the final layer. Raw pixel values and depth, width, and height features
are extracted from the CNN, and a gradient descent based loss function is
used to improve accuracy. The results show a high training accuracy of
97% and a low validation loss.

6.2 Future Scope


Brain tumor identification is a critical area of research that has the po-
tential to save countless lives. As technology and machine learning algorithms
continue to advance, the future scope of brain tumor identification projects is
vast and exciting. Here are some possible future directions for this field:

1. Improved accuracy: One of the primary goals of brain tumor identi-


fication projects is to improve the accuracy of diagnosis. In the future,
machine learning algorithms will become even more sophisticated, allowing
for more accurate and precise detection of brain tumors.

2. Early detection: Detecting brain tumors at an early stage is critical for
successful treatment. In the future, brain tumor identification projects
will focus on developing algorithms that can detect brain tumors at an
earlier stage, when they are easier to treat.

3. Personalized treatment: Brain tumors can vary greatly from person


to person, and personalized treatment is key to successful outcomes. In
the future, brain tumor identification projects will focus on developing
algorithms that can analyze a patient’s individual tumor and provide
personalized treatment recommendations.

4. Integration with other medical technologies: Brain tumor identifi-


cation projects can be integrated with other medical technologies, such
as robotics and neurosurgery, to improve treatment outcomes. In the
future, we can expect to see brain tumor identification projects working
in tandem with other medical technologies to provide better treatment
options for patients.

5. Large-scale data analysis: With the increasing availability of medical


data, brain tumor identification projects will have access to large data
sets that can be used to improve diagnosis and treatment. In the
future, we can expect to see brain tumor identification projects that use
large-scale data analysis to improve their algorithms and provide better
treatment options for patients.

Overall, the future of brain tumor identification projects is bright and


promising. With continued research and development, we can expect to see
significant advancements in the accuracy, speed, and personalized nature of
brain tumor identification and treatment.



REFERENCES

[1] Khan M Iftekharuddin, Wei Jia, and Ronald Marsh. “Fractal analysis
of tumor in brain MR images”. In: Machine Vision and Applications 13
(2003), pp. 352–362.
[2] Chi-Hoon Lee, Mark Schmidt, Albert Murtha, Aalo Bistritz, Jöerg Sander,
and Russell Greiner. “Segmenting brain tumors with conditional random
fields and support vector machines”. In: Computer Vision for Biomedical
Image Applications: First International Workshop, CVBIA 2005, Beijing,
China, October 21, 2005. Proceedings 1. Springer. 2005, pp. 469–478.
[3] Jason J Corso, Alan Yuille, Nancy L Sicotte, and Arthur W Toga.
“Detection and segmentation of pathological structures by the extended
graph-shifts algorithm”. In: Medical Image Computing and Computer-
Assisted Intervention–MICCAI 2007: 10th International Conference, Bris-
bane, Australia, October 29-November 2, 2007, Proceedings, Part I 10.
Springer. 2007, pp. 985–993.
[4] Dana Cobzas, Neil Birkbeck, Mark Schmidt, Martin Jagersand, and
Albert Murtha. “3D variational brain tumor segmentation using a high
dimensional feature set”. In: 2007 IEEE 11th international conference on
computer vision. IEEE. 2007, pp. 1–8.
[5] Michael Wels, Gustavo Carneiro, Alexander Aplas, Martin Huber, Joachim
Hornegger, and Dorin Comaniciu. “A discriminative model-constrained
graph cuts approach to fully automated pediatric brain tumor segmenta-
tion in 3-D MRI”. In: Lecture Notes in Computer Science 5241 (2008),
p. 67.
[6] Renaud Lopes, P Dubois, Imen Bhouri, Mohamed Hedi Bedoui, Salah
Maouche, and Nacim Betrouni. “Local fractal and multifractal features
for volumic texture characterization”. In: Pattern Recognition 44.8 (2011),
pp. 1690–1697.
[7] Atiq Islam, Khan M Iftekharuddin, Robert J Ogg, Fred H Laningham,
and Bhuvaneswari Sivakumar. “Multifractal modeling, segmentation, pre-
diction, and statistical validation of posterior fossa tumors”. In: Med-
ical Imaging 2008: Computer-Aided Diagnosis. Vol. 6915. SPIE. 2008,
pp. 1036–1047.
[8] Tao Wang, Irene Cheng, Anup Basu, et al. “Fluid vector flow and
applications in brain tumor segmentation”. In: IEEE transactions on
biomedical engineering 56.3 (2009), pp. 781–789.
[9] Michael R Kaus, Simon K Warfield, Arya Nabavi, Peter M Black, Ferenc
A Jolesz, and Ron Kikinis. “Automated segmentation of MR images of
brain tumors”. In: Radiology 218.2 (2001), pp. 586–591.
[10] D Gering, W Grimson, and R Kikinis. “Recognizing deviations from
normalcy for brain tumor segmentation, in Proceedings of International
Conference Medical Image Computation Assist.” In: Intervention (Am-
stelveen, Netherlands) 5 (2005), pp. 508–515.

53
[11] Christos Davatzikos, Dinggang Shen, Ashraf Mohamed, and Stelios K
Kyriacou. “A framework for predictive modeling of anatomical deforma-
tions”. In: IEEE transactions on medical imaging 20.8 (2001), pp. 836–
843.
[12] Nassir Navab, Joachim Hornegger, William M Wells, and Alejandro
Frangi. Medical Image Computing and Computer-Assisted Intervention–
MICCAI 2015: 18th International Conference, Munich, Germany, October
5-9, 2015, Proceedings, Part III. Vol. 9351. Springer, 2015.
[13] Thomas Leung and Jitendra Malik. “Representing and recognizing the
visual appearance of materials using three-dimensional textons”. In: In-
ternational journal of computer vision 43 (2001), pp. 29–44.
[14] Stefan Bauer, Thomas Fejes, Johannes Slotboom, Roland Wiest, Lutz-P
Nolte, and Mauricio Reyes. “Segmentation of brain tumor images based
on integrated hierarchical classification and regularization”. In: MICCAI
BraTS Workshop. Nice: Miccai Society. Vol. 11. 2012.
[15] Ezequiel Geremia, Bjoern H Menze, Nicholas Ayache, et al. “Spatial
decision forests for glioma segmentation in multi-channel MR images”.
In: MICCAI Challenge on Multimodal Brain Tumor Segmentation 34
(2012), pp. 14–18.
[16] Andac Hamamci and Gozde Unal. “Multimodal brain tumor segmentation
using the tumor-cut method on the BraTS dataset”. In: Proc MICCAI-
BraTS (2012), pp. 19–23.
[17] T Riklin Raviv, K Van Leemput, and Bjoern H Menze. “Multi-modal
brain tumor segmentation via latent atlases”. In: Proceedings MICCAI-BraTS
64 (2012).
[18] Apoorva Raghunandan and D R Shilpa. “Design of High-Speed Hybrid
Full Adders using FinFET 18nm Technology”. In: 2019 4th International
Conference on Recent Trends on Electronics, Information, Communication
Technology (RTEICT). 2019, pp. 410–415. doi: 10.1109/RTEICT46194.2019.9016866.
[19] Khan M Iftekharuddin, Mohammad A Islam, Jahangheer Shaik, Carlos
Parra, and Robert Ogg. “Automatic brain tumor detection in MRI:
methodology and statistical validation”. In: Medical Imaging 2005: Image
Processing. Vol. 5747. SPIE. 2005, pp. 2012–2022.
[20] Yoav Freund and Robert E Schapire. “A decision-theoretic generalization
of on-line learning and an application to boosting”. In: Journal of
computer and system sciences 55.1 (1997), pp. 119–139.
[21] Alex P Pentland. “Fractal-based description of natural scenes”. In:
IEEE transactions on pattern analysis and machine intelligence 6 (1984),
pp. 661–674.
[22] Justin M Zook and Khan M Iftekharuddin. “Statistical analysis of fractal-
based brain tumor detection algorithms”. In: Magnetic resonance imaging
23.5 (2005), pp. 671–678.



[23] Stuart Geman and Donald Geman. “Stochastic relaxation, Gibbs distributions,
and the Bayesian restoration of images”. In: Readings in Computer
Vision. Elsevier, 1987, pp. 564–584.
[24] Xi Qu, Zhiwei Xu, Jinxiang Yu, and Jun Zhu. “Understanding local
government debt in China: A regional competition perspective”. en. In:
Reg. Sci. Urban Econ. 98.103859 (Jan. 2023), p. 103859.
[25] E Sharon, A Brandt, and R Basri. “Fast multiscale image segmentation”.
In: Proceedings IEEE Conference on Computer Vision and Pattern
Recognition. CVPR 2000 (Cat. No.PR00662). Hilton Head Island, SC,
USA: IEEE Computer Society, 2000.
[26] M C Clark, L O Hall, D B Goldgof, R Velthuizen, F R Murtagh,
and M S Silbiger. “Automatic tumor segmentation using knowledge-
based techniques”. en. In: IEEE Trans. Med. Imaging 17.2 (Apr. 1998),
pp. 187–201.
[27] Jason J Corso, Eitan Sharon, and Alan Yuille. “Multilevel segmentation
and integrated Bayesian model classification with an application to brain
tumor segmentation”. en. In: Med. Image Comput. Comput. Assist. Interv.
9.Pt 2 (2006), pp. 790–798.
[28] L M Fletcher-Heath, L O Hall, D B Goldgof, and F R Murtagh.
“Automatic segmentation of non-enhancing brain tumors in magnetic
resonance images”. en. In: Artif. Intell. Med. 21.1-3 (Jan. 2001), pp. 43–
63.
[29] Marcel Prastawa, Elizabeth Bullitt, Sean Ho, and Guido Gerig. “A brain
tumor segmentation framework based on outlier detection”. en. In: Med.
Image Anal. 8.3 (Sept. 2004), pp. 275–283.
[30] A Akselrod-Ballin, M Galun, R Basri, A Brandt, M J Gomori, M Filippi,
and P Valsasina. “An integrated segmentation and classification approach
applied to multiple sclerosis analysis”. In: 2006 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition - Volume 1
(CVPR’06). New York, NY, USA: IEEE, 2006.
[31] K V Leemput, F Maes, D Vandermeulen, A Colchester, and P Suetens.
“Automated Segmentation of Multiple Sclerosis Lesions by Model Outlier
Detection”. In: IEEE Trans. on Medical Imaging 20.8 (2001), pp. 677–
688.
[32] G Dugas-Phocion, M A Gonzalez, C Lebrun, S Chanalet, C Bensa,
G Malandain, and N Ayache. “Hierarchical segmentation of multiple
sclerosis lesions in multi-sequence MRI”. In: 2004 2nd IEEE International
Symposium on Biomedical Imaging: Macro to Nano (IEEE Cat No.
04EX821). Arlington, VA, USA: IEEE, 2005.

