BACHELOR OF TECHNOLOGY
IN
Submitted by
SUPERVISOR
Dr. D. Krishna
Associate Professor, Dept. of ECE
March, 2023
Department of Electronics and Communication Engineering
CERTIFICATE
This is to certify that the project titled Brain Tumor Classification using
Convolutional Neural Networks is carried out by
Examiner
Acknowledgement
We avail this opportunity to express our deep sense of gratitude and heartfelt
thanks to Dr. Teegala Vijender Reddy, Chairman, and Sri Teegala
Upender Reddy, Secretary of VCE, for providing a congenial atmosphere to
complete this project successfully.
Keywords: Pooling; Diagnosis; Precision
Table of Contents
4.3 Image Preprocessing and Image Enhancement . . . . . . . . . . . 36
4.3.1 Image Pre-Processing . . . . . . . . . . . . . . . . . . . . . 36
4.3.2 Image Acquisition from dataset . . . . . . . . . . . . . . . 36
4.3.3 Convert the image from one color space to another . . . . 37
4.3.4 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Image Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4.1 Sobel Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.5 Image Segmentation using Binary Threshold . . . . . . . . . . . . 40
4.5.1 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.5.2 Morphological Operations . . . . . . . . . . . . . . . . . . . 43
4.6 Tumor classification using CNN . . . . . . . . . . . . . . . . . . . 44
4.6.1 Sequential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.6.2 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
CHAPTER 5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1 Generated Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.1.1 Plotting Losses . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.2 Raw Results . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.1.3 Output Visualization . . . . . . . . . . . . . . . . . . . . . . 49
CHAPTER 6 Conclusions and Future Scope . . . . . . . . . . . . . 51
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
List of Figures
Abbreviations
Abbreviation Description
Introduction
and may be utilised on a variety of body parts, including the brain, spine,
joints, and other organs, to deliver highly accurate and precise diagnostic
information.
During an MRI scan, the patient is placed on a movable bed that glides into
a large, capsule-shaped machine. The machine aligns the body's hydrogen
atoms using a strong magnetic field. When radio waves are directed at the
body, these aligned atoms are excited and release signals, which are picked
up by the MRI machine's receiving coils. The signals are then analysed by a
computer to produce highly detailed images of the internal body structures.
The brain can develop abnormal growths called tumours. Primary and
metastatic brain tumours are the two major subtypes. Primary brain tumours
begin within the brain, whereas metastatic tumours begin elsewhere in the
body and spread to the brain.
The symptoms of brain tumours can vary based on the size and location of the
tumour, but they might include headaches, seizures, trouble hearing, seeing,
or speaking, and problems with balance and coordination.
1.5 Objective
The objective of a brain tumor classification project is to develop a model
that can accurately classify brain tumors into different categories based on
their characteristics. The aim is to provide a reliable and automated method
for medical professionals to diagnose brain tumors, which can aid in treatment
planning and prognosis determination.
The project may involve analyzing medical images, such as MRI or CT
scans, to identify patterns and features that are indicative of different types
of brain tumors. Machine learning algorithms can be trained on these images
and associated clinical data to create a predictive model that can classify new
brain tumor images accurately.
The ultimate goal of the project is to improve the accuracy and speed of
brain tumor diagnosis, which can lead to better patient outcomes and more
effective treatment planning.
1.7 Motivation
Brain tumors can be classified as malignant or benign, and their causes
and symptoms can vary. Early detection and treatment are crucial for a better
prognosis. Brain tumors can develop when cells grow abnormally and form
a solid mass in the brain. There are two types of brain tumors: primary
and metastatic. Symptoms may differ depending on the size, location, and
type of tumor and can include headaches, nausea, vomiting, and difficulty
walking. CT and MRI scans are used to detect brain tumors, with MRI being
preferred because it is non-invasive, non-ionizing, and produces high-definition
images. MRI has different image sequences, such as flair, T1-weighted, and
T2-weighted images. Brain tumour analysis and identification can be aided by
the use of image processing techniques such as pre-processing, segmentation,
image enhancement, feature extraction, and classification.
Literature Survey
characteristics from brain tumour images, along with a comprehensive
mathematical explanation of the mBm model.
• The authors of this paper[6] investigate the impact of noise on the fractal
dimension of digital images. They add three types of noise (Gaussian,
salt and pepper, and speckle) to the images and estimate the fractal
dimension of both the noisy and non-noisy images. Their results show
that noise can affect the fractal dimension, causing an increase in its
value. The authors also report the corresponding error in terms of
RMSE and estimate the average percentage error in fractal dimension as
an offset for determining the true fractal dimension from noisy images.
• The paper [24] also introduced a novel method for initializing the region
competition algorithm using an initial over-segmentation obtained
through a hierarchical clustering approach. This initialization method
helped to improve the accuracy and speed of the region competition
algorithm by reducing the number of iterations required to converge to
a good segmentation.
Methodology
3.1.1 NumPy
NumPy is a library for numerical computing in Python. It provides
powerful array operations and functions, which are essential for processing and
manipulating images.
At its core, NumPy provides a powerful array object and a range of functions
for working with arrays.
Here are some broad features of NumPy:
• Arrays: NumPy provides a powerful array object that can handle large,
multidimensional arrays of data. It provides functions for creating,
manipulating, and accessing arrays. Arrays in NumPy are much more
efficient than regular Python lists for numerical calculations.
• Integration with other libraries: NumPy integrates well with other scien-
tific computing libraries in Python, including SciPy, pandas, and scikit-
learn.
NumPy is an essential library for many scientific and data analysis applications
in Python. Its powerful array object and mathematical functions make it ideal
for handling large amounts of data and performing complex computations
efficiently.
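The following minimal sketch (using a small synthetic array rather than a real MRI slice) illustrates how an image is represented as a NumPy array and manipulated with vectorised operations:

```python
import numpy as np

# A small synthetic 4x4 grayscale "image"; a real MRI slice would simply be
# a larger 2-D array of pixel intensities.
image = np.array([[ 12,  40,  40,  12],
                  [ 40, 200, 210,  40],
                  [ 40, 220, 230,  40],
                  [ 12,  40,  40,  12]], dtype=np.uint8)

print(image.shape)   # (4, 4): rows and columns
print(image.dtype)   # uint8 pixel intensities

# Vectorised operations act on the whole array at once, e.g. a simple
# contrast scaling without explicit Python loops.
scaled = np.clip(image.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)
print(scaled.max())
```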
3.1.2 OpenCV
OpenCV is an open-source computer vision library that provides a wide
range of image processing functions, including image filtering, feature detection,
• Image and video processing: OpenCV provides a wide range of image and
video processing functions, including filtering, feature detection, object
detection, segmentation, and more.
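As a small sketch of these capabilities, the snippet below smooths and edge-detects a synthetic image; the function calls are standard OpenCV, but the image and parameter values are illustrative assumptions rather than the project's actual settings:

```python
import cv2
import numpy as np

# Synthetic grayscale image standing in for an MRI slice; for a real file,
# cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE) would be used instead.
gray = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(gray, (64, 64), 30, 200, -1)   # bright circular region

# Smooth the image to reduce noise, then detect edges.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)       # hysteresis thresholds 50 and 150

cv2.imwrite("edges.png", edges)
```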
3.1.3 Pillow
Pillow is a fork of the Python Imaging Library (PIL), which provides basic
image processing functions like image cropping, resizing, and filtering.
Here are some broad features of Pillow:
• Text and graphics: Pillow provides functions for drawing text and graph-
ics on images, including shapes like lines, rectangles, and circles.
• Integration with other libraries: Pillow integrates well with other Python
libraries, including NumPy and OpenCV.
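A minimal sketch of these operations is given below, using a blank image in place of a loaded scan (Image.open would be used for a real file):

```python
from PIL import Image, ImageDraw

# Blank RGB image standing in for a loaded scan.
img = Image.new("RGB", (256, 256), "black")

# Basic operations: resizing and cropping (left, upper, right, lower box).
small = img.resize((128, 128))
roi = img.crop((50, 50, 200, 200))

# Drawing: mark a region of interest with a rectangle and a text label.
draw = ImageDraw.Draw(img)
draw.rectangle((50, 50, 200, 200), outline="red", width=2)
draw.text((55, 30), "ROI", fill="red")
img.save("annotated.png")
```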
3.1.4 Scikit-image
Scikit-image is a library for image processing and computer vision in
Python. It provides a range of functions for image segmentation, feature
extraction, and object detection.
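For illustration, the following sketch applies Otsu thresholding and extracts simple region features with scikit-image on a synthetic image; the specific functions chosen here are common examples, not the ones prescribed by this project:

```python
import numpy as np
from skimage import filters, measure

# Synthetic grayscale image with one bright region standing in for a tumour.
gray = np.zeros((128, 128), dtype=float)
gray[40:90, 40:90] = 0.8

# Otsu's method selects a global threshold automatically.
thresh = filters.threshold_otsu(gray)
mask = gray > thresh

# Label connected regions and report simple shape features.
labels = measure.label(mask)
for region in measure.regionprops(labels):
    print(region.area, region.centroid)
```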
• Image and Video Recognition: CNNs are widely used for image and
video recognition tasks, such as object detection, face recognition, and
scene recognition. For example, CNNs are used in security systems to
detect and recognize people and objects in real-time.
This reduces the time consumption and improves the performance of the
automatic brain tumor classification scheme. In the testing phase, the pre-
processed test image is passed through the trained CNN model, and the
predicted label is compared with the actual label to calculate the accuracy
of the system. The overall performance of the proposed CNN-based brain
tumor classification system is evaluated using evaluation metrics such as
accuracy, precision, recall, F1-score, the confusion matrix, and the ROC curve.
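These metrics can be computed from the true and predicted labels, for example with scikit-learn; the label vectors below are placeholders, not actual results:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Placeholder labels: 1 = Tumor, 0 = Non-Tumor.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```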
2. Comparing the predictions with the actual output labels and computing
the loss
Repeat the above steps until the loss reaches a minimum or a certain number
of iterations is reached.
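In Keras this predict-compare-update loop is handled by compile() and fit(); the sketch below uses random stand-in data and a deliberately tiny model only to show the mechanics, not the actual network or dataset used in this work:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Random stand-in data: 100 "images" of size 64x64x1 with binary labels.
X_train = np.random.rand(100, 64, 64, 1)
y_train = np.random.randint(0, 2, size=(100,))

# A deliberately tiny model, just to demonstrate compile() and fit().
model = Sequential([Flatten(input_shape=(64, 64, 1)),
                    Dense(2, activation="softmax")])

# compile() attaches the loss; fit() repeatedly predicts, compares the
# predictions with the labels, computes the loss, and updates the weights
# by gradient descent, stopping after a fixed number of epochs.
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32, validation_split=0.2)
```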
The dataset is a mix of both real-case MRI images of patients and images
from benchmarking datasets. The combination of real-case and benchmark
images is used to train the convolutional neural network for automatic brain
tumor classification. The tumor images are collected from Radiopaedia and
non-tumor images are collected from the Brain Tumor Image Segmentation
Benchmark (BRATS) 2015 testing dataset.
Architecture
After the convolutional layer, the CNN typically includes pooling layers,
which reduce the spatial dimension of the feature maps by down-sampling
them. This helps to reduce the number of parameters in the model and
improve its computational efficiency.
The CNN then consists of one or more fully connected layers that perform
classification using the features extracted by the earlier layers. The output
of these fully connected layers is typically fed to a softmax layer, which
produces a probability distribution over the classes of brain tumours that the
model has been trained to recognise.
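A possible Keras sketch of this layer arrangement is shown below; the number of filters, layer sizes, input shape, and number of classes are illustrative assumptions rather than the exact configuration used in this project:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

num_classes = 2   # e.g. Tumor vs. Non-Tumor (assumed for illustration)

model = Sequential([
    # Convolutional layer: learns local features from the input slice.
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    # Pooling layer: down-samples the feature maps.
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    # Fully connected layers perform the classification.
    Flatten(),
    Dense(128, activation="relu"),
    # Softmax output: a probability distribution over the tumour classes.
    Dense(num_classes, activation="softmax"),
])
model.summary()
```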
To train the model, a large dataset of labeled brain MRI images is required.
The model is then trained to predict the correct label for each image in the
dataset, with the goal of minimizing the difference between the predicted and
true labels. The model can be fine-tuned or retrained on new data to improve
its accuracy and adapt it to new datasets.
Once trained, the model can be used to detect brain tumours in new MRI
scans. When an input image is passed through the trained CNN, the model
produces a probability distribution over the brain tumour classes, and the
predicted label for the image is the class with the highest probability.
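Assuming a trained model such as the one sketched above, prediction reduces to a forward pass followed by an argmax over the softmax output; the random array below merely stands in for a pre-processed MRI slice:

```python
import numpy as np

# Random array standing in for a pre-processed 128x128 grayscale slice
# scaled to [0, 1]; `model` is the trained network sketched earlier.
x = np.random.rand(1, 128, 128, 1)

probs = model.predict(x)[0]               # softmax probabilities, one row
predicted_class = int(np.argmax(probs))   # class with the highest probability
print(predicted_class, probs)
```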
4.3.4 Filters
Filters are mostly employed in image processing to suppress or emphasise
particular frequency components of an image, for example to reduce
high-frequency noise or to highlight edges.
• Horizontal Changes: This is calculated by convolving I with an
odd-sized kernel Gx. For instance, Gx would be calculated as
follows for the kernel of size 3:

G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}
• Vertical Changes: This is calculated by convolving I with an
odd-sized kernel Gy. For instance, Gy would be calculated as
follows for the kernel of size 3:
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix}
G = \sqrt{G_x^2 + G_y^2} \qquad (4.1)
It is important to note that the Sobel filter is a type of linear filter, meaning
it applies a linear transformation to the image intensities. It operates by
convolving the image with a small kernel matrix, which consists of coefficients
that are used to compute the approximation of the image gradient at each
pixel. The filter is designed to be sensitive to vertical and horizontal edges in
the image, which makes it useful for detecting features such as edges, corners,
and contours.
Overall, the application of the Sobel filter can improve the visual quality
of the image, making it easier for human observers or machine learning
algorithms to detect and classify objects in the image.
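A short OpenCV sketch of this operation is given below; it applies the 3x3 Sobel kernels and combines them according to Eq. (4.1), using a synthetic image in place of an actual MRI slice:

```python
import cv2
import numpy as np

# Synthetic grayscale image with a bright square; any 2-D uint8 array
# (e.g. loaded with cv2.imread) would work the same way.
gray = np.zeros((128, 128), dtype=np.uint8)
gray[40:90, 40:90] = 200

# Horizontal and vertical derivatives with 3x3 Sobel kernels (Gx, Gy).
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude G = sqrt(Gx^2 + Gy^2), as in Eq. (4.1).
magnitude = np.sqrt(gx ** 2 + gy ** 2)
edges = np.uint8(np.clip(magnitude, 0, 255))

cv2.imwrite("sobel_edges.png", edges)
```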
Segmentation methods are important for analysing the size, volume, position,
texture, and shape of the extracted region, since they have the capacity to
locate and identify the abnormal component of the image. By making use
of the threshold information during MR image segmentation, the affected
regions can be detected more precisely. The underlying assumption is that
pixels located close together tend to have comparable properties and features.
The function returns the computed threshold value and the thresholded image.
1. src - refers to the input image (i.e., the image that is to be thresholded).
The thresholded image is the output image that is generated by the
thresholding process, where each pixel is assigned a binary value (either
0 or 255) based on whether its intensity value is below or above the
threshold value.
3. maxval - refers to the maximum value that a pixel can take in the
output image. When a pixel value in the input image is greater than
the threshold value, it is assigned the maximum value specified by
maxval. For example, if maxval is set to 255, the output pixel will
have a value of 255. This is typically used to highlight the areas of an
image that are of interest, such as the edges or the objects present in
the image.
4. type - The "type" parameter in the threshold function specifies the type
of thresholding to be performed. There are different types of thresholding
that can be used depending on the requirements of the application.
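A minimal usage sketch of cv2.threshold with these parameters is shown below; the threshold value of 127 and the synthetic input are illustrative choices only:

```python
import cv2
import numpy as np

# Synthetic grayscale image standing in for an MRI slice.
gray = np.zeros((128, 128), dtype=np.uint8)
gray[40:90, 40:90] = 200

# src=gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY:
# pixels above 127 are set to 255 (maxval), all others to 0.
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

print("Threshold used:", ret)   # 127.0
cv2.imwrite("binary_mask.png", binary)
```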
For this step we need to import Keras and other packages that we’re going
to use in building the CNN. Import the following packages:
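The exact import list is not reproduced here; a typical set, assuming the Sequential model and the standard convolution, pooling, flattening, and dense layers used in the following subsections, would be:

```python
# Assumed import set for the layers used below.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
```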
4.6.1 Sequential
• To initialize the neural network, we create an object of the Sequential
class.
• classifier = Sequential()
4.6.2 Pooling
• The Pooling layer in a Convolutional Neural Network is designed to
shrink the size of the convolved feature maps, leading to a reduction in
the number of parameters and the computation required, as shown in
the sketch below.
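A minimal sketch, assuming classifier was initialised as in Section 4.6.1 and a convolutional layer has already been added to it:

```python
from tensorflow.keras.layers import MaxPooling2D

# A 2x2 pooling window halves the height and width of the convolved
# feature maps produced by the preceding convolutional layer.
classifier.add(MaxPooling2D(pool_size=(2, 2)))
```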
Results
such as SVM. The accuracy of the proposed method is higher and the
computation time is lower compared to SVM. The proposed method does not
require a separate feature extraction step, since the feature values are obtained
from the CNN itself, making the process simpler and more efficient. The final
classification result is given as Tumor or Non-Tumor brain based on the
probability score value.
2. Post-processing: The raw predictions made by the model may not always
perfectly align with the ground-truth labels, especially in the case of
segmentation maps. To improve the accuracy of the predictions, post-
processing steps such as morphological operations, connected component
analysis, and false positive reduction techniques can be applied to refine
the output.
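The sketch below illustrates two of the post-processing steps mentioned above (morphological opening and connected-component filtering) on a synthetic binary prediction map; the structuring-element size and area threshold are illustrative assumptions:

```python
import cv2
import numpy as np

# Synthetic binary prediction map: one large "tumor" region plus a few
# small false-positive specks standing in for a raw model output.
mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:90, 40:90] = 255
mask[5, 5] = 255
mask[120, 10] = 255

# Morphological opening removes small isolated specks.
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Connected-component analysis: keep only sufficiently large regions.
num, labels, stats, _ = cv2.connectedComponentsWithStats(opened)
refined = np.zeros_like(opened)
for i in range(1, num):                  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 50:
        refined[labels == i] = 255

cv2.imwrite("refined_mask.png", refined)
```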
6.1 Conclusion
This work aims to design a high-accuracy, efficient, and low-complexity
automatic brain tumor classification system using a Convolutional Neural
Network (CNN). Traditional methods such as Fuzzy C-Means (FCM) based
segmentation and texture and shape feature extraction with SVM and DNN
based classification were not efficient due to high computation time and low
accuracy. The proposed CNN-based classification reduces computation time
and increases accuracy. The system is implemented in Python and the
ImageNet database is used for classification, with training performed only
on the final layer. Raw pixel values and depth, width, and height features
are extracted from the CNN, and a Gradient Descent based loss function is
used to improve accuracy. The results show a high training accuracy of
97% and a low validation loss.
2. Early detection: Detecting brain tumors at an early stage is critical for
successful treatment. In the future, brain tumor identification projects
will focus on developing algorithms that can detect brain tumors at an
earlier stage, when they are easier to treat.
[1] Khan M Iftekharuddin, Wei Jia, and Ronald Marsh. “Fractal analysis
of tumor in brain MR images”. In: Machine Vision and Applications 13
(2003), pp. 352–362.
[2] Chi-Hoon Lee, Mark Schmidt, Albert Murtha, Aalo Bistritz, Jöerg Sander,
and Russell Greiner. “Segmenting brain tumors with conditional random
fields and support vector machines”. In: Computer Vision for Biomedical
Image Applications: First International Workshop, CVBIA 2005, Beijing,
China, October 21, 2005. Proceedings 1. Springer. 2005, pp. 469–478.
[3] Jason J Corso, Alan Yuille, Nancy L Sicotte, and Arthur W Toga.
“Detection and segmentation of pathological structures by the extended
graph-shifts algorithm”. In: Medical Image Computing and Computer-
Assisted Intervention–MICCAI 2007: 10th International Conference, Bris-
bane, Australia, October 29-November 2, 2007, Proceedings, Part I 10.
Springer. 2007, pp. 985–993.
[4] Dana Cobzas, Neil Birkbeck, Mark Schmidt, Martin Jagersand, and
Albert Murtha. “3D variational brain tumor segmentation using a high
dimensional feature set”. In: 2007 IEEE 11th international conference on
computer vision. IEEE. 2007, pp. 1–8.
[5] Michael Wels, Gustavo Carneiro, Alexander Aplas, Martin Huber, Joachim
Hornegger, and Dorin Comaniciu. “A discriminative model-constrained
graph cuts approach to fully automated pediatric brain tumor segmenta-
tion in 3-D MRI”. In: Lecture Notes in Computer Science 5241 (2008),
p. 67.
[6] Renaud Lopes, P Dubois, Imen Bhouri, Mohamed Hedi Bedoui, Salah
Maouche, and Nacim Betrouni. “Local fractal and multifractal features
for volumic texture characterization”. In: Pattern Recognition 44.8 (2011),
pp. 1690–1697.
[7] Atiq Islam, Khan M Iftekharuddin, Robert J Ogg, Fred H Laningham,
and Bhuvaneswari Sivakumar. “Multifractal modeling, segmentation, pre-
diction, and statistical validation of posterior fossa tumors”. In: Med-
ical Imaging 2008: Computer-Aided Diagnosis. Vol. 6915. SPIE. 2008,
pp. 1036–1047.
[8] Tao Wang, Irene Cheng, Anup Basu, et al. “Fluid vector flow and
applications in brain tumor segmentation”. In: IEEE transactions on
biomedical engineering 56.3 (2009), pp. 781–789.
[9] Michael R Kaus, Simon K Warfield, Arya Nabavi, Peter M Black, Ferenc
A Jolesz, and Ron Kikinis. “Automated segmentation of MR images of
brain tumors”. In: Radiology 218.2 (2001), pp. 586–591.
[10] D Gering, W Grimson, and R Kikinis. “Recognizing deviations from
normalcy for brain tumor segmentation, in Proceedings of International
Conference Medical Image Computation Assist.” In: Intervention (Am-
stelveen, Netherlands) 5 (2005), pp. 508–515.
[11] Christos Davatzikos, Dinggang Shen, Ashraf Mohamed, and Stelios K
Kyriacou. “A framework for predictive modeling of anatomical deforma-
tions”. In: IEEE transactions on medical imaging 20.8 (2001), pp. 836–
843.
[12] Nassir Navab, Joachim Hornegger, William M Wells, and Alejandro
Frangi. Medical Image Computing and Computer-Assisted Intervention–
MICCAI 2015: 18th International Conference, Munich, Germany, October
5-9, 2015, Proceedings, Part III. Vol. 9351. Springer, 2015.
[13] Thomas Leung and Jitendra Malik. “Representing and recognizing the
visual appearance of materials using three-dimensional textons”. In: In-
ternational journal of computer vision 43 (2001), pp. 29–44.
[14] Stefan Bauer, Thomas Fejes, Johannes Slotboom, Roland Wiest, Lutz-P
Nolte, and Mauricio Reyes. “Segmentation of brain tumor images based
on integrated hierarchical classification and regularization”. In: MICCAI
BraTS Workshop. Nice: Miccai Society. Vol. 11. 2012.
[15] Ezequiel Geremia, Bjoern H Menze, Nicholas Ayache, et al. “Spatial
decision forests for glioma segmentation in multi-channel MR images”.
In: MICCAI Challenge on Multimodal Brain Tumor Segmentation 34
(2012), pp. 14–18.
[16] Andac Hamamci and Gozde Unal. “Multimodal brain tumor segmentation
using the tumor-cut method on the BraTS dataset”. In: Proc MICCAI-
BraTS (2012), pp. 19–23.
[17] T Riklin Raviv, K Van Leemput, and Bjoern H Menze. “Multi-modal
brain tumor segmentation via latent atlases”. In: Proceeding MICCAIBRATS
64 (2012).
[18] Apoorva Raghunandan and D R Shilpa. “Design of High-Speed Hybrid
Full Adders using FinFET 18nm Technology”. In: 2019 4th International
Conference on Recent Trends on Electronics, Information, Communication
Technology (RTEICT). 2019, pp. 410–415. doi: 10.1109/RTEICT46194.2019.9016866.
[19] Khan M Iftekharuddin, Mohammad A Islam, Jahangheer Shaik, Carlos
Parra, and Robert Ogg. “Automatic brain tumor detection in MRI:
methodology and statistical validation”. In: Medical Imaging 2005: Image
Processing. Vol. 5747. SPIE. 2005, pp. 2012–2022.
[20] Yoav Freund and Robert E Schapire. “A decision-theoretic generalization
of on-line learning and an application to boosting”. In: Journal of
computer and system sciences 55.1 (1997), pp. 119–139.
[21] Alex P Pentland. “Fractal-based description of natural scenes”. In:
IEEE transactions on pattern analysis and machine intelligence 6 (1984),
pp. 661–674.
[22] Justin M Zook and Khan M Iftekharuddin. “Statistical analysis of fractal-
based brain tumor detection algorithms”. In: Magnetic resonance imaging
23.5 (2005), pp. 671–678.