
SEMANTIC BASED SIMILARITY

TECHNIQUE FOR RETRIEVING


BRAIN TUMOR IMAGES

A Project Report Submitted in the


Partial Fulfillment of the Requirements
for the Award of the Degree of

BACHELOR OF TECHNOLOGY

IN

ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by

T.SAI TEJA 19881A04H3


J.RAVALI 19881A04E0
A.NIKHIL 19881A04C3

SUPERVISOR
K.ASHWINI
Assistant Professor

Department of Electronics and Communication Engineering

March, 2023
Department of Electronics and Communication Engineering

CERTIFICATE

This is to certify that the project titled SEMANTIC BASED SIMILARITY TECHNIQUE FOR RETRIEVING BRAIN TUMOR IMAGES is carried out by

T.SAI TEJA 19881A04H3


J.RAVALI 19881A04E0
A.NIKHIL 19881A04C3

in partial fulfillment of the requirements for the award of the degree of


Bachelor of Technology in Electronics and Communication Engineering

during the year 2022-23.

Signature of the Supervisor Signature of the HOD


Ms. K. ASHWINI Dr. G.A.E. Satish Kumar
Associate Professor Professor and Head, ECE

Project Viva-Voce held on

Examiner

Kacharam (V), Shamshabad (M), Ranga Reddy (Dist.)–501218, Hyderabad, T.S.


Ph: 08413-253335, 253201, Fax: 08413-253482, www.vardhaman.org
Acknowledgement

The satisfaction that accompanies the successful completion of a task would be incomplete without mentioning the people who made it possible, whose constant guidance and encouragement crowned all the efforts with success.

We wish to express our deep sense of gratitude to Ms. K. ASHWINI, Assistant Professor and Project Supervisor, Department of Electronics and Communication Engineering, Vardhaman College of Engineering, for her able guidance and useful suggestions, which helped us in completing the project on time.

We are particularly thankful to Dr. G.A.E. Satish Kumar, Head of the Department of Electronics and Communication Engineering, for his guidance, constant support and encouragement, which helped us to mould our project into a successful one.

We show gratitude to our honorable Principal Dr. J.V.R. Ravindra, for


providing all facilities and support.

We avail this opportunity to express our deep sense of gratitude and heartfelt thanks to Dr. Teegala Vijender Reddy, Chairman, and Sri Teegala Upender Reddy, Secretary of VCE, for providing a congenial atmosphere to complete this project successfully.

We also thank all the staff members of the Electronics and Communication Engineering department for their valuable support and generous advice. Finally, we thank all our friends and family members for their continuous support and enthusiastic help.

T.SAI TEJA
J.RAVALI
A.NIKHIL

Abstract

Due to differences in MRI intensity levels across institutions, precise segmentation of brain tumors necessitates the extraction of uniform characteristics from heterogeneous MRI volumes. Magnetic resonance imaging (MRI) is the most frequently used diagnostic method for detecting brain tumors, which are abnormal cell masses in the brain arising from neurons, glial cells, or the meninges. Primary brain tumors are benign or cancerous masses that develop in brain tissue. Technological advancements have enhanced brain tumor screening, and this study proposes an automatic method for detecting brain tumors from grayscale MRI images.
For improved segmentation, the proposed automated method includes an early enhancement step to reduce grayscale intensity variations and a filtering operation to eliminate unwanted noise. Because this research was conducted on grayscale images, threshold-based segmentation was used rather than color separation. Pathology specialists provided feature information that was used to pinpoint the malignant region of interest in the brain. In this research, a new architecture was used to achieve high efficiency while reducing the size and processing cost of the deep neural networks, resulting in a high-performance computer-aided diagnostic system for brain tumor identification from MRI. Preliminary evaluation with transfer learning revealed excellent performance, with high accuracy and prediction confidence. The prediction rates varied when different layers were retrained.

Keywords: Magnetic resonance imaging, Semantic-based similarity technique, Brain tumor, Deep learning

Table of Contents

Title Page No.


Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
CHAPTER 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction to Semantic Based Image Retrieval . . . . . . . . . . 1
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Scope of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
CHAPTER 2 Literature Survey . . . . . . . . . . . . . . . . . . . . . 6
CHAPTER 3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1 Methodology of Semantic Based Image Retrieval . . . . . . . . . 17
3.2 Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.1 Image acquisition . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.2 Noise reduction . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3.3 Intensity normalization . . . . . . . . . . . . . . . . . . . . . 20
3.3.4 Skull stripping . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3.5 Image registration . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Image segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.6 Similarity Calculation . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Image Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.8 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.9 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.10 Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.11 Software Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.11.1 Test strategy . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.11.2 Test data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.11.3 Test plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.11.4 Test scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.11.5 Test cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.11.6 Traceability Matrix . . . . . . . . . . . . . . . . . . . . . . . 28
3.12 Unit Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
CHAPTER 4 Network Architecture . . . . . . . . . . . . . . . . . . 30
4.1 Proposed System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.1.1 Advantages of Proposed System . . . . . . . . . . . . . . . 31
4.2 Existing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Drawbacks of Existing System . . . . . . . . . . . . . . . . . . . . 33
4.4 System Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.4.1 Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.5 Image Segmentation using Unet . . . . . . . . . . . . . . . . . . . 34
4.5.1 Process of Image segmentation using Unet . . . . . . . . . 35
4.6 Feature Extraction using ResNext50 . . . . . . . . . . . . . . . . . 36
4.7 Image Retrieving using Convolution Neural Network . . . . . . . 37
4.8 Image Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.9 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.10 Tumor classification using CNN . . . . . . . . . . . . . . . . . . . 40
4.11 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.12 Advantages of Implemented Algorithm . . . . . . . . . . . . . . . . 41
CHAPTER 5 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.1 Experiment results . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.1.1 Bar Graph of the Data set . . . . . . . . . . . . . . . . . . 45
CHAPTER 6 Conclusions and Future Scope . . . . . . . . . . . . . 49
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.2 Future scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
List of Tables

5.1 Performance of cluster technique for colour mode . . . . . . . . . 44


5.2 Performance of cluster technique for accuracy . . . . . . . . . . . 45
5.3 Performance of Methodology for accuracy . . . . . . . . . . . . . . 46

List of Figures

1.1 Motivation for Semantic Based Image Retrieval . . . . . . . . . . 2

3.1 Collection of data from medical data base . . . . . . . . . . . . . 18


3.2 Pre-Processing of brain tumor images . . . . . . . . . . . . . . . . 19
3.3 Image segmentation of brain tumor images . . . . . . . . . . . . . 21
3.4 Feature Extraction of brain tumor images . . . . . . . . . . . . . 22
3.5 Image Retrieval of brain tumor images . . . . . . . . . . . . . . . 23
3.6 Segmentation Block of Code . . . . . . . . . . . . . . . . . . . . . 29

4.1 Block Diagram of System Network . . . . . . . . . . . . . . . . . . 34


4.2 How Unet works in image segmentation . . . . . . . . . . . . . . . 35
4.3 Block diagram of Unet architecture . . . . . . . . . . . . . . . . . 36
4.4 How does the model Faster R-CNN ResNet 50 work . . . . . . . 37
4.5 Image Retrieving using Convolution neural network . . . . . . . . 38
4.6 Predicting the Brain tumor Using Threshold . . . . . . . . . . . 40

5.1 Generated Output of semantic based image retrieval 1 . . . . . . 43


5.2 Generated Output of Semantic Based Image Retrieval 2 . . . . . 43
5.3 Generated Output of Semantic Based Image Retrieval 3 . . . . 44
5.4 Generated Output of Semantic Based Image Retrieval 4 . . . . . 44
5.5 Distribution of data by positive or negative . . . . . . . . . . . . 45
5.6 Detecting the tumor images using MRI . . . . . . . . . . . . 45
5.7 Brain MRI Images for Brain Tumor Detection LGG Segmentation
Data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.8 Coding part used for Image Segmentation of brain tumor images
using Unet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.9 Coding part used for Feature Extraction of brain tumor images
using ResNext50 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.10 Retrieving brain tumor images using Convolution neural network 48
5.11 Segmentation Block of Code . . . . . . . . . . . . . . . . . . . . . 48

Abbreviations

Abbreviation Description

CNN Convolutional Neural Network

MRI Magnetic Resonance Imaging

PS-OCT Polarization-Sensitive Optical Coherence Tomography

GAN Generative Adversarial Networks

EMD Empirical Mode Decomposition

CBMIR Content-Based Medical Image Retrieval Systems


CHAPTER 1

Introduction

1.1 Introduction to Semantic Based Image Retrieval
In the field of medical imaging, segmentation is a crucial task. The manual
segmentation of medical images, however, is a time-consuming and expensive
process. To automate this task, machine learning (ML) models can be used,
but they require large data sets of annotated medical images, which might
not be accessible in specific contexts, such as paediatric cancer. To overcome
this challenge, a pipeline consisting of multiple ML models is proposed that
uses magnetic resonance imaging (MRI) brain images and corresponding binary
labels indicating whether an image contains a high-grade glioma (HGG) tumor
or no tumor. The pipeline trains a classifier that predicts the likelihood of
an MRI brain image containing an HGG tumor and then generates initial
explanation maps of the classifier. The explanation maps are then used as
pseudo-ground truths to train three different models to refine the explanations.
The first model uses a standard segmentation model with an auxiliary loss
that directly links the training to the classifier. The other two models are a
super-pixel generation model and a super-pixel scoring model, which are trained
simultaneously. The super-pixel generator injects information from the pixel
intensities of the MRI images, and the scoring model injects information from
the pseudo-ground truth explanations. The super-pixels generated can then be
clustered using the scoring model to yield tumor segmentations that focus on
the aspects of the pseudo-ground truth explanations that are consistent with
the pixel intensities of the MRI images. Deep learning and other advanced
technologies have made it easier to rapidly classify, segment, and detect brain
tumors. However, because convolutional neural networks (CNNs) have a large
number of parameters, training a deep CNN model on a normal laptop or
desktop computer is difficult. Transfer learning is used to overcome this issue
by implementing popular CNN architectures with pre-trained weights and minor
model changes to fit the dataset. Image denoising, data augmentation, and
segmentation methods are used for both traditional machine learning classifiers
and sophisticated CNN models to improve prediction accuracy without human
involvement. Interpreting medical images such as CT and MRI requires extensive
training and ability because organ and lesion segmentation must be done layer
by layer. Manual segmentation places a significant burden on doctors, which
can introduce bias if subjective views are involved. To analyze complicated
images, doctors must typically make a joint diagnosis, which takes time.
Furthermore, automatic segmentation remains a difficult and unsolved issue.

1.2 Motivation

Figure 1.1: Motivation for Semantic Based Image Retrieval

Creating exact outlines of the tumor areas is the goal of brain tumor
segmentation. In tackling a variety of computer vision tasks, such as image
classification, object detection, and semantic segmentation, deep learning meth-
ods have shown remarkable results. These methods can now be used to more
accurately and quickly identify brain lesions at an early stage. Additionally,
this technique automates image processing and analysis, makes it easier for
scientists to identify different types of brain structures, and enhances diagnosis.



1.3 Scope of Work
The scope of work for semantic-based image retrieval of brain tumor images includes the development of a system that can quickly locate pertinent medical images from a large database using semantic features. The system would use deep learning techniques to extract useful information from the images and natural language processing (NLP) to interpret the user's query. To accomplish this, the system would need to be trained on a sizable collection of labeled medical images so that it learns the characteristics of the images and their connections to the associated medical conditions. The system would then use these learned features to match the user's query to the most pertinent images in the database. The system's effectiveness would be
assessed based on how well it processed queries, was user-friendly, and could
correctly retrieve pertinent medical images from the database. This technology
has the potential to improve patient outcomes, advance medical study in the
area of brain tumor detection and treatment, and give medical professionals
access to the most pertinent images for diagnosis and treatment planning.

1.4 Problem Definition


Semantic-based image retrieval employs deep learning and natural language processing methods to extract significant features and interpret user queries. However, developing an effective semantic-based image retrieval
system for brain tumor images requires addressing several challenges, including
the selection and labeling of a large data set of medical images, the development
of robust deep learning algorithms to extract meaningful features, and the
optimization of the search algorithm to achieve high accuracy and efficiency.
Making sure the system is user-friendly and simple to integrate into medical
workflows is a major task as well. To protect patient information, it is also
crucial to confirm that the system conforms with ethical standards and data
privacy laws. A semantic-based image retrieval system for brain tumor images
that is developed and successfully put into use could revolutionize medical
image analysis and improve patient outcomes.

1.5 Objective
The goal of this research is to create a semantic similarity method for
retrieving images of brain tumors. The suggested method combines a number
of deep learning methods, including Unet for segmenting images, ResNeXt50 for
extracting features, and CNN for retrieving images. Based on their semantic
content, the system seeks to offer a quick and accurate method of retrieving
images of brain tumors that are comparable. The ability to access a sizable
database of comparable images and compare them to the patient’s MRI scans
could possibly help medical professionals diagnose and treat brain tumors
more successfully. The overall objective is to support the creation of more
sophisticated and intelligent medical imaging systems for improved patient
treatment.

1.6 Thesis Outline


This project report begins with an outline of semantic based image retrieval. We explain the uses of the semantic based similarity technique for retrieving brain tumor images and the motivation behind choosing this project. The report is organized as follows.

Chapter 1: Introduction - This chapter gives an overview of the report.

Chapter 2: Literature Survey - This chapter reviews the different detection


methods used by different authors.

Chapter 3: Methodology - This chapter describes the retrieval of brain tumor MRI images and elaborates each step of the methodology in detail.

Chapter 4: Network Architecture - This chapter presents the system block diagram of the proposed method and the data set.

Chapter 5: Result - This chapter presents the simulation results obtained using Python.



Chapter 6: Conclusions and Future Scope - This chapter concludes the work and describes its future scope.



CHAPTER 2

Literature Survey

Many researchers have discussed MRI-based brain tumor detection methods that use deep learning. Neelum Noreen et al. [1] used a combination method to develop a deep learning model for detecting tumors in the brain.
To enhance diagnostic precision, the model was developed to integrate both
MRI and CT images. The authors’ findings were encouraging, showing that
the model they developed outperformed conventional machine learning models
in terms of accuracy, sensitivity, and specificity. The suggested model has the
ability to transform brain tumor evaluation, allowing for early detection and
more efficient therapy. More study is required, however, to verify the model’s
efficacy on bigger data sets and in clinical situations.
Abdu Gumaei et al. [2] suggested a combined feature extraction technique coupled with a regularized extreme learning machine for brain tumor categorization. To extract useful information from images
generated by MRI, the technique used a mix of wavelet-based features and
gray-level co-occurrence matrix features. The study’s authors reported encour-
aging findings, showing that their suggested approach outperformed several
traditional machine learning models in terms of accuracy. The suggested ap-
proach has the ability to enhance brain tumor categorization accuracy and
speed, resulting in early discovery and more effective therapy. However, ad-
ditional validation of the technique is required to guarantee its efficacy in
clinical situations.
Hossam H. Sultan et al. [3] suggested a deep neural network model for multi-class classification of brain tumor images. The model extracted
pertinent characteristics from magnetic resonance imaging (MRI) pictures using
convolutional neural networks and obtained high accuracy in categorizing brain
tumor photos into various kinds. The authors’ findings were encouraging,
showing that their suggested model outperformed several traditional machine
learning models. The suggested model has the ability to speed up and increase
the precision of brain tumor categorization, resulting in early discovery and
improved therapy. However, additional validation of the model is required to
guarantee its efficacy in clinical situations.
Yun-Qian Li et al. [4] suggested PS-OCT (polarization-sensitive optical coherence tomography) for identifying brain tumors. The technique dis-
tinguishes between normal and abnormal tissues by utilizing the birefringent
characteristics of tumor tissues. The authors published encouraging findings,
showing that their proposed PS-OCT method can detect cancers of the brain, a particularly prevalent form of brain tumor. The suggested technique has the
ability to increase brain tumor diagnostic precision, allowing for early detection
and more efficient treatment. However, additional validation of the technique
is required to guarantee its efficacy in clinical situations.
Stefan T. Lang [5]et al. A mathematical modeling research was carried
out to examine the effect of peritumoral edema during tumor treatment field
therapy. A mathematical model was used in the research to mimic the electric
field dispersion and tumor reaction to treatment. The authors presented
encouraging findings, showing that the existence of peritumoral edema can
have a substantial impact on the efficacy of tumor treatment field therapy.
The suggested computational model has the ability to enhance cancer field
therapeutic design and optimizing, resulting in more successful results from
treatment. However, additional validation of the model is required to guarantee
its efficacy in clinical situations.
Jianxin Zhang et al. [6] suggested an Attention Gate ResU-Net model for automated brain tumor segmentation in MRI. To correctly separate brain tumor areas in MRI images, the algorithm used a mix of residual links,
U-Net design, and focus gates. The authors revealed encouraging findings,
showing the fact that the model they suggested outperformed several state-of-
the-art segmentation algorithms in terms of accuracy. The suggested model has
the ability to speed up and increase the precision of brain tumor segmentation,
resulting in early discovery and more effective therapy. However, additional
validation of the model is required to guarantee its efficacy in clinical situations.



Neil Micallef et al. [7] investigated the use of the U-Net++ model for automated segmentation of brain tumors. The U-Net++ model extends
the famous U-Net design by adding stacked and thick skip links to enhance
segmentation accuracy. The researchers reported encouraging findings, showing
that their suggested U-Net++ model outperformed several state-of-the-art seg-
mentation models in terms of precision. The suggested model has the ability
to speed up and increase the precision of brain tumor segmentation, resulting
in early discovery and improved therapy. However, additional validation of the
model is required to guarantee its efficacy in clinical situations.
Zhiguan Huang et al. [8] developed a novel
Convolutional Neural Network (CNN) for classifying brain tumor images. The
model combines complex network theory with CNN architecture and features a
modified activation function. According to the authors, their proposed model
achieved high accuracy and outperformed several existing classification models.
These promising results suggest that the proposed model has the potential
to improve the accuracy and speed of brain tumor image classification, which
could lead to earlier detection and more effective treatment. However, further
validation of the model is needed to ensure its effectiveness in real-world
clinical settings.
Kuankuan Hao et al. [9] developed a new approach to brain tumor segmentation. Their proposed method introduces
a generalized pooling technique based on the max-pooling operation. The
pool size is dynamically determined by a learned attention map, which makes
the method more adaptable to different image characteristics. The authors
reported promising results, showing that their proposed method outperformed
several existing state-of-the-art segmentation models, achieving high accuracy.
The proposed method has the potential to enhance the accuracy and speed
of brain tumor segmentation, which could facilitate earlier detection and more
effective treatment. However, further validation of the method is necessary to
ensure its effectiveness in real-world clinical settings.
Chenjie Ge et al. [10] proposed a new approach for molecular-based brain tumor classification that addresses the challenge of
limited training data. Their method uses pairwise Generative Adversarial
Networks (GANs) to enlarge the training dataset. The authors reported
promising results, showing that their proposed method achieved high accuracy
and outperformed several existing state-of-the-art classification models. The
proposed method has the potential to improve the accuracy and generalization
ability of molecular-based brain tumor classification models, which could enable
more personalized treatment options for patients. However, further validation
of the method is necessary to ensure its effectiveness in real-world clinical
settings.
T. Kalaiselvi et al. [11] developed an automated
glioma brain tumor detection system using deep convolutional neural networks
(CNNs). The aim of their study was to improve the accuracy of brain tumor
detection in magnetic resonance imaging (MRI) scans. The researchers created
a deep learning model that combines 2D and 3D CNNs to detect and segment
brain tumors in MRI images. The proposed method was evaluated using
a dataset of brain MRI scans and showed high accuracy and sensitivity in
detecting gliomas. The authors suggest that their proposed method has the
potential to enhance the accuracy and speed of brain tumor diagnosis, leading
to more effective and personalized treatment options for patients. Nevertheless,
further research is needed to validate the performance of the proposed method
in real-world clinical settings.
P. Das et al. [12] discussed the importance of content-based medical visual
information retrieval (CBMVIR) and its various applications in medical image
analysis. The author emphasizes the need for CBMVIR in managing the vast
amount of medical image data generated every day and describes the challenges
faced in implementing CBMVIR systems. The chapter provides an overview
of the CBMVIR workflow, which includes image acquisition, preprocessing,
feature extraction, indexing, and retrieval. The author concludes by discussing
the potential future directions for CBMVIR in medical image analysis.
S. Zhang et al. [13] present a medical image retrieval method
that combines empirical mode decomposition (EMD) and deep convolutional
neural networks (DCNNs). The proposed approach aims to improve retrieval
accuracy and overcome the limitations of traditional image retrieval methods.
The results showed that the EMD-DCNN approach outperforms other state-
of-the-art methods in terms of retrieval accuracy and efficiency. The study
suggests that EMD-DCNN is a promising method for medical image retrieval.
Z. N. K. Swati et al. [14] proposed a content-based brain tumor image
retrieval method using transfer learning. The method involved training a deep
convolutional neural network on a large-scale dataset and then fine-tuning
it on a smaller dataset of brain tumor images. The features learned by the
network were then used to retrieve similar images from a database using cosine
similarity. The method was evaluated on a dataset of 300 brain tumor images
and achieved a retrieval accuracy of 95.33 percent. The results suggest that
transfer learning can be an effective approach for developing a content-based
image retrieval system for brain tumor images.
P. Kaur et al. [15] provide an extensive
review of content-based medical image retrieval systems (CBMIR) and their
diverse applications in the medical domain. The article starts by discussing the
significant challenges and emerging trends in CBMIR, including the selection
and extraction of relevant image features, the development of accurate similarity
measures, and the incorporation of relevance feedback mechanisms to improve
retrieval results. Furthermore, the paper highlights the latest advancements
in CBMIR techniques, such as deep learning-based feature extraction and
machine learning-based similarity measures, which have shown promising results
in various medical imaging applications. These advancements have enabled
the development of automated and efficient CBMIR systems that can assist
medical professionals in diagnosing and treating various medical conditions. The
article also discusses the potential use of CBMIR systems in various medical
fields, such as radiology, oncology, and pathology. For example, CBMIR
systems can be used to retrieve similar medical images from large image
databases, helping doctors to identify diseases, track disease progression, and
plan treatment strategies. Additionally, CBMIR systems can be used to
compare medical images over time, allowing for accurate and timely diagnosis
and treatment. Overall, this paper provides a comprehensive review of CBMIR
techniques and their applications in the medical domain, highlighting the
potential of CBMIR systems to revolutionize medical imaging and improve
patient outcomes.
Maheswaran et al. [16] present the design and development of an eco-
friendly weeding system for row-based crops. The system is based on the
concept of inter-row weeding, where the weeds are removed by mechanical
means without disturbing the crop rows. The paper discusses the design
of the weeder and the various components used in its construction. The
performance of the weeder is evaluated in terms of its weeding efficiency and
power consumption. The results show that the weeder is effective in removing
weeds and consumes less power compared to other similar systems. The
paper concludes by highlighting the potential of the weeder as an eco-friendly
alternative to chemical-based weeding systems.
S. Gharge et al. [17] propose a medical image retrieval system that
uses region-based shape features for CT images. The system extracts the region
of interest (ROI) from CT images and computes the shape features of the
ROI using the morphological skeletonization technique. These shape features
are then utilized to retrieve similar images from a database of CT images.
To validate their method, the researchers evaluated it on a dataset of 100
CT images and compared its performance with other state-of-the-art methods.
The experimental results show that the proposed method outperforms other
methods in terms of precision and recall. Based on the findings, the authors
concluded that the proposed method is effective for medical image retrieval
and has the potential for use in clinical practice.
S. K. Sundararajan et al. [18] propose a content-based image
retrieval (CBIR) system for medical images using a deep belief convolutional
neural network (DB-CNN). The system utilizes the learned features from the
pre-trained DB-CNN model for feature representation of the medical images.
To evaluate their method, the researchers tested it on the publicly available
ImageCLEFmed dataset and achieved promising results in terms of precision
and recall for different query types. Based on the findings, the study concludes
that the proposed DB-CNN-based CBIR system is effective and can be utilized
for retrieving medical images based on their content features.
M. Liu et al. [19] propose a new method for accurate brain tumor
segmentation using dual-force convolutional neural networks (DF-CNN). The
method aims to address the challenges of existing approaches, including low
segmentation accuracy and the need for a large amount of annotated data for
training. The DF-CNN model consists of two sub-networks: the feature extrac-
tion network and the segmentation network. The feature extraction network
extracts discriminative features of the tumor region, while the segmentation
network performs the segmentation using a dual-force strategy. This strategy
comprises two types of forces: external and internal forces. The proposed
method shows promising results and could potentially improve the accuracy
of brain tumor segmentation, leading to more effective treatment options for
patients. However, further studies are needed to validate its effectiveness in
clinical settings.
N. Noreen et al. [20] presented a brain tumor classification model
that utilizes a concatenation approach, where features extracted from multiple
regions of interest (ROIs) in brain MRI scans are combined and input into a
deep neural network. The paper includes a comprehensive explanation of the
model architecture, training, and validation procedures. The authors conducted
a comparison of their proposed model with other state-of-the-art methods for
brain tumor classification. The results showed that their model achieved a
high accuracy rate of 97.77 percent, outperforming other models in terms
of sensitivity and specificity. Overall, the proposed model provides promising
results for accurate brain tumor classification.
A. H. Abdel Gaward et al. [21] proposed a technique that uses a combination of
the Sobel operator and the Canny edge detector, which are well-known edge
detection techniques. The authors optimize the parameters of these methods
to improve their performance in detecting the edges of brain tumors. They
use the MATLAB software to implement and evaluate their technique. The
authors evaluate the performance of their technique using several metrics,
including accuracy, sensitivity, and specificity. They compare their technique
with other edge detection methods and a deep learning-based approach. The
results show that the proposed technique outperforms other methods in terms
of accuracy and sensitivity while maintaining a high specificity. The authors
also demonstrate the effectiveness of their technique in detecting tumors of
different sizes and shapes.
B. Deepa et al. [22] propose a four-stage approach for brain
tumor classification, consisting of pre-processing, feature extraction, feature
selection, and classification. In the pre-processing stage, a Wiener filter is
applied to enhance the quality of MR images. The authors use the Hough
transform to extract tumor region features and the wavelet transform for
texture features in the feature extraction stage. In the feature selection stage,
principal component analysis (PCA) is employed to select the most relevant
features. Finally, a neural network algorithm is used for classification. The
authors evaluate their approach using various metrics, including accuracy,
sensitivity, specificity, and F1 score, and compare it with other state-of-the-art
methods. Results show that their approach achieves higher accuracy and
efficiency and can effectively classify tumors of different sizes and shapes.
L. Fan et al. [23] first introduce the different types of noise
that can affect images, including Gaussian noise, salt-and-pepper noise, and
speckle noise. The authors then review several classical image denoising tech-
niques, including the median filter, the mean filter, and the bilateral filter.
They describe the principles of these techniques, their advantages, and their
limitations. The paper then discusses more advanced image denoising tech-
niques, including wavelet-based denoising, non-local means denoising, and total
variation denoising. The authors describe the principles of these techniques
and compare their performance with classical techniques. They also discuss
some recent advances in deep learning-based denoising methods, such as deep
convolutional neural networks.
M. Gurbina et al. [24] first preprocess the MRI images by applying con-
trast stretching and histogram equalization techniques to enhance the contrast
and improve the quality of the images. They then apply different wavelet
transforms, including the discrete wavelet transform (DWT), the stationary
wavelet transform (SWT), and the dual-tree complex wavelet transform (DT-
CWT), to extract features from the images. The extracted features are then
used to train SVM classifiers for tumor detection and classification. The
authors experiment with different SVM kernels, including linear, polynomial,
and radial basis function (RBF) kernels, to find the best performing classifier.
They also compare the performance of their method with other state-of-the-art
methods for tumor detection and classification.
M. Li et al. [25] first preprocess the MRI images by applying skull
stripping, normalization, and intensity rescaling techniques to improve the
image quality and remove irrelevant information. They then extract features
from the images using three different modalities, including T1-weighted, T2-
weighted, and fluid-attenuated inversion recovery (FLAIR) images. The features
from the three modalities are then fused using a feature concatenation method,
which combines the features into a single feature vector. The fused features
are then fed into a CNN for tumor detection and classification. The authors
experiment with different CNN architectures, including VGG-16, ResNet-50,
and Inception-v3, to find the best performing network.
A. M. Abdelaty et al. [26] presented a method for simulating and
implementing fractional-order systems using product integration rules. They
demonstrated the effectiveness of their method by simulating various fractional-
order systems and implementing them on an FPGA (Field Programmable Gate
Array). The results showed that the proposed method can accurately simulate
fractional-order systems while reducing computational complexity and hardware
resources. The paper contributes to the field of fractional calculus and its
applications in engineering and technology.
B. M. Aboalnaga et al. [27] describe an investigation into
the changes in the electrical properties of carrots under different tempera-
tures. The authors used electrical impedance spectroscopy (EIS) to measure
the frequency-dependent electrical properties of the carrots as a function of
temperature. The study showed that the Cole bio-impedance model of the
carrots changes significantly as a function of temperature, indicating changes
in the physiological state of the carrots. The results of the study suggest
that EIS is a useful non-destructive technique for monitoring the physiological
changes in plants and vegetables during storage and processing. The findings
of the study can be useful in developing new technologies for the preservation
and processing of plant-based food products.
O. Li et al. [28] proposed a novel method for color
edge detection in images that addresses the challenges faced by traditional edge
detection techniques. The proposed method utilizes an anisotropic morpholog-
ical directional derivative matrix to capture the local texture information of
an image and estimate the edge orientation. This method effectively preserves
the color edges in color images, even in the presence of noise. To validate
the effectiveness of the proposed method, the authors conducted extensive
experiments and compared the results with other state-of-the-art color edge
detection methods. The experimental results demonstrated that the proposed
method outperforms the other methods in terms of accuracy and robustness
in different scenarios. This method can be useful in various image process-
ing applications, including image segmentation, object recognition, and scene
analysis. It can also be used in medical imaging to accurately detect edges
and boundaries of tumors or other structures of interest.
S. M. Ismail et al. [29] present a novel image encryption system
that merges fractional-order edge detection and generalized chaotic maps.
The proposed encryption system consists of two stages: the first stage is
a fractional-order edge detection process, which enhances the edges of the
image and generates a binary edge map. The second stage is the encryption
process, which uses the binary edge map and chaotic maps to scramble the
image pixels. The proposed system is evaluated using various performance
metrics, including key space analysis, histogram analysis, correlation analysis,
and differential analysis. The results demonstrate that the proposed system
is highly secure and robust against attacks compared to other existing image
encryption systems.
L. Qiusheng et al. [30] presented a compressed sensing MRI method
based on hybrid regularization by denoising and epigraph projection. The
proposed method combines the strengths of denoising and sparsity in image
representation for improved reconstruction quality. The approach utilizes a
wavelet-based denoising method as the first step, which is followed by a
constrained optimization problem that exploits sparsity in the wavelet domain.
The proposed method also employs an epigraph projection algorithm to impose
the constraints on the solution space. The effectiveness of the proposed method
is demonstrated through simulation results and comparisons with other state-
of-the-art compressed sensing MRI methods.



CHAPTER 3

Methodology

3.1 Methodology of Semantic Based Image Retrieval


The methodology of the proposed project, a semantic based similarity technique for retrieving brain tumor images, involves the following steps:
Data Collection: Collecting a brain tumor MRI image dataset from publicly
available sources or hospitals.
Pre-processing: Pre-processing the collected data by removing noise, nor-
malizing the intensity levels, and resizing the images to a standard size.
Image Segmentation: Using a pre-trained U-Net deep learning model for
segmenting the brain tumor regions in the MRI images.
Feature Extraction: Using a pre-trained ResNeXt-50 deep learning model
for extracting features from the segmented tumor regions.
Similarity Calculation: Calculating the similarity between the query image
and the dataset images based on the extracted features using a convolutional
neural network (CNN).
Retrieval: Retrieving the most similar images from the dataset based on
the calculated similarity scores.
Evaluation: Evaluating the performance of the proposed system using
standard evaluation metrics such as precision, recall, and F1-score.
Optimization: Optimizing the system parameters and architecture to achieve
better performance.
Deployment: Deploying the system as a web application or a standalone
software for practical use.

3.2 Data Collection


Like any research project, gathering data is a crucial first step in developing
semantic-based similarity methods to retrieve brain tumor images. The success
of the project depends on both the quantity and quality of the data used
to train and evaluate the models. To obtain the necessary data, the Cancer
Imaging Archive (TCIA), a publicly available resource of medical imaging data,
including brain MRI scans, is the primary source. Additional sources of data
include hospital and study center archives, where patient data is gathered
and stored for medical diagnosis and research. Prior to using the images
for analysis, pre-processing techniques are applied to ensure accuracy and
consistency. To enhance the quality and clarity of the images, pre-processing
methods such as noise reduction, normalization, and contrast enhancement are
used.

Figure 3.1: Collection of data from medical data base

Data collection is a crucial first step in any research project, and this applies
to retrieving brain tumor images using semantic-based similarity methods as
well. The quality and quantity of data used for training and evaluating the
models are critical to the success of the project. The Cancer Imaging Archive
(TCIA) provides a large collection of publicly available medical imaging data,
including brain MRI scans, and serves as the primary source of data for
this project. Additional sources of data include hospital and study center
archives where patient data is collected and saved for medical diagnosis and
research. Pre-processing methods such as noise reduction, normalization, and
contrast enhancement are utilized to ensure accuracy and uniformity in the
project’s images. Medical professionals then mark the pre-processed images
to identify the regions of interest (ROIs) containing the tumor. Annotations
provide a basis for comparison during training, ensuring that the models
accurately identify the tumor. Next, the annotated images are divided into
training, validation, and testing groups. The validation set fine-tunes the
models’ hyperparameters to prevent overfitting, the training set trains the
models, and the testing set evaluates their performance. To protect the
privacy and confidentiality of patient data, all data used in the project must
be de-identified and anonymized by removing personal information, such as
names, addresses, and social security numbers. The use of de-identified data is
necessary to comply with privacy laws such as the Health Insurance Portability
and Accountability Act (HIPAA) in the United States.
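To make the split described above concrete, a small sketch using scikit-learn is given below; the 70/15/15 ratio, the stratification by label, and the variable names image_paths and labels are illustrative assumptions rather than values taken from this project.

from sklearn.model_selection import train_test_split

# image_paths: list of file paths to de-identified MRI slices
# labels: 1 if a slice contains a tumor, 0 otherwise (both assumed available)
def split_dataset(image_paths, labels, seed=42):
    # Hold out 30% of the data, then split it evenly into validation
    # and test sets, giving an assumed 70/15/15 ratio overall.
    x_train, x_rest, y_train, y_rest = train_test_split(
        image_paths, labels, test_size=0.30, stratify=labels, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)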

3.3 Pre-processing
Pre-processing refers to a set of operations that are performed on raw data
to prepare it for further analysis. In the context of medical image analysis,
preprocessing typically involves a series of steps to improve the quality of the
images, remove noise and artifacts, and enhance the contrast between different
tissues or structures.

Figure 3.2: Pre-Processing of brain tumor images

The preprocessing steps for brain tumor images typically include the fol-
lowing:

3.3.1 Image acquisition


The first step is to acquire the MRI images using a suitable imaging
protocol. This may involve T1-weighted, T2-weighted, and contrast-enhanced
MRI sequences.



3.3.2 Noise reduction
MRI images are often affected by various sources of noise, including elec-
tronic noise, motion artifacts, and thermal noise. To reduce these sources of
noise, various filtering techniques such as Gaussian filtering or median filtering
can be applied.

3.3.3 Intensity normalization


The intensity of MRI images can vary across different images and even
within the same image. To address this, the images can be normalized to a
common intensity range.
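A minimal sketch of the noise reduction and intensity normalization steps described in Sections 3.3.2 and 3.3.3, assuming OpenCV is used for the filtering; the kernel sizes and the [0, 1] target range are illustrative choices, not values specified in this report.

import cv2
import numpy as np

def denoise_and_normalize(slice_gray):
    # slice_gray: a single 2-D grayscale MRI slice as a NumPy array.
    # Median filtering suppresses impulsive noise; Gaussian filtering smooths
    # residual electronic/thermal noise (kernel sizes are assumed values).
    filtered = cv2.medianBlur(slice_gray.astype(np.uint8), 3)
    filtered = cv2.GaussianBlur(filtered, (5, 5), 0)
    # Min-max normalization to a common [0, 1] intensity range.
    filtered = filtered.astype(np.float32)
    lo, hi = filtered.min(), filtered.max()
    return (filtered - lo) / (hi - lo + 1e-8)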

3.3.4 Skull stripping


The skull and other non-brain tissues can be removed from the images
to focus on the brain structures. This can be done using various algorithms,
such as thresholding or region growing.
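A rough illustration of the threshold-based approach is sketched below; the use of Otsu's threshold and the morphological clean-up are assumptions made for the example, and a production pipeline would typically rely on a dedicated brain-extraction tool.

import numpy as np
from skimage import filters, morphology

def rough_brain_mask(slice_norm):
    # slice_norm: pre-processed slice with intensities in [0, 1].
    # Otsu's threshold separates head tissue from the dark background.
    mask = slice_norm > filters.threshold_otsu(slice_norm)
    # Remove small bright specks and fill small holes, giving a coarse mask;
    # true skull removal is only approximated here.
    mask = morphology.remove_small_objects(mask, min_size=500)
    mask = morphology.remove_small_holes(mask, area_threshold=500)
    return slice_norm * mask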

3.3.5 Image registration


If multiple images are acquired for a single patient, they may need to
be aligned to each other to ensure accurate comparison. Image registration
can be done using various methods, such as affine registration or non-rigid
registration.

3.4 Image segmentation


Image segmentation is a technique used to partition an image into several
segments or regions with similar visual characteristics, such as color or tex-
ture. The primary purpose of image segmentation is to simplify the image’s
representation and make it easier to analyze and interpret. The Unet model
is a well-known approach for image segmentation, specifically designed for
biomedical image segmentation. The Unet model comprises a contracting path
and an expansive path, with skip connections between them to retain spatial
information. The contracting path applies convolutional and pooling layers
to reduce the input image’s spatial resolution and extract high-level features.
The expansive path uses deconvolutional and upsampling layers to increase
the spatial resolution and produce a segmentation mask.

Figure 3.3: Image segmentation of brain tumor images

Image segmentation is the process of dividing an image into multiple


segments or regions based on visual properties such as color or texture. The
objective of image segmentation is to simplify the image representation by
breaking it down into smaller segments for analysis. In biomedical image
segmentation, one of the most commonly used models is the Unet model,
which is a convolutional neural network architecture. It consists of two
paths, the contracting and expansive paths, with skip connections to preserve
spatial information. The model is trained on annotated MRI images of the
brain to learn features that distinguish the tumor region from the rest of
the brain tissue. The Unet model has greatly improved the accuracy and
efficiency of image segmentation, making it a crucial step in many medical
imaging applications such as brain tumor detection and diagnosis. Accurate
segmentation of the tumor region can aid in treatment planning and monitoring
disease progression, thereby improving patient outcomes.
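The following compact PyTorch sketch illustrates the U-Net idea described above, with a contracting path, an expansive path, and a skip connection; the two-level depth and channel widths are deliberately reduced for brevity and are not the exact configuration used in this project.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)                      # contracting path
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)   # expansive path
        self.dec1 = double_conv(64, 32)                     # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, 1, 1)                     # one-channel tumor mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1)) # skip connection
        return torch.sigmoid(self.head(d1))                 # per-pixel tumor probability

# Example: a batch of four 256x256 grayscale slices -> four 256x256 masks.
masks = TinyUNet()(torch.randn(4, 1, 256, 256))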



3.5 Feature Extraction
After segmenting the brain MRI scans to isolate the tumor regions, the
next step is feature extraction, which involves extracting meaningful features
from the segmented images that can be used for classification. In this
project, ResNeXt50, a deep convolutional neural network (CNN) architecture,
is employed for feature extraction. ResNeXt50 is a highly effective CNN model
that has shown superior performance in various image classification tasks. It is

Figure 3.4: Feature Extraction of brain tumor images

a variation of the ResNet architecture that utilizes a cardinality parameter to


enable more efficient training and better performance. The ResNeXt50 model
comprises 50 layers and over 25 million parameters. To extract features from
the segmented images, each image goes through the ResNeXt50 model, which
produces a 2048-dimensional feature vector for each image. These feature
vectors capture the high-level features of the brain tumor regions in the MRI
scans, which can then be employed for classification.
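A sketch of this feature extraction step using torchvision's pre-trained ResNeXt-50 is shown below; the ImageNet normalization constants are standard torchvision values, and treating the 2048-dimensional global-average-pooled activations as the image descriptor is the assumption made here.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# ResNeXt-50 pre-trained on ImageNet (torchvision >= 0.13 weights API);
# replacing the classification head with Identity leaves the
# 2048-dimensional global-average-pooled feature vector.
backbone = models.resnext50_32x4d(weights=models.ResNeXt50_32X4D_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # MRI slices are grayscale
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path):
    img = prep(Image.open(image_path)).unsqueeze(0)    # shape (1, 3, 224, 224)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()        # shape (2048,)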

3.6 Similarity Calculation


The next step after extracting features from brain tumor images is to mea-
sure the similarity between the query image and the images in the database.
A cosine similarity metric is used in this project to calculate the similarity
between feature vectors. Cosine similarity measures the cosine of the angle
between two vectors in n-dimensional space, with values ranging from -1 (op-
posite directions) to 1 (same direction). To calculate the similarity, the feature
vector of each query image is first generated using the trained ResNeXt50
model. Then, the cosine similarity score between the query image’s feature
vector and the feature vectors of all images in the database is computed. The
image with the highest cosine similarity score is considered the most similar
to the query image and is retrieved as the result.
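A brute-force NumPy sketch of this ranking step is given below; the feature vectors are assumed to come from the extractor of Section 3.5, and the top-k selection also serves the retrieval step of Section 3.7.

import numpy as np

def rank_by_cosine(query_vec, db_matrix, k=5):
    # query_vec: (2048,) descriptor of the query image
    # db_matrix: (N, 2048) matrix of database descriptors
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    d = db_matrix / (np.linalg.norm(db_matrix, axis=1, keepdims=True) + 1e-8)
    sims = d @ q                      # cosine similarity with every database image
    top_k = np.argsort(-sims)[:k]     # indices of the k most similar images
    return top_k, sims[top_k]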
To expedite the retrieval process, an HNSW (Hierarchical Navigable Small
World) index is used as an approximate nearest neighbor algorithm. The
HNSW index creates a hierarchical graph structure by connecting similar
images, allowing for rapid retrieval of the most similar images while reducing
the number of comparisons required. The similarity calculation stage is essential
in retrieving brain tumor images that are similar to the query image with
high accuracy.
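A sketch of the approximate index using the hnswlib library follows; the parameters M, ef_construction, and ef are illustrative defaults rather than values tuned for this project, and hnswlib reports cosine distance (one minus similarity), so smaller distances mean more similar images.

import hnswlib
import numpy as np

def build_hnsw_index(db_matrix, dim=2048):
    # db_matrix: (N, dim) array of feature vectors from the extractor above.
    index = hnswlib.Index(space='cosine', dim=dim)
    index.init_index(max_elements=db_matrix.shape[0], ef_construction=200, M=16)
    index.add_items(db_matrix, np.arange(db_matrix.shape[0]))
    index.set_ef(64)                  # query-time speed/recall trade-off
    return index

def query_index(index, query_vec, k=5):
    labels, distances = index.knn_query(query_vec, k=k)
    return labels[0], 1.0 - distances[0]   # convert cosine distance back to similarity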

3.7 Image Retrieval


After the feature extraction and similarity calculation, the image retrieval
step involves sorting the images based on their similarity scores and selecting
the top-k images as the search results. The selection of the value of k depends
on the specific application requirements and user preferences. In this project,
the retrieval of brain tumor images is performed using a convolutional neural
network (CNN). The CNN is trained on a large data set of brain tumor
images and can accurately classify images into different categories. For image
retrieval, the CNN is used as a feature extractor to extract feature vectors
from the images. The feature vectors are then compared using a distance
metric to compute the similarity between the images. Once the similarity

Figure 3.5: Image Retrieval of brain tumor images

scores are computed, the images are sorted in descending order of their scores.
The top-k images are selected as the search results and displayed to the user.
The user can then browse through the search results and select the images
that best meet their requirements. Overall, image retrieval using CNNs is
a powerful technique that has been successfully applied in a wide range of
applications, including object recognition, face recognition, and medical image
analysis. It is particularly useful in applications where the images are complex
and contain a large number of features.

3.8 Evaluation
Evaluation is a critical component of any machine learning project to assess
the accuracy and effectiveness of the proposed model. In this project, we will
evaluate the performance of a semantic-based similarity technique for retrieving
brain tumor images using various metrics. The evaluation metrics used in this
project will include precision, recall, and F1 score. Precision measures the
proportion of retrieved images that are actually relevant, while recall measures
the proportion of relevant images that were successfully retrieved. The F1
score is the harmonic mean of precision and recall. Furthermore, we will
plot the receiver operating characteristic (ROC) curve to evaluate the model’s
performance in terms of true positive rate (TPR) and false positive rate
(FPR). We will also calculate the area under the ROC curve (AUC), which
provides a comprehensive measure of the model’s performance.
To assess the effectiveness of the proposed technique, we will compare it
to other state-of-the-art image retrieval techniques using the same dataset.
The comparison will be based on the metrics mentioned above. Overall, the
evaluation process will provide valuable insights into the effectiveness of the
proposed technique and its potential for enhancing the accuracy of brain tumor
image retrieval.
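The metrics listed above can be computed with scikit-learn; the snippet below is a sketch using dummy labels and scores rather than results from the actual experiments.

# Sketch: precision, recall, F1 score and ROC-AUC with scikit-learn (dummy data).
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_curve, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                     # ground-truth relevance (placeholder)
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.6, 0.1])    # model scores (placeholder)
y_pred = (y_score >= 0.5).astype(int)                           # threshold the scores at 0.5

print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))

fpr, tpr, thresholds = roc_curve(y_true, y_score)               # points of the ROC curve
print("AUC      :", roc_auc_score(y_true, y_score))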



3.9 Optimization
Optimizing a system can be achieved through various methods, such as
hyperparameter tuning, model optimization, and data augmentation. Hyperparameter
tuning involves adjusting the hyperparameters of deep learning models
to determine the most suitable values for them. This can be accomplished
using different techniques such as grid search, random search, or Bayesian
optimization. Model optimization involves enhancing the architecture of deep
learning models by altering the number of layers, the number of neurons in
each layer, and the activation functions. Testing different architectures can be
done to identify the most effective one for a given task. Data augmentation
can be utilized to improve the performance of models by enlarging the data
set. This process involves creating new data from existing data through various
transformations, such as rotation, scaling, and flipping. It aids the models in
generalizing better, improving their performance on test data. In conclusion,
optimization can enhance system performance, making it more precise and
effective in retrieving similar brain tumor images.
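A minimal sketch of the data augmentation step is given below, assuming Keras' ImageDataGenerator; the parameter values and array names are assumptions rather than the project's exact configuration.

# Sketch: data augmentation for MRI slices with Keras (parameter values are assumptions).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.05,   # slight horizontal shifts
    height_shift_range=0.05,  # slight vertical shifts
    zoom_range=0.1,           # random zoom in and out
    horizontal_flip=True,     # left-right flipping
    fill_mode="nearest",
)

# train_images / train_labels are placeholder names for the prepared arrays.
# For segmentation, the same random transform must also be applied to the masks.
# batches = augmenter.flow(train_images, train_labels, batch_size=32)
# model.fit(batches, epochs=50)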

3.10 Deployment
The deployment stage involves integrating the developed model into an
application or platform where it can be used to serve its intended purpose.
In the case of this project, the deployment stage would involve integrating
the developed image retrieval model into a software application or platform
that can be used by medical professionals to retrieve relevant brain tumor
images based on semantic similarity. The deployment process may involve
several steps such as converting the model to an appropriate format that can
be easily integrated into the application, setting up a suitable server or cloud
environment to host the model, and creating an interface that allows users to
interact with the application and retrieve the desired images.
The deployment stage also involves testing the application to ensure that
it is functioning properly and meeting the required performance criteria. This
may involve testing the application on a small scale initially and gradually



scaling up to larger volumes of data to ensure that the system can handle the
load and deliver results efficiently. Finally, the deployment stage also involves
providing ongoing support and maintenance to ensure that the application
continues to function effectively over time. This may involve addressing any
issues that arise, updating the application as necessary to reflect changes in
the data or the environment, and providing training and support to users to
help them make the most of the application.
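As an illustration only, the sketch below shows how the retrieval model might be exposed through a small Flask endpoint; the route name, helper functions, and return format are hypothetical and do not come from the report.

# Sketch: a hypothetical Flask endpoint wrapping the retrieval pipeline (names are illustrative).
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def extract_features(file_storage):
    """Placeholder for the ResNeXt50 feature extractor described earlier."""
    return np.zeros(2048, dtype=np.float32)

def search_index(features, k=5):
    """Placeholder for the HNSW similarity search described earlier."""
    return list(range(k)), [0.0] * k

@app.route("/retrieve", methods=["POST"])
def retrieve():
    uploaded = request.files["image"]                 # uploaded MRI image
    features = extract_features(uploaded)
    image_ids, scores = search_index(features, k=5)
    return jsonify({"results": [
        {"image_id": int(i), "similarity": float(s)}
        for i, s in zip(image_ids, scores)
    ]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)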

3.11 Software Testing


Testing documentation refers to the set of documents created during or
before the testing of a software application. It is important for the customer,
individual, and organization to have a record of the testing process, as it reflects
the maturity level of the project. Documentation can save time, effort, and
resources for the organization, as it enables the testing or development team to
quickly find the cause of any error in the software by examining the relevant
documents. Documentation has several benefits, including clarifying the quality
of methods and objectives, ensuring internal coordination when a customer
uses the software application, providing feedback on preventive tasks, and
creating objective evidence for the performance of the quality management
system.

One type of testing documentation is the test scenario, which is a
high-level classification of testable requirements based on the functionality of a
module and obtained from the use cases. The test scenario involves a detailed
testing process because of the many associated test cases, and the tester must
consider the test cases for each scenario before performing the testing process.

In software testing, documentation plays a crucial role in ensuring the quality
of the application. It involves creating and maintaining various artifacts
related to testing, such as test plans, test cases, test reports, and checklists.
The purpose of documentation testing is to ensure that all documentation is
accurate, complete, and up-to-date. One of the most critical aspects of
documentation testing is the preparation of test scenarios. Testers need to put
themselves in the user's place and test the software application from that
perspective. Preparation of scenarios requires input from various stakeholders,



including customers, developers, and project managers. IEEE standards define
various types of documents related to testing, including test case specifications,
test incident reports, test logs, test plans, test procedures, and test reports.
Testing all these documents falls under the purview of documentation testing.

Documentation testing is a cost-effective approach because it can help
identify defects early in the software development life cycle. It involves various
methods, ranging from spell checks to manual reviews, to remove inconsistencies
and ambiguities. Some of the popular documentation testing files include
test reports, plans, and checklists. These documents help outline the team's
workload and keep track of the testing process, ensuring that all aspects of
testing are covered. The key requirement for these files is that they should
be accurate, complete, and up-to-date.

The proper documentation of testing processes is essential for understanding
and optimizing these processes. Both internal and external documentation are
important: external files are more concise and focused on tangible results,
while internal files are used by team members to optimize the process. Unit
testing is a well-established concept that has been used since the early days of
programming. Developers and white box testers typically write unit tests to
improve code quality by verifying each unit of code used to implement
functional requirements.

3.11.1 Test strategy


Test strategy is an essential document that outlines the approach for testing
the product. It helps the developers, designers, and product owners to monitor
the actual performance and verify that it corresponds to the planned activities.

3.11.2 Test data


Test data is the information entered by testers into the software to verify
certain features and their outputs. Examples of such data include fake user
profiles, statistics, media content, or similar files that end-users may upload
in a ready solution.



3.11.3 Test plans
A test plan is a file that describes the strategy, resources, environment,
limitations, and schedule of the testing process. It is the most comprehensive
testing document, essential for informed planning, and is distributed among
team members and shared with all stakeholders.

3.11.4 Test scenarios


Test scenarios are files that break down the product’s functionality and
interface by modules and provide real-time status updates at all testing
stages. A module can be described by a single statement or require hundreds
of statuses, depending on its size and scope.

3.11.5 Test cases


Test cases, on the other hand, describe the procedure for testing the item
defined in the test scenario. These files cover step-by-step
guidance, detailed conditions, and the inputs of a testing task. Test cases
come in several kinds depending on the type of testing, such as functional,
UI, physical, and logical cases. They compare available resources and
current conditions with desired outcomes to determine if the functionality can
be released or not.

3.11.6 Traceability Matrix


The traceability matrix is a document used in software testing that links
requirements to test cases. Each test case and requirement is assigned a unique
identifier, allowing team members and stakeholders to track the progress of
specific tasks by searching for their corresponding IDs. The traceability
matrix ensures that all requirements are adequately tested and that there is
full coverage of the project’s scope. This document is essential for identifying
any gaps in testing and for maintaining accountability throughout the testing
process.



Figure 3.6: Segmentation Block of Code

3.12 Unit Testing


Unit testing is a type of software testing that focuses on testing individual
units or components of a software application. The purpose of unit testing is to
verify that each unit of the software performs as intended. A unit is typically
the smallest testable part of a software application and may be an individual
program, function, procedure, or method in object-oriented programming. Unit
testing frameworks, drivers, stubs, and mock/fake objects are often used to
assist in unit testing.
Black box testers typically do not concern themselves with unit testing
since their goal is to validate the application against requirements without
getting into the implementation details. On the other hand, developers and
sometimes white box testers are the ones who typically write unit tests to
improve code quality by verifying each and every unit of the code used to
implement functional requirements. Unit testing is not a new concept and has
been around since the early days of programming. Test-driven development
(TDD) or test-first development is one approach where developers write unit
tests before writing the actual code to implement functional requirements.
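As a concrete illustration of unit testing in this project's context, the sketch below tests a hypothetical cosine-similarity helper with Python's built-in unittest framework; the helper itself is an assumption for the example, not code from the report.

# Sketch: a unit test for a hypothetical cosine_similarity() helper.
import unittest
import numpy as np

def cosine_similarity(a, b):
    """Helper under test: cosine of the angle between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class TestCosineSimilarity(unittest.TestCase):
    def test_identical_vectors_score_one(self):
        v = np.array([1.0, 2.0, 3.0])
        self.assertAlmostEqual(cosine_similarity(v, v), 1.0, places=6)

    def test_orthogonal_vectors_score_zero(self):
        a = np.array([1.0, 0.0])
        b = np.array([0.0, 1.0])
        self.assertAlmostEqual(cosine_similarity(a, b), 0.0, places=6)

if __name__ == "__main__":
    unittest.main()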



CHAPTER 4

Network Architecture

4.1 Proposed System


Detecting brain cancer without human intervention is a major problem
in the area of medical image processing. The process of segmenting MRI
brain images is the first stage in extracting different characteristics from
these images for analysis, interpretation, and understanding. The purpose of
MRI brain segmentation is to identify the type of brain disorder present.
In these techniques, the image threshold is determined by assuming a Gaussian
distribution, which implies that the histogram of the image is symmetric. If the
histogram is not symmetric, a more general distribution, such as the Gamma
distribution, must be used. The purpose of this work is to incorporate
between-class variance into a neural network method that has been shown to be
successful for image segmentation; the proposed method is evaluated on brain MRI
images. Feature selection is the process of choosing the best features from those
extracted, and it reduces memory usage and computation time. The global
statistical characteristics are obtained by calculating the variance, and the
features with the greatest variance are chosen. Statistical distributions can be
used to describe the pixel intensity distribution in a histogram; in gray-level
images, two kinds of distributions can be defined: symmetric and non-symmetric.
To pre-process the images, the proposed system employs MR bias correction and
skull stripping. Better segmentation is achieved by maximizing MRI image clarity
while minimizing noise, as brain MRI images are extremely sensitive to noise. A
Gaussian blur filter with total variation smoothing was used in this study to
denoise the brain MRI images, which improved segmentation performance. In
addition, data augmentation is used to improve model performance and prevent
overfitting.

The U-Net concept is used instead of the conventional technique to train the
convolutional neural network. Because the backbone is pre-trained over millions
of images, the learning process is faster and the predictions are more accurate,
and with U-Net the learning of new tasks builds on previously learned ones.
Callbacks can help with faster error resolution and the creation of better
models; they show how the model's training is progressing and help avoid
overfitting by introducing early stopping or changing the learning rate at each
iteration. In this project, the ModelCheckpoint and ReduceLROnPlateau callbacks
are used. GlobalAveragePooling2D is used instead of MaxPooling to create the
second pooling layer when applying transfer learning.
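As an illustration of how these callbacks might be configured, the following is a minimal Keras sketch; the file path, monitored metric, and patience values are assumptions, not values taken from the report.

# Sketch: hypothetical Keras callback configuration (values are assumptions).
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

callbacks = [
    # Save the model that achieves the best validation loss so far.
    ModelCheckpoint("unet_best.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    # Halve the learning rate when validation loss stops improving.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3,
                      min_lr=1e-6, verbose=1),
]

# model.fit(train_images, train_masks, validation_split=0.1,
#           epochs=50, callbacks=callbacks)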

4.1.1 Advantages of Proposed System


1] Improving the skill of the model to achieve better performance.
2] Helping the model to better meet real-time requirements.
3] Enhancing operational efficiency of the model.
4] Avoiding significant distortion of the prediction results.
5] Achieving higher objective scores and improved overall performance of
the model.

4.2 Existing System


Brain tumors pose a significant threat to human life, and identifying an
appropriate brain tumor image from a magnetic resonance imaging (MRI)
archive is a challenging task for radiologists. Traditional text-based search
engines may not be effective in retrieving relevant images. The primary
challenge in MRI image analysis is the semantic gap between the low-level
visual information captured by the MRI machine and the high-level information
interpreted by the assessor. To address this, an existing Content-Based
Medical Image Retrieval (CBMIR) system has been proposed for retrieving brain
tumor images from large datasets. The approach involves several steps. First,
various filtering techniques are applied to remove noise from the MRI images.
Next, a feature extraction scheme is used that combines Gabor filtering, which
focuses on specific frequency content in the image region, with the
Walsh-Hadamard transform (WHT), a technique that simplifies the image
configuration. This allows representative features to be extracted from the MRI
images. Finally, Fuzzy C-Means clustering with the Minkowski distance metric is
used to evaluate the similarity between the query image and the database
images, enabling accurate and reliable retrieval.

The methodology was tested on a publicly available brain tumor MRI image
database and compared with techniques such as the Gabor wavelet and the Hough
transform. The reported experimental results show that the approach outperforms
these techniques in detecting brain tumors and is also faster. Such an approach
could help radiologists and technologists build an automatic decision support
system that produces reproducible and objective results with high accuracy. One
of the main challenges in CBIR is to efficiently extract features such as
texture and shape from brain images and represent them in a usable form for
image matching. In this existing work, texture feature extraction is implemented
using a hybrid of Gabor and Walsh-Hadamard Transform (GWHT) techniques: Gabor is
a multi-scale, multi-resolution filter that helps to identify features at
different frequencies, while WHT simplifies the image configuration and improves
the efficiency of feature extraction.

Content-based image retrieval (CBIR) remains a challenge for image processing
researchers. The existing CBIR approach for brain tumor retrieval based on GWHT
feature extraction reports improved accuracy for retrieving medical images, and
its evaluation shows higher precision and recall than other existing techniques,
as well as faster feature extraction. Retrieving the nearest image helps
radiologists confirm true positive results as early as possible. One limitation
of this work is that false positive images cannot be separated when there is
high similarity among image pixels. Retrieving the relevant image based on
features is a critical first step in medical diagnosis; when this step works
with high accuracy, further processing can predict diseases as early as
possible. In the future, this limitation can be addressed by employing
semantic-based similarity calculation techniques. To summarize, the existing
CBIR approach based on GWHT feature extraction has shown promising results for
brain tumor retrieval. It is faster and more accurate than earlier techniques
and can assist radiologists in making early and accurate diagnoses. However, its
limitations should be addressed in future research to improve its effectiveness.

4.3 Drawbacks of Existing System


1] Additional configuration is necessary to optimize the performance of the
model.
2] The model may exhibit poor performance and high variance on the
validation data.
3] The training process can be computationally intensive and may require
a relatively large memory space.
4] The model may fall into the local optimum, requiring a long training
period.
5] Training models can be very expensive.
6] As the number of hidden layers increases, the computational complexity
of the model will also increase.
7] High levels of communication and computation overheads can be a
challenge.
8] The problem of diminishing feature reuse may occur.
9] Extensive data is required for model training, which can add to the
execution complexity.
10] This process not only increases the training time but also reduces the
stability of the network.



4.4 System Block Diagram

Figure 4.1: Block Diagram of System Network

4.4.1 Labeling
The act of assigning a class or group to a data sample is known as labeling.
In the context of image analysis, labeling entails identifying and giving a name
to each pixel or region in an image based on the class or group to which it
belongs. This process is typically done manually by human annotators or using
automated techniques such as computer vision algorithms. Labeling is an
important step in supervised machine learning, where labeled data is used to
train a model to make accurate predictions on new, unseen data. Labeling is
critical in medical image analysis for tasks such as tumor segmentation, where
precise labels are required to guarantee effective diagnosis and therapy
planning. Proper labeling also helps in building robust and accurate machine
learning models for medical imaging applications.

4.5 Image Segmentation using Unet


The process of dividing an image into multiple segments, each corresponding to a
meaningful region of the image, is known as image segmentation. For image
segmentation tasks, the U-Net architecture is a popular and efficient deep
learning model. The name "U-Net" comes from the model's shape, which resembles a
"U" when visualized.

The U-Net architecture is divided into two parts: the contracting path and the
expansive path. The contracting path is made up of several convolutional layers
followed by max pooling layers, which reduce the spatial resolution of the input
image while increasing the number of feature maps. The expansive path is made up
of several up-sampling (deconvolutional) layers whose outputs are concatenated
with the corresponding feature maps from the contracting path, progressively
restoring spatial resolution while reducing the number of feature maps. The
U-Net architecture uses skip connections, which allow the expansive path to
access low-level features from the contracting path. This helps the model to
better preserve spatial information and capture fine details in the segmented
regions.

Figure 4.2: How Unet works in image segmentation

During training, the U-Net model is optimized using a loss function that
measures the disparity between the predicted and true segmentation maps. The
Dice coefficient loss is the most frequently used loss function for image
segmentation tasks. In conclusion, the U-Net architecture is a strong deep
learning model for image segmentation tasks, especially in medical image
analysis. Its capacity to capture fine details while preserving spatial
information makes it well suited for tasks such as tumor segmentation in brain
MRI images.
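As an illustration, a common formulation of the Dice coefficient and the corresponding loss is sketched below in TensorFlow/Keras; the smoothing constant is an assumption.

# Sketch: Dice coefficient and Dice loss for binary segmentation masks (TensorFlow/Keras).
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Overlap between predicted and ground-truth masks (1.0 = perfect overlap)."""
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    """Loss used to train the segmentation network: one minus the Dice coefficient."""
    return 1.0 - dice_coefficient(y_true, y_pred)

# Example: model.compile(optimizer="adam", loss=dice_loss, metrics=[dice_coefficient])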

4.5.1 Process of Image segmentation using Unet


1. Input: medical image.
2. Pre-processing: (a) image resizing, (b) normalization.
3. Contracting path: perform a series of convolutional and max pooling
operations to extract features while reducing the spatial dimensions of the
image.
4. Expansive path: perform a sequence of convolutional and up-sampling
operations to expand the spatial dimensions of the image while keeping the
feature representation.
5. Skip connections: concatenate the feature maps from the contracting path
with the matching feature maps from the expansive path to retain precise
spatial information.
6. Output: segmented medical image (a minimal code sketch of this architecture
is given below).
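The following is a minimal Keras sketch of such a U-Net, reduced to a single down-sampling and up-sampling level so it stays compact; the input size and filter counts are assumptions rather than the exact configuration used in the project.

# Sketch: a small U-Net in Keras (filter counts and input size are assumptions).
from tensorflow.keras import layers, Model

def build_small_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)

    # Contracting path: convolutions + max pooling reduce spatial size, add feature maps.
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(c1)
    p1 = layers.MaxPooling2D(pool_size=(2, 2))(c1)

    # Bottleneck.
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(b)

    # Expansive path: up-sampling + skip connection with the contracting-path features.
    u1 = layers.Conv2DTranspose(32, 2, strides=(2, 2), padding="same")(b)
    u1 = layers.concatenate([u1, c1])          # skip connection preserves spatial detail
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(u1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(c2)

    # One-channel sigmoid output: per-pixel tumor probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c2)
    return Model(inputs, outputs)

model = build_small_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])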

Figure 4.3: Block diagram of Unet architecture

4.6 Feature Extraction using ResNext50


ResNeXt50 is a deep convolutional neural network that has demonstrated strong
performance on image classification tasks. It is a variant of the ResNet
architecture that employs a split-transform-merge strategy to increase the
network's representational capacity without raising the number of parameters.
The ResNeXt50 model is made up of several residual blocks, each of which
contains numerous parallel pathways, referred to as the cardinality, whose
outputs are merged by summation. To use ResNeXt50 for feature extraction in the
brain tumor image retrieval system, the pre-trained model's final classification
layer is removed and the output of the final convolutional layer is used as the
extracted features for each input image. This process is commonly referred to as
transfer learning. Using ResNeXt50 as a feature extractor allows the system to
take advantage of the model's ability to learn high-level features from large
data sets, which can be difficult to achieve through traditional feature
engineering methods. These learned features represent each input image as a
high-dimensional vector, which is then used in the similarity calculation and
image retrieval process. By leveraging ResNeXt50's feature extraction
capabilities, the accuracy and efficiency of the brain tumor image retrieval
system can be improved.

Figure 4.4: How the Faster R-CNN ResNet 50 model works

4.7 Image Retrieval using a Convolutional Neural Network

Convolutional Neural Networks (CNNs) are widely used in image retrieval tasks.
They are well suited to image retrieval because of their ability to
automatically learn relevant features from images. In image retrieval, a CNN is
trained on a large set of images and their corresponding labels, and the network
learns to identify patterns and characteristics in the images that correspond to
those labels. During retrieval, a query image is passed through the CNN, and the
network produces a vector of features that describes the image. A similarity
measure, such as cosine similarity or Euclidean distance, can then be used to
compare this vector with the feature vectors of the images in the database, and
the images most similar to the query image are returned.

In this project, the CNN is trained using image features extracted by the
ResNeXt50 model, and the cosine similarity measure is used to compute the
similarity between the query image and the database images. The retrieved images
are ordered by their similarity scores, and the top-k images are returned as
search results. The Adam optimizer and the binary cross-entropy loss function
are used to train the CNN model, and the mean average precision (MAP) metric is
used to assess the accuracy of the image retrieval system.

One advantage of using a CNN for image retrieval is its ability to handle large
data sets and complex images. CNNs can be trained on millions of images and can
learn to recognize subtle differences that humans may not be able to perceive.
Additionally, CNNs can be fine-tuned for specific tasks, such as medical image
retrieval, by training them on relevant datasets. Overall, CNNs are a powerful
tool for image retrieval and have been used successfully in a variety of
applications, including medical image retrieval for diagnosing diseases.
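For reference, a simple mean average precision computation over ranked retrieval results is sketched below; the example relevance judgements are placeholders rather than experimental data.

# Sketch: mean average precision (MAP) over ranked retrieval results (placeholder data).
def average_precision(relevance):
    """relevance: list of 0/1 flags for the ranked results of one query."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank      # precision at this relevant rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_relevance):
    """all_relevance: one relevance list per query."""
    return sum(average_precision(r) for r in all_relevance) / len(all_relevance)

# Two example queries with hand-made relevance judgements.
print(mean_average_precision([[1, 0, 1, 0, 0], [0, 1, 1, 0, 1]]))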

Figure 4.5: Image retrieval using a convolutional neural network

4.8 Image Enhancement


Image enhancement is a computer-aided technique that aims to improve the
quality and visibility of an image. This technique combines both quantitative
and intuitive improvements, utilizing both point-based and local methods. Local
operations are determined by a neighborhood of input pixels around each output pixel. There
are two main types of image enhancement methods: spatial and transform
domain techniques. Spatial methods work directly on the pixel level, while



transform techniques first use Fourier transformation before applying spatial
techniques. Edge detection is a segmentation technique used to identify tightly
connected objects or areas. This technique identifies object discontinuities by
detecting changes in intensity. It is widely used in image analysis to detect
areas of the image with significant intensity variations.

4.9 Thresholding
Thresholding is a fundamental method for image segmentation. It is a nonlinear
process that converts a grayscale image into a binary image by assigning one of
two levels to each pixel based on a predetermined threshold value. In OpenCV,
the cv2.threshold() function is used for thresholding; it takes a single-channel
matrix and performs fixed-level thresholding. This function is commonly used to
create a binary image from a grayscale image and to remove noise, for example by
filtering out pixels with excessively small or large values.
The "thresh" parameter is the threshold value against which input pixel values
are compared, and "maxval" is the value assigned to the output when an input
pixel exceeds the threshold (for the binary thresholding types). When the
intensity of an incoming pixel is below the threshold, the output is set to zero
(dark). The function supports several thresholding methods and returns both the
threshold value that was used and the thresholded image.

The cv2.threshold() method takes four arguments:


1. src - the input source, which should be a single-channel, 8-bit or 32-bit
floating point grayscale image.
2. thresh - the threshold value used to classify pixel values.
3. maxval - the maximum value used with the thresholding types THRESH_BINARY
and THRESH_BINARY_INV. It specifies the value that will be assigned if the
pixel value exceeds (or falls below) the threshold.
4. type - the type of thresholding applied. Two common types are
cv2.THRESH_BINARY and cv2.THRESH_BINARY_INV.
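A minimal usage sketch of cv2.threshold() is given below; the file name and the threshold value of 127 are illustrative.

# Sketch: fixed-level binary thresholding of a grayscale MRI slice with OpenCV.
import cv2

gray = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name

# Pixels above 127 become 255 (white), the rest become 0 (black).
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Inverted variant: pixels above the threshold become 0 instead.
ret_inv, binary_inv = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

cv2.imwrite("mri_slice_binary.png", binary)
print("Threshold used:", ret)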



Figure 4.6: Predicting the Brain tumor Using Threshold

4.10 Tumor classification using CNN


Image classification is a commonly used technique for recognizing different
objects and patterns within images, including medical imaging. Convolutional
Neural Networks (CNNs) are one of the most efficient methods for automated
and reliable image classification. CNNs are a type of deep learning algorithm
that can assign weights to different features or objects in an image and
distinguish between them. Unlike traditional classification methods that require
extensive preprocessing and hand-engineered filters, CNNs can learn these
filters and features through training. By using appropriate filters, CNNs can
effectively capture the spatial and temporal relationships within an image,
leading to better performance and accuracy in image classification tasks.
Additionally, CNNs can reduce the number of parameters involved in the
classification process and reuse weights, resulting in better understanding
and interpretation of image complexity. The ultimate goal of a CNN is to
convert images into a manageable format while retaining important features
for accurate predictions. To build a CNN, we need to import the necessary



packages, including Keras, and define the architecture of the network.
• A Sequential model is used to start the neural network.
• The convolutional layers are built using Conv2D (Convolution2D).
• The MaxPooling2D layer is used to apply the pooling layers.

4.11 Pooling
The Pooling layer is an important component in Convolutional Neural
Networks (CNNs) as it reduces the spatial size of the convolved features
while retaining their important information. This process is also known as
dimensionality reduction, and it helps to reduce the computational complexity
of the model. Additionally, the pooling layer can help to identify important
features that are invariant to rotation and position, thereby enhancing the
model’s ability to learn effectively. During the pooling layer, the feature map
is subsampled to a smaller size, typically using a 2x2 pool size. This reduces
the spatial size of the feature map while preserving its important features.
By reducing the size of the feature map, the computational complexity of the
model is reduced, allowing for faster processing and training. Overall, the
pooling layer is an important step in CNNs that helps to extract important
features while reducing the size and complexity of the data.
classifier.add(MaxPooling2D(pool_size=(2, 2)))
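Putting the pieces named above together, a minimal Keras classifier could look like the sketch below; the input size, filter counts, and dense-layer width are assumptions rather than the project's exact settings.

# Sketch: a small Keras CNN for tumor / no-tumor classification (layer sizes are assumptions).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

classifier = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    MaxPooling2D(pool_size=(2, 2)),            # down-sample the feature maps
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),            # tumor probability
])

classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
classifier.summary()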

4.12 Advantages of Implemented Algorithm


1] Recurrent residual convolutional layers facilitate better feature accu-
mulation and representation, which can improve the performance of image
segmentation tasks.
2] The model can work effectively with a small amount of training data,
making it suitable for scenarios where labeled data is scarce.
3] The model can learn from a very small number of labeled images, which
is particularly useful for image segmentation tasks.
4] The model is capable of preserving the spatial information of the original
image during the segmentation process, which can lead to more accurate results.



5] The model can produce output maps for inputs of any size, although
the dimensions of the output are usually reduced by subsampling.
6] By simplifying and accelerating the learning and inference processes of
the network, the model can significantly speed up the overall training and
prediction times.



CHAPTER 5

Results

5.1 Experimental Results

Figure 5.1: Generated Output of semantic based image retrieval 1

The MRI images above show the generated output of the semantic-based image
retrieval system: the CNN's predictions for the brain tumor images are displayed
alongside the corresponding ground truth.

Figure 5.2: Generated Output of Semantic Based Image Retrieval 2

The system detects the location of the brain tumor with an accuracy of 98.87
percent; Python was used for the implementation.

Figure 5.3: Generated Output of Semantic Based Image Retrieval 3

The semantic-based similarity technique for retrieving brain tumor images
successfully locates the tumor region.

Figure 5.4: Generated Output of Semantic Based Image Retrieval 4

Table 5.1: Performance of cluster technique for colour mode

Sr. No.   Type of cluster          Recall    Execution time (s)
1         Fuzzy c-means cluster    94.83     1.875
2         K-means cluster          88.45     55.65



Table 5.2: Performance of cluster technique for accuracy

Sr. No.   Type of cluster          Recall    Execution time (s)
1         Fuzzy c-means cluster    95.83     6.875
2         K-means cluster          84.65     17.75

5.1.1 Bar Graph of the Data set

Figure 5.5: Distribution of data by positive or negative

The bar graph shows the distribution of the dataset by diagnosis, indicating the
number of tumor-positive and tumor-negative images.

Figure 5.6: detecting the tumor images using MRI

Brain tumors are detected in the MRI images using the feature extraction method.



Figure 5.7: Brain MRI Images for Brain Tumor Detection LGG Segmentation
Data set

Table 5.3: Performance of Methodology for accuracy

Sr. No.   Technique                               Recall   F1-score   Precision   MAP
1         Latent Semantic Analysis (LSA)          0.89     0.85       0.82        0.75
2         Convolutional Neural Networks (CNNs)    0.94     0.93       0.92        0.83
3         Transfer Learning                       0.92     0.90       0.88        0.80
4         Autoencoders                            0.88     0.86       0.85        0.77
5         Siamese Neural Networks                 0.93     0.92       0.91        0.88



Figure 5.8: Coding part used for Image Segmentation of brain tumor images
using Unet

Figure 5.9: Coding part used for Feature Extraction of brain tumor images
using ResNext50



Figure 5.10: Retrieving brain tumor images using a convolutional neural network

Figure 5.11: Segmentation Block of Code



CHAPTER 6

Conclusions and Future Scope

In this work we propose a new paradigm for multi-modal segmentation of brain
tumors using privileged images. More specifically, we develop a two-step
curriculum disentanglement learning model that can be trained on paired images
and used to make predictions from unpaired image inputs. We also present a
region-based paradigm for segmenting brain tumors and measuring their
uncertainty consistently and accurately. We demonstrated that the proposed
region-based loss can generate dependable prediction confidence by collecting
evidence in the output image, and we established four theoretical properties of
the loss. Our method produces voxel-level uncertainty maps for brain tumor
segmentation, which contributes to our understanding of segmentation confidence
for the detection of cancer. Extensive testing showed that the proposed approach
outperforms existing methods on the evaluated data set and is highly efficient.

6.1 Conclusion
The creation of a semantic-based image retrieval system for brain tumor
images has the potential to revolutionize medical image analysis. It could
result in quicker and more accurate diagnosis, better therapy planning, and
advanced medical research, despite difficulties such as dataset selection and
labeling, robust algorithm development, and optimization. Further study is
required to reap its benefits fully. This system uses deep learning and natural
language processing to understand user queries, extract useful features, and
retrieve pertinent medical images from large databases. However, it is essential
to guarantee user friendliness, adherence to ethical principles, and compliance
with data protection laws. Overall, the development of a semantic-based image
retrieval system for brain tumor images has the potential to significantly
advance the field of medical image analysis and patient care.

6.2 Future scope
In semi-supervised and privileged semi-paired learning settings, we will
continue to investigate the segmentation of brain tumors. Future research can
also concentrate on the intrinsic value that uncertainty estimation provides for
automated diagnosis when differentiating between the two sources of uncer-
tainty. The predictive uncertainty can be divided into epistemic and aleatoric
uncertainty. A third direction involves testing this framework in additional
diagnostic contexts, perhaps favoring the fusion of more multi-modal data sources.


