
Volume 8, Issue 5, May 2023          International Journal of Innovative Science and Research Technology
ISSN No:-2456-2165

The Segmentation of Oral Cancer MRI Images using Residual Network

Varuna Shree N.1, T N R Kumar2
1Research Scholar, 2Associate Professor
MSRIT, Bangalore

Abstract:- The segmentation of tumours from cancer MRI images is a classic research area of interest in image processing and a tedious task. Manually segmenting the MRI images is very time consuming and liable to errors. Many researchers have investigated deep neural networks for segmenting oral MRI images, as these networks offer higher performance in segmenting oral cancer images automatically. Owing to their gradient-dissemination and complexity issues, CNNs take more time and excess computational power in training on the images. Our aim is to build an automated technique for the segmentation of oral cancer images using residual learning networks (ResNet) to remedy the complications of gradient dissemination caused by CNNs. ResNet attains higher accuracy and trains on the images faster compared to a CNN. To accomplish this, ResNet adds a skip connection parallel to the convolutional neural network layers. The verification of the proposed technique has been carried out on an oral cancer (lip and tongue) image dataset. The results of the proposed technique show a better accuracy, dice coefficient, specificity and precision of 0.92, 0.95, 0.94 and 0.96 respectively, and a computational time of 63 mins.

Keywords:- Oral cancer, Segmentation, DNN, ResNet.

I. INTRODUCTION

At present, oral cancer is considered to be one of the greatest threats to human beings. It is an uncontrollable growth of cells that starts in the mouth and spreads to the lips, tongue and other parts of the face. Squamous cell carcinoma is the most deadly oral cancer, with a life span of approximately five years. Early-stage diagnosis may help in curing the disease at less cost. Segmentation of oral cancer MRI images plays a crucial role in deciding the exact location of the tumour. MRI (magnetic resonance imaging) helps physicians explore the tissues and lesions of the tumours. Segmentation indicates the segregation of salient characteristics from the image background. It is the representation and extraction of significant data by grouping pixels into similarity regions. The grouping of pixels takes place based on the change in their intensity across regions.

Segmenting oral images is a more complicated and tedious task, as the tumours have a complex appearance and structure. They have fuzzy borders and may also have spread into other nearby areas of the lip and mouth. Manually recognizing the boundary of an oral cancer tumour consumes time and is liable to many more errors.

An automatic oral tumour segmentation using MRI images is able to solve this issue, providing early diagnosis and fast recovery. Early treatment can save the patient's life by finding the exact tumour location and type.

In recent years deep learning with neural networks has gained popularity, and most researchers have shown outstanding perseverance, with the highest accuracy, in segmenting images. The convolutional neural network is the most important category of deep neural network and is capable of learning and extracting features from cancer MRI images.

In 2015 (A. A. Pereira et al.) the authors designed a deep learning CNN model containing 3*3 convolution kernels that segments the tumours in cancer MRI images. They employed tiny kernel filters to obtain a deeper CNN and cascaded a few more convolution layers, which gave a response similar to bigger kernels. The segmentation algorithms were proposed to overcome the issue of redundancy by allotting every pixel to a class label. The architecture of the CNN was modified into an FCN (Fully Convolutional Network). This classified each local block of an image using a U-shaped model with expanding and contracting paths. This model required a larger number of training images to achieve precise segmentation and suffered from more computational time.

To overcome the problem of gradient dissemination in the convolutional neural network (CNN) technique and to improve the computational performance, we have applied a residual network (ResNet 18).

In this research paper, Section II contains a literature survey of the research accomplished by other researchers and an outline of the information obtained from their work. Section III contains a brief discussion of the proposed methodology, the proposed model using ResNet 18 for segmentation, along with the simulation set-up and the performance-evaluation parameters. Section IV explains the results and the comparison of ResNet with the other techniques, and the conclusion is given in Section V.

II. LITERATURE SURVEY

In this paper we are mainly concentrating on the segmentation of lesions from oral cancer MRI images. It is very difficult to separate the tumour lesions from the normal regions because of similarity issues.
There are many segmentation algorithms available in image processing, such as the fuzzy C-means approach, histogram equalization, edge-detection approaches, clustering techniques, mathematical reconstruction, nearest neighbour, etc. But in recent years deep neural networks, such as convolutional neural networks and fully connected networks, have been employed for segmentation. Here we survey a few papers, with the different segmentation techniques used by the researchers and the results obtained.

The paper "Deep learning-based pixel wise lesion segmentation on oral cancer squamous cell carcinoma images" explains the semantic segmentation used for separating tumour cells from normal cells. A performance analysis is done for deep-learning-based pixel-wise segmentation of lesions. The authors used the Cancer Genome Atlas dataset to create an annotated oral cancer dataset.

The paper "Brain tumour detection using Convolution Neural network" explains the result of training a large MRI brain tumour dataset using the neural network models ResNet 50, U-Net and FPN. For the performance evaluation they used IoU and the Dice coefficient, obtaining 90% and 0.91 respectively with a loss value of 0.16. The author concludes that better accuracy is achieved using ResNet 50 in comparison with U-Net and FPN for the segmentation of brain tumours.

The paper "Deep learning model for tongue cancer diagnosis using endoscopic images" develops a model to detect tongue cancer from oral endoscopic images. Different types of convolutional network techniques are used to calculate the probability of cancer. The authors compared the model with CNN, VGG-16, VGG-19, MobileNet V1 and MobileNet V2. The proposed methodology showed a sensitivity of 91.7%, specificity of 90.9%, and accuracy of 91.7%.

The paper "Deep learning for automatic segmentation of Oral and Oropharyngeal cancer using narrow banded imaging" tests FCNN techniques for semantically segmenting the oral squamous cells of the oropharynx and oral cavities. An OC dataset is composed of 110 frames and an OP dataset of 116 frames. The FCNNs U-Net 3, U-Net, and ResNet were employed for segmenting neoplastic images. The performance estimation was done on every tested network model and compared with the gold standard. The FCNN segmentation achieved a median value of 0.655 on the OC dataset and 0.760 on the OP dataset. The tested FCNNs show better performance with high variance values, having all values at a minimum over all metric evaluations.

The paper "Brain Tumor Segmentation Using Convolution Neural Networks in MRI Images" explains an automated method for segmenting images using convolutional neural networks (CNN) with tiny 3x3 kernels. The usage of tiny kernels helped in drafting an enhanced architecture which showed a reasonable effect against overfitting by giving smaller weight values. The authors also investigated the use of intensity normalization in the pre-processing stage; this was rare in the segmentation of MRI images using CNNs and, along with data augmentation, was confirmed to be very efficient in segmenting the brain tumour images.

The paper "Brain image segmentation based on FCM clustering algorithm and rough set" employs the fuzzy C-means clustering segmentation algorithm with rough set theory. The author constructs a feature weight-value table from the results obtained from FCM with various clustering values, and, relating to the identical relation of features, the image is split up into several parts. Value reduction is done, acquired from the weight values of each feature, and this acts as the foundation to estimate the dissimilarity among the regions; the correlation of every region is evaluated against the equal relationship determined by the difference in degrees. The equivalent regions are merged together based on this relationship to complete the segmentation. This method showed lower error rates and achieved better accuracy in segmenting the image.

The paper "A novel region-based active contour model via local patch similarity measure for image segmentation" employs a novel region-based method to build an active contour model by measuring the similarities of local patches during segmentation. The authors used a restriction of spatial features on local regions which controlled the amplitude from the centre to the neighbourhood pixels of the images. Firstly, they constructed similarity measures of local patches with the spatial restrictions; this balanced the suppression of noise while preserving the details of the image. Secondly, they constructed a new model combining the patch-similarity measures with a region-based active contour model. Thirdly, they added regularized statistical terms to the object term, ensuring the reliability of the evolution of the curve and its smoothness.

The paper "Deep Convolution Neural Network Using U-Net for Automatic Brain Tumor Segmentation in Multimodal MRI Images" employs a CNN to automatically extract the whole tumour and inner tumour regions from 3D MRI. A modified version of U-Net was employed to segment MRI brain tumour images. Cross-entropy and Dice loss were utilized in the loss function to address the class imbalance. Mean Dice scores of 0.783, 0.868 and 0.805 were achieved for the enhanced tumour, the whole tumour and the inner tumour region respectively.

The paper "Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks" employs a hybrid of three CNN techniques, Seg-UNet, Res-SegNet and U-SegNet, for automatically segmenting MRI images. This model inherits the salient features of U-Net, SegNet and the residual network for semantic segmentation. The hybrid technique was able to solve the issue of tiny tumours that vanish during down-sampling, because of its skip connections. The three models achieved accuracies of 93.3%, 91.6% and 93.1% respectively. These combined architectures were composed of too many layers and variables to train, so a longer period was required for training. The trained system automatically segments the images in a few seconds.

III. PROPOSED METHODOLOGY

The methodology applies a residual network for segmentation. Here we resize the oral cancer MRI dataset into 128*128 and 256*256 pixel sizes. The data collection was made from the Radiopaedia squamous cell carcinoma (tongue) and Digital Imaging and Communications in Medicine datasets; the images were collected from https://radiopaedia.org/articles/squamous-cell-carcinoma-tongue and https://www.dicomstandard.org. We enhance the images based on their size, colour and texture features. After pre-processing we use ResNet 18 for the segmentation of the MRI images. The different sections below cover pre-processing, segmentation and performance evaluation.

[Block diagram: Oral cancer MRI image -> Pre-processing (noise removal, image enhancement) -> Segmentation with ResNet 18, optimized using adaptive scheduling of stochastic gradients -> Segmented image -> Performance evaluation]
Fig. 1: Proposed methodology development and performance evaluation of oral cancer MRI images.
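
As a rough illustration of the pre-processing stage in Fig. 1, the sketch below converts an MRI slice to grey scale, suppresses noise and resizes it to 128*128 in Python. This is a minimal sketch under our own assumptions: the paper does not name the exact denoising or enhancement filters, so the median filter and histogram equalization used here, as well as the file path, are illustrative choices (OpenCV is assumed to be available).

import cv2
import numpy as np

def preprocess_slice(path, size=(128, 128)):
    """Grey-scale conversion, noise removal, contrast enhancement and resizing."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)               # load as a single-channel image
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.medianBlur(img, 3)                               # simple noise removal (illustrative choice)
    img = cv2.equalizeHist(img)                                # basic intensity enhancement
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)  # resize to 128*128
    return img.astype(np.float32) / 255.0                      # scale to [0, 1] for the network

# Example with a hypothetical path:
# x = preprocess_slice("data/patient_001/slice_05.png")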

A. Pre-processing
This stage removes unwanted data present in the MRI images, which helps clinical researchers diagnose properly so that the tumour can be identified at an early stage, and it also makes the images suitable for further processing. The pre-processing mainly involves converting the image into grey scale, removing image noise, and reconstructing the image. The criteria considered for improvement are the signal-to-noise ratio, getting rid of unwanted noise and noise present in the background, and keeping all the relevant data.

B. Segmentation
This section contains a detailed explanation of the different steps carried out to segment the oral cancer MRI images. We explain it in two parts: the residual network and ResNet 18.

• Residual Network
The gradient-dissemination issue occurs in deep CNNs during the training process. As training continues, the gradient values are ultimately lowered towards zero. To overcome this issue, residual network learning (ResNet) was introduced. The ResNet model was proposed by He et al. to solve the issue related to training accuracy. The results obtained from the residual layers are combined with the input, which becomes the next layer's input. Let Q(z) represent the mapping learned by the stacked layers that build up the residual block, as described in Fig. 2. The residual network then approximates P(z) = Q(z) + z. These formulations are realised by a feed-forward neural network with shortcut connections. These connections integrate the input and output of the stacked layers in the same bounding operation, involving no extra variables. This helps gradients to transfer effortlessly backwards, which results in rapid training of numerous layers.


[Diagram: the input z passes through two stacked weight layers with ReLU, producing Q(z); an identity shortcut carries z around the layers and is added to Q(z), followed by ReLU, giving P(z) = Q(z) + z]
Fig. 2: Building block of the residual network.
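
The building block of Fig. 2 can be expressed in a few lines of TensorFlow/Keras. The snippet below is a minimal sketch of the generic residual connection P(z) = Q(z) + z only; the filter count and kernel size are placeholders, not values taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(z, filters=64):
    """Two stacked weight layers Q(z) plus an identity shortcut: P(z) = Q(z) + z."""
    q = layers.Conv2D(filters, 3, padding="same", activation="relu")(z)  # first weight layer + ReLU
    q = layers.Conv2D(filters, 3, padding="same")(q)                     # second weight layer
    p = layers.Add()([q, z])                                             # identity shortcut adds the input back
    return layers.Activation("relu")(p)                                  # final ReLU on P(z)

# The shortcut addition requires the input to already have `filters` channels:
inputs = layers.Input(shape=(128, 128, 64))
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)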

There are two major blocks in the ResNet model, which are explained below.

C. Identity block
The identity block is described as

m = Q(z, {Ki}) + z ……………………………… (1)

where z and m represent the input and output of the layers, and the function Q(z, {Ki}) defines the residual mapping of the stacked layers. The identity block requires z and Q to have the same dimensions. Fig. 3(a) shows the design of the identity block, composed of three constituents:
• A 2D convolution layer is the first constituent, with a 1*1 filter and a stride of (1, 1). Batch normalization is carried out to normalise the channels, and a rectified linear activation unit (ReLU) is applied for non-linearity.
• The second constituent is the same as the first, but with a filter of size (q * q).
• The third constituent is the same as the first, but it does not contain the ReLU activation function.
Finally, the shortcut and the main path are added together before applying the final activation function.

[Diagram: Input -> Conv2D + Batch Normalization + ReLU -> Conv2D + Batch Normalization + ReLU -> Conv2D + Batch Normalization -> add identity shortcut -> Output]
Fig. 3(a): Identity block of ResNet.
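
A hedged sketch of the identity block of Eq. (1) and Fig. 3(a) in TensorFlow/Keras follows; the filter count and the (q * q) kernel size are parameters of the sketch rather than values quoted from the paper.

from tensorflow.keras import layers

def identity_block(z, filters, q=3):
    """Identity block: m = Q(z, {Ki}) + z, with the three constituents described above."""
    x = layers.Conv2D(filters, 1, strides=(1, 1), padding="same")(z)  # first constituent: 1*1 conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, q, padding="same")(x)                  # second constituent: q*q conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 1, padding="same")(x)                  # third constituent: no ReLU here
    x = layers.BatchNormalization()(x)
    m = layers.Add()([x, z])       # z must already have `filters` channels (same dimensions as Q)
    return layers.Activation("relu")(m)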

[Diagram: the same main path as the identity block, with an additional Conv2D + Batch Normalization on the shortcut path between the input and the addition]
Fig. 3(b): Convolution block of ResNet.

D. Convolution block
In this block the shortcut connection carries out a linear projection Ks to match the dimension of the input with that of the stacked-layer output Q; here the input and output of the block do not have the same shape. The equation is as follows:

m = Q(n, {Ki}) + Ks·n ……………………………… (2)

where Q is the output obtained from the stacked layers, and n and m are the input and output vectors of the convolution block, as shown in Fig. 3(b). The design of the convolution block is the same as the identity block, but with the addition of a 2D convolution layer on the shortcut path. The shortcut is matched to the main path with a 1*1 2D convolution layer and a stride (s, s) that depends on the output size, and the altered shortcut is then merged with the main-path output. The major advantage of this altered shortcut is to control the issue of gradient dissemination: it lets the network learn the identity function where needed, which assures that the higher layers will perform at least as well as the lower layers.

• ResNet 18
Many different models of ResNet have been proposed, such as ResNet with 18, 34, 50, 101, 152 and 1202 layers. Each model consists of various blocks, in which the identity and convolution blocks are as defined in sections C and D above. In the proposed methodology we use ResNet 18 for segmenting the MRI images, which is a good compromise between performance and depth. Since it has a smaller number of parameters compared to the other ResNets, this leads to a faster yet accurate training period. The architecture of ResNet 18 is shown in Fig. 4.

Fig. 4: ResNet-18 Architecture
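
Before walking through the stages of Fig. 4, here is a corresponding sketch of the convolution (projection) block of Eq. (2) and Fig. 3(b) in TensorFlow/Keras: the shortcut itself passes through a strided 1*1 convolution (Ks) so that its output matches the shape of the main path. The filter count, kernel size and stride are again placeholders, not values quoted from the paper.

from tensorflow.keras import layers

def convolution_block(n, filters, q=3, s=2):
    """Convolution (projection) block: m = Q(n, {Ki}) + Ks*n, used when the shape changes."""
    x = layers.Conv2D(filters, 1, strides=(s, s), padding="same")(n)   # main path: 1*1 conv, stride (s, s)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, q, padding="same")(x)                   # q*q conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 1, padding="same")(x)                   # 1*1 conv, no ReLU before the addition
    x = layers.BatchNormalization()(x)
    shortcut = layers.Conv2D(filters, 1, strides=(s, s), padding="same")(n)  # Ks: projection on the shortcut
    shortcut = layers.BatchNormalization()(shortcut)
    m = layers.Add()([x, shortcut])
    return layers.Activation("relu")(m)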

The ResNet 18 contains four convolution layers in each module (plus the first convolution layer and the fully connected layer, giving 18 layers) and is composed of the following stages, each built from the convolution and identity blocks:
• Stage 1: Contains a 2D convolution layer with a (7*7) kernel, 64 filters and a stride of (2, 2). The channels are normalised by batch normalisation followed by the ReLU activation function. Max pooling with stride (2, 2) is applied at the end.
• Stage 2: Contains two identity blocks with one 2D convolution block; the blocks use the filter set (56, 56, 64), with kernel size (3*3) and stride (2, 2).
• Stage 3: Contains three identity blocks with one convolution block; the blocks use the filter set (28, 28, 128), with kernel size (3*3) and stride (2, 2).
• Stage 4: Contains four identity blocks with one convolution block; the blocks use the filter set (14, 14, 256), with kernel size (3*3) and stride (2, 2).
• Stage 5: Contains five identity blocks and one convolution block; the blocks use the filter set (7, 7, 512), with kernel size (3*3) and stride (2, 2).
• Stage 6: Average pooling of size 7*7 is applied, the output is flattened, and a fully connected layer reduces its input to the required number of classes using the "softmax" activation.

E. Simulation set up
In this section we explain in detail our simulations to verify the performance of the deep residual network ResNet 18 in segmenting the oral cancer MRI images. We have employed TensorFlow for our model.

The proposed model is examined and evaluated using the MRI oral cancer image dataset. The training set comprises 100 patients suffering from oral squamous cell carcinoma. In the dataset we have 10 samples for every patient, giving a total of 1000 images with an image size of 225*225. We resized the images to 128*128. The hyper-parameters used in the proposed model are described in Table 1.

Table 1: Hyper-parameters used for training
Hyper-parameter           Value
Optimizer                 ADAS (adaptive scheduling of stochastic gradients)
Loss value                0.121
Initial learning rate     0.0001
No. of epochs             32
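
A hedged sketch of how the training set-up in Table 1 could be wired together in TensorFlow/Keras is shown below. ADAS is not a built-in Keras optimizer, so Adam with the table's initial learning rate stands in for it here; the loss choice and the train_ds/val_ds dataset objects are assumptions, not details taken from the paper.

import tensorflow as tf

def compile_and_train(model, train_ds, val_ds):
    """Training configuration loosely following Table 1 (32 epochs, initial learning rate 1e-4)."""
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)  # stand-in for the ADAS optimizer
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",                 # assumed pixel-wise loss for binary masks
                  metrics=["accuracy"])
    return model.fit(train_ds, validation_data=val_ds, epochs=32)

# train_ds and val_ds are assumed to yield (image, mask) batches of shape (N, 128, 128, 1).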

Fig. 5: Oral MRI images (a); the different stages of segmentation (b), (c) and (d); and the segmented image (e).

F. Performance Evaluation
The methodology proposed for oral cancer MRI segmentation is assessed using evaluation metrics. The segmentation output is compared against ground-truth segmentations with similar features that are provided by the radiologists.

To compare the two MRI images we have used the Dice similarity coefficient (DSC), specificity (true negative rate), sensitivity (true positive rate), accuracy (A) and precision (P). These are defined in terms of:
TP (true positive) – tumour pixels correctly identified as tumour.
TN (true negative) – non-tumour pixels correctly identified as non-tumour.
FP (false positive) – non-tumour pixels incorrectly identified as tumour.
FN (false negative) – tumour pixels incorrectly identified as non-tumour.

• Dice Similarity Coefficient
The Dice similarity coefficient calculates the overlap that occurs between the segmented oral cancer MRI images and the ground-truth images. It is given by:

DSC = 2TP / (FP + 2TP + FN) ……………………… (3)

The specificity and sensitivity evaluators examine the robustness of our proposed methodology for segmenting MRI tumour images.

• Specificity
Specificity = TN / (TN + FP) ……………………… (4)

• Accuracy
Accuracy = (TP + TN) / (TP + FN + TN + FP) ……………………… (5)

• Precision
Precision = TP / (TP + FP) ……………………… (6)
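
The metrics of Eqs. (3)-(6) can be computed directly from a predicted binary mask and its ground truth. The helper below is our own minimal sketch, not code from the paper:

import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, specificity, accuracy and precision from binary masks, following Eqs. (3)-(6)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)       # tumour pixels correctly identified
    tn = np.sum(~pred & ~truth)     # non-tumour pixels correctly identified
    fp = np.sum(pred & ~truth)      # non-tumour pixels labelled as tumour
    fn = np.sum(~pred & truth)      # tumour pixels labelled as non-tumour
    return {
        "dice": 2 * tp / (fp + 2 * tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "precision": tp / (tp + fp),
    }

# Example: segmentation_metrics(predicted_mask, ground_truth_mask) on 128*128 arrays.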

Table 2: Comparison of different techniques with the proposed methodology; performance evaluated by Dice score, specificity, accuracy, precision and computation time in minutes.
Techniques Dice Score Specificity Accuracy Precision Computation Time
CNN 0.91 0.84 0.83 0.84 156 mins
VGG Net-16 0.92 0.86 0.88 0.93 360 mins
VGG Net-19 0.89 0.91 0.87 0.96 256 mins
U-Net 0.86 0.83 0.80 0.91 354 mins
UNet-Res 0.91 0.86 0.84 0.92 280 mins
ResNet 18 0.95 0.94 0.92 0.96 63 mins


Graph 1: Performance evaluation (Dice score, specificity, accuracy and precision) of the compared techniques.
Graph 2: Computation time (minutes) of the compared techniques.
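
Graphs 1 and 2 are bar charts of the values already listed in Table 2; they can be reproduced roughly as follows (a sketch with matplotlib, not the original plotting code):

import numpy as np
import matplotlib.pyplot as plt

techniques = ["CNN", "VGG Net-16", "VGG Net-19", "U-Net", "UNet-Res", "ResNet 18"]
metrics = {
    "Dice Score":  [0.91, 0.92, 0.89, 0.86, 0.91, 0.95],
    "Specificity": [0.84, 0.86, 0.91, 0.83, 0.86, 0.94],
    "Accuracy":    [0.83, 0.88, 0.87, 0.80, 0.84, 0.92],
    "Precision":   [0.84, 0.93, 0.96, 0.91, 0.92, 0.96],
}
time_mins = [156, 360, 256, 354, 280, 63]

x = np.arange(len(techniques))
width = 0.2
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
for i, (label, values) in enumerate(metrics.items()):
    ax1.bar(x + (i - 1.5) * width, values, width, label=label)   # Graph 1: grouped bars per technique
ax1.set_xticks(x)
ax1.set_xticklabels(techniques, rotation=45, ha="right")
ax1.legend()
ax2.bar(x, time_mins)                                            # Graph 2: computation time in minutes
ax2.set_xticks(x)
ax2.set_xticklabels(techniques, rotation=45, ha="right")
ax2.set_ylabel("Computation time (mins)")
plt.tight_layout()
plt.show()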

IV. RESULT AND DISCUSSION

This section compares the results of our proposed methodology for segmenting oral cancer MRI images with five segmentation techniques: CNN, VGG Net-16, VGG Net-19, U-Net and UNet-Res (U-Net with residual blocks).

The data collection was made from Aster CMI Hospital, Hebbal: MRI oral cancer images related to lip and mouth cancer. A few images were taken from the Radiopaedia squamous cell carcinoma (tongue) and Digital Imaging and Communications in Medicine databases, from https://radiopaedia.org/articles/squamous-cell-carcinoma-tongue and https://www.dicomstandard.org.

A. Dataset Training
The proposed methodology is compared with CNN, VGG Net-16, VGG Net-19, U-Net and UNet-Res over the whole training process. Each sequence in the model is normalised as discussed in the pre-processing stage. The adaptive scheduling of stochastic gradients optimization algorithm is employed for the optimization. It is faster than the other optimizers at attaining convergence, and it displays a low loss on the outlined features, helping the optimization. The training performance of the model compared with CNN, VGG Net-16, VGG Net-19, U-Net and UNet-Res is shown in Graph 1. It shows that ResNet 18 has a lower error while training and higher accuracy compared to the other techniques.

The validation of the proposed model uses 32 epochs over the training process. It shows that the error decreases rapidly over the training period and that the training accuracy rises after every epoch.

B. Dataset Testing
In this process, the data is tested on the model by segmenting the tumours in the oral MRI images. The model is evaluated against the other techniques using the performance metrics defined in the performance-evaluation section, along with the computational time; this gives us the results of the segmentation. We have calculated the evaluation metrics on each patient's data and estimated the average value for each. Graph 2 displays the computational performance of the proposed methodology. To demonstrate viability, the average computational time for segmenting the data is calculated; it is defined as the processing time required for segmenting the oral MRI images. Table 2 also shows that the proposed model has the minimum average computational time compared to the other techniques. This aptly shows that the proposed methodology has higher accuracy and a minimal average computational time.

The ResNet model accomplishes identity mapping, and these outputs are connected to the corresponding stacked layers without the addition of any extra parameters. This mechanism shows that the layers of the ResNet model try to learn the residual between inputs and outputs, while the layers of CNN, VGG-16, VGG-19, U-Net and UNet-Res learn the true outputs exclusively. The gradients flow backwards without any effort, which results in quicker processing in comparison with the other techniques. The ResNet's shortcut connections help in solving the issue related to dissemination of the gradients. ResNet also guarantees that the higher layers perform at least as well as the lower layers.

V. CONCLUSION AND FUTURE SCOPE

Oral cancer MRI tumour segmentation is one of the key requirements in the early treatment of oral cancers. Though deep neural networks are an important strength of image segmentation, they have the drawback of gradient dissemination, which arises during the training process. We have used a residual network (ResNet) to overcome this problem. Within the residual network family we have employed ResNet 18 in our proposed methodology, since it has fewer errors while training and also high accuracy, as it contains a smaller number of layers. The proposed methodology performs well compared to the CNN, VGG 16, VGG 19, U-Net and UNet-Res models with respect to computation time. We have employed the adaptive scheduling of stochastic gradients optimization technique; it has minimal execution time compared to all the other techniques mentioned above. Our proposed methodology achieves a better accuracy, dice coefficient, specificity and precision of 0.92, 0.95, 0.94 and 0.96 respectively, and a computational time of 63 mins.

In the future we can modify ResNet 18 with different filter sizes, or we can build hybrid models with residual networks, which would improve the efficiency of segmenting the tumours in oral cancer MRI images.

Funding: Our work is not funded by any organization.

REFERENCES

[1.] Dinthisrang Daimary, Mayur Bhargab Bora, Khwairakpam Amitab, Debdatta Kanda, "Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks", International Conference on Computational Intelligence and Data Science (ICCIDS 2019).
[2.] Lamia H. Shehab, Omar M. Fahmy, Safa M. Gasser and Mohamed S. El-Mahallawy, "An Efficient Brain Tumor Image Segmentation Based on Deep Residual Networks (ResNets)", Journal of King Saud University - Engineering Sciences (2020), doi: https://doi.org/10.1016/j.jksues.2020.06.001.
[3.] C. R. Muzakkir Ahmed, M. Narayanan, S. Kalaivanan, K. Sathya Narayanan, A. K. Reshmy, "To detect and classify oral cancer in MRI image using firefly algorithm and expectation maximization algorithm", International Journal of Pure and Applied Mathematics, Volume 116, No. 21, 2017, 149-154, ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version), url: http://www.ijpam.eu.
[4.] Haiping Yu, Fazhi He, Yiteng Pan, "A novel region-based active contour model via local patch similarity measure for image segmentation", Springer Science+Business Media, LLC, part of Springer Nature, 2018.
[5.] Ramalingam, P. Aurchana, P. Dhanalakshmi, K. Vivekananadan and S. K. Venkatachalapath, "Analysis of Oral Squamous Cell Carcinoma into Various Stages using Pre-Trained Convolutional Neural Networks", IOP Conf. Series: Materials Science and Engineering 993 (2020) 012058, doi: 10.1088/1757-899X/993/1/012058.
[6.] Jaesung Heo, June Hyuck Lim, Hye Ran Lee, Jeon Yeob Jang, Yoo Seob Shin, Dahee Kim, Jae Yol Lim, Young Min Park, Yoon Woo Koh, Soon-Hyun Ahn, Eun-Jae Chung, Doh Young Lee, Jungirl Seok and Chul-Ho Kim, "Deep learning model for tongue cancer diagnosis using endoscopic images", Scientific Reports (2022) 12:6281, https://doi.org/10.1038/s41598-022-10287-9.
[7.] Francesco Martino, Domenico D. Bloisi, Andrea Pennisi, Mulham Fawakherji, Gennaro Ilardi, Daniela Russo, Daniele Nardi, Stefania Staibano, and Francesco Merolla, "Deep Learning-Based Pixel-Wise Lesion Segmentation on Oral Squamous Cell Carcinoma Images", Appl. Sci. 2020, 10, 8285; doi: 10.3390/app10228285.
[8.] K. Anuradha, K. Sankaranayana, "Oral Cancer Detection Using Improved Segmentation Algorithm", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 5, Issue 1, January 2015.
[9.] Pandia Rajan Jeyaraj, Edward Rajan Samuel Nadar, "Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm", Journal of Cancer Research and Clinical Oncology, https://doi.org/10.1007/s00432-018-02834-7.
[10.] Somaya A. Feshawy, Waleed Saad, Mona Shokair, Moawad Dessouky, "IoT framework for brain tumor detection based on optimized modified ResNet 18 (OMRES)", The Journal of Supercomputing, 21 July 2022, https://doi.org/10.1007/s11227-022-04678.
[11.] Sérgio Pereira, Adriano Pinto, Victor Alves, and Carlos A. Silva, "Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images", IEEE Transactions on Medical Imaging, vol. 35, no. 5, May 2016.
[12.] Kermi A., M. I., "Deep Convolutional Neural Networks Using U-Net for Automatic Brain Tumor Segmentation in Multimodal MRI Volumes", International MICCAI Brainlesion Workshop (BrainLes), 2019, pp. 37-48.
[13.] Lang R., Z. L., "Brain Tumor Image Segmentation Based on Convolution Neural Network", 9th International Congress on Image and Signal Processing, Biomedical Engineering and Informatics, 2016, pp. 1402-1406.
[14.] Long J., S. E., "Fully Convolutional Networks for Semantic Segmentation", IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[15.] Menze et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging, 34(10), 1993-2024.
[16.] Nyul L. G., U. J., "New Variants of a Method of MRI Scale Standardization", IEEE Transactions on Medical Imaging, 19(2), 143-150.
[17.] Pereira S., et al., "Deep Convolutional Neural Networks for the Segmentation in Multi-sequence MRI", Brain Lesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 131-143, 2015.
[18.] Ronneberger O., "U-Net: Convolution Networks for Biomedical Image Segmentation", Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234-241, 2015.
[19.] Sadegh M., K. H., "Study of Residual Networks for Image Recognition", Computer Vision and Pattern Recognition, 2018.
[20.] Xu J., L. X., "A Deep Convolutional Neural Network for Segmenting and Classifying Epithelial and Stromal Regions in Histopathological Images", Elsevier Neurocomputing, 214-223, 2016.
