
Nordic Machine Intelligence, MedAI 2021

https://doi.org/10.5617/nmi.9132

Transfer Learning in Polyp and Endoscopic Tool Segmentation from Colonoscopy Images
Nefeli Panagiota Tzavara1 and Bjørn-Jostein Singstad2
1. National Technical University of Athens, Athens, Greece
2. Oslo University Hospital, Oslo, Norway
E-mail any correspondence to: [email protected]

Abstract
Colorectal cancer is one of the deadliest and most widespread types of cancer in the world. Colonoscopy is the procedure used to detect and diagnose polyps in the colon, but today's detection rate shows a significant error rate that affects diagnosis and treatment. An automatic image segmentation algorithm may help doctors improve the detection rate of pathological polyps in the colon. Furthermore, segmenting endoscopic tools in images taken during colonoscopy may contribute towards robot-assisted surgery. In this study, we used both pre-trained and non-pre-trained segmentation models, trained and validated them on two different data sets containing images of polyps and endoscopic tools, and finally applied them to two separate test sets. On the test sets, the best polyp model achieved a Dice score of 0.857 and the best instrument model a Dice score of 0.948. Moreover, we found that pre-training the models increased performance when segmenting both polyps and endoscopic tools.

Keywords: Polyp segmentation, Transfer learning, HyperKvasir, Convolutional neural networks, MedAI challenge

© 2021 Author(s). This is an open access article licensed under the Creative Commons Attribution License 4.0 (https://creativecommons.org/licenses/by/4.0/).

Introduction
Colorectal cancer (CRC) was the third most common and second most deadly cancer type worldwide in 2020 [1]. CRC is strongly associated with colorectal polyps, and colonoscopy is considered the best method for detecting colorectal polyps [2, 3]. Studies have shown that between 6% and 27% of colorectal polyps are missed by clinicians during colonoscopic examination [4]. On the other hand, artificial intelligence (AI) and image segmentation have been shown to be useful in segmenting colorectal polyps [2, 3], which may help endoscopists detect polyps that would otherwise be overlooked. Detection of colorectal polyps and endoscopic tools may also play a role in the development of robot-assisted surgical systems [5]. A recent study showed that pre-trained convolutional neural networks (CNNs) improved performance in classifying colorectal polyps from colonoscopy images [6], but it has not yet been explored whether a pre-trained segmentation model improves the performance of colorectal polyp segmentation. In this study, which is part of a machine learning challenge [7], we aim to assess pre-trained and non-pre-trained CNNs for detecting polyps and endoscopic tools in colonoscopy images.

Methods
Two models were developed as part of the challenge: one to segment polyps in images and another to segment endoscopic tools in images. A CNN is a data-driven type of model, and thus we had to train the models on relevant data. The polyp model was trained on Kvasir-SEG, an open data set of 1000 images containing one or more polyps [8], whereas the instrument model was trained on Kvasir-Instrument, another open data set of 590 images containing different endoscopic tools [5]. Both data sets also contain a corresponding annotated mask for each image, highlighting the polyps or endoscopic tools.

Data preprocessing: The images and masks in the data sets vary in resolution and thus had to be resized before being fed to the CNN models. We selected 256x256 pixels as the size of the input images and the predicted masks.

Model architectures: The model architectures were retrieved from the Python library "Segmentation Models" [9], which contains different CNN architectures. This library provides models with both untrained and pre-

trained weights; pre-trained weights are obtained by training on ImageNet [10]. To find the best fit for our data sets, we tested the following architectures provided by the library: EfficientNet, MobileNet, SE-ResNet, Inception, ResNet and VGG. The results of these experiments are publicly available.1

Augmentations: Augmentations were applied to the training data to create a more versatile data set and achieve better generalization. We used nine augmentation techniques: random noise, Gaussian blur, random rotation, image brightness, horizontal flip, vertical flip, random horizontal shift, random vertical shift and random zoom, each assigned a unique integer from 1 to 9. In each epoch, every image and mask used to train the models was given a random integer between zero and nine, and the augmentation technique with the corresponding integer was applied to that image and mask. If the random integer was zero, no augmentation was applied.

Model selection: 10-fold cross-validation on the development set was used to find the best model architecture and model parameters. Performance was measured using the Dice similarity coefficient (DSC) and Intersection over Union (IoU) on the validation folds. In the model selection phase, the learning rate was reduced during training using a learning rate scheduler, which lowered the learning rate by a factor of ten when the IoU score did not improve over three consecutive epochs.

Clinical relevance and model transparency
A polyp segmentation algorithm like the one presented in this study could be used as a decision tool for endoscopists. To make the segmentation tool more clinically relevant, and to streamline the work of endoscopists, we developed a polyp counter algorithm. This algorithm detects the contours of the segmented polyps in the masks and counts the objects. Its purpose is to tell whether, and how many, polyps there are in each image, so that doctors only need to look at images with detected polyps and can ignore images without detected polyps. Moreover, the predicted masks highlight the polyps and direct the endoscopists' focus to the abnormalities in the colonoscopy images. The polyp counter algorithm and the rest of the code developed in this project are publicly available on GitHub.2

Results
Model deployment: From our experiments, we found that efficientnetb1 outperformed the other model architectures tested. Furthermore, we experimentally fine-tuned the hyperparameters; the settings that gave the highest mean DSC are shown in Table 1. Efficientnetb1 with the settings shown in Table 1 was finally used to train the models on the whole development set and was applied to the test data, which consisted of 300 images with endoscopic tools and 300 images with colorectal polyps. The predicted masks were submitted to the MedAI challenge. In the final training procedure, the learning rate schedule was programmed to imitate the best learning rate schedule found during model selection.

Parameter               Polyp           Instrument
Model architecture      efficientnetb1  efficientnetb1
Pre-trained             Yes             Yes
Batch size              30              30
Epochs                  20              35
Initial learning rate   0.001           0.001
Optimizer               Adam            Adam
Loss function           IoU             IoU

Table 1: Model architecture, parameters and hyperparameters used to train the final polyp and instrument models.

The segmentation performance of the best polyp and instrument models is summarized in Table 2.

Data set     Metric   Development set   Test set
Polyp        DSC      0.874 ± 0.011     0.857
Polyp        IoU      0.804 ± 0.013     0.800
Instrument   DSC      0.937 ± 0.015     0.948
Instrument   IoU      0.893 ± 0.020     0.911

Table 2: Dice similarity coefficient (DSC) and Intersection over Union (IoU) scores achieved on the polyp and instrument development and test sets. The development-set scores were obtained using 10-fold cross-validation.

The same models described in Table 1 and scored in Table 2, but without pre-training on ImageNet, were scored on the development sets using cross-validation. The model applied to the polyp data set achieved a DSC of 0.653 ± 0.072 and an IoU of 0.541 ± 0.084. The model applied to the instrument data set achieved a DSC of 0.888 ± 0.028 and an IoU of 0.822 ± 0.036.

Conclusion
The results of this study show that the model that performed best on the development sets, according to our experiments, also generalized well to the MedAI test sets. Secondly, we found that pre-training the models on ImageNet substantially increased performance on both the polyp and instrument development sets. These results may have implications for further work in polyp segmentation, and also in other image segmentation tasks.

1 https://app.neptune.ai/o/SSCP/org/HyperKvasir/experiments
2 https://github.com/ylefen/medai2021-polypixel
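The per-image augmentation lottery described in the Methods can be sketched as follows. The paper does not publish its transform implementations, so the two NumPy-based flips below are illustrative stand-ins for two of the nine named techniques; the registry pattern is the point, not the specific transforms.

```python
import numpy as np

# Hypothetical stand-ins for two of the nine augmentations named in the
# text; geometric transforms are applied to image and mask alike so the
# annotation stays aligned with the pixels.
def hflip(img, mask):
    return img[:, ::-1], mask[:, ::-1]

def vflip(img, mask):
    return img[::-1, :], mask[::-1, :]

# In the full scheme the remaining seven techniques (noise, blur, rotation,
# brightness, shifts, zoom) would be registered under the integers 3..9.
AUGMENTATIONS = {1: hflip, 2: vflip}

def augment(img, mask, rng):
    """Draw an integer in [0, 9]; 0 means no augmentation, otherwise
    apply the technique registered under that integer."""
    k = rng.integers(0, 10)  # e.g. numpy.random.default_rng()
    fn = AUGMENTATIONS.get(k)
    return fn(img, mask) if fn else (img, mask)
```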
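For binary masks, both selection metrics reduce to pixel counts. A minimal NumPy sketch (not the challenge's official scorer) is:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect match

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary masks: |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```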
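The schedule described here is the standard reduce-on-plateau pattern (in Keras, a `ReduceLROnPlateau` callback with `factor=0.1` and `patience=3`, though the paper does not state which implementation it used). Its logic can be sketched framework-free:

```python
class ReduceOnPlateau:
    """Lower the learning rate by 10x when the monitored score (here IoU,
    higher is better) fails to improve for `patience` consecutive epochs."""

    def __init__(self, lr: float, factor: float = 0.1, patience: int = 3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("-inf")
        self.wait = 0

    def step(self, iou_score: float) -> float:
        if iou_score > self.best:
            self.best = iou_score
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```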
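Counting segmented polyps amounts to counting connected foreground regions in the predicted binary mask. The authors' implementation is in the linked GitHub repository; in practice one would typically use `cv2.findContours` or `scipy.ndimage.label`, but a minimal pure-Python flood-fill sketch (4-connectivity assumed) conveys the idea:

```python
def count_polyps(mask):
    """Count connected foreground regions (candidate polyps) in a binary
    mask given as a list of 0/1 rows, using 4-connectivity."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]  # iterative flood fill of this component
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return count
```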


References
1. Xi Y and Xu P. Global colorectal cancer burden in 2020 and projections to 2040. Translational Oncology 2021; 14
2. Guo Y, Bernal J and Matuszewski BJ. Polyp Segmentation with Fully Convolutional Deep Neural Networks—Extended Evaluation Study. Journal of Imaging 2020 Jul; 6:69
3. Jha D, Smedsrud PH, Johansen D, Lange T de, Johansen HD, Halvorsen P and Riegler MA. A Comprehensive Study on Colorectal Polyp Segmentation with ResUNet++, Conditional Random Field and Test-Time Augmentation. 2020
4. Ahn SB, Han DS, Bae JH, Byun TJ, Kim JP and Eun CS. The Miss Rate for Colorectal Adenoma Determined by Quality-Adjusted, Back-to-Back Colonoscopies. Gut and Liver 2012 Jan; 6:64–70
5. Jha D, Ali S, Emanuelsen K, Hicks SA, Thambawita V, Garcia-Ceja E, Riegler MA, Lange T de, Schmidt PT, Johansen HD, Johansen D and Halvorsen P. Kvasir-Instrument: Diagnostic and Therapeutic Tool Segmentation Dataset in Gastrointestinal Endoscopy. MultiMedia Modeling. 2021
6. Kim YJ, Bae JP, Chung JW, Park DK, Kim KG and Kim YJ. New polyp image classification technique using transfer learning of network-in-network structure in endoscopic images. Scientific Reports 2021 Feb; 11. DOI: 10.1038/s41598-021-83199-9
7. Hicks S, Jha D, Thambawita V, Riegler M, Halvorsen P, Singstad B, Gaur S, Pettersen K, Goodwin M, Parasa S and Lange T de. MedAI: Transparency in Medical Image Segmentation. Nordic Machine Intelligence 2021
8. Borgli H, Thambawita V, Smedsrud PH, Hicks S, Jha D, Eskeland SL, Randel KR, Pogorelov K, Lux M, Nguyen DTD, Johansen D, Griwodz C, Stensland HK, Garcia-Ceja E, Schmidt PT, Hammer HL, Riegler MA, Halvorsen P and Lange T de. HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Scientific Data 2020 Aug; 7:283. DOI: 10.1038/s41597-020-00622-y
9. Yakubovskiy P. Segmentation Models. https://github.com/qubvel/segmentation_models. 2019
10. Deng J, Dong W, Socher R, Li LJ, Li K and Fei-Fei L. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. 2009:248–55
