
CS 231N Final Project Report: Cervical Cancer Screening

Huyen Nguyen, Tucker Leavitt, Yianni Laloudakis
Stanford University
[email protected]  [email protected]  [email protected]

1. Abstract

The type of a patient's cervix determines the type of pre-cancer treatments the patient can undergo, and the medical community would benefit from an efficient method to classify a patient by their cervix type. Kaggle and MobileODT have published a collection of several thousand specular photographs of cervixes, each labeled as one of three types. We present our work in developing a convolutional neural network (CNN) to classify the cervix images in this dataset. We constructed and trained two models from scratch, CervixNet-1 and CervixNet-2. We also adapted two existing pretrained models to our dataset, ResNet v1 and Inception v2. We discuss the performance of all four of these models. Our most successful model, CervixNet-2, achieved a classification accuracy of 63%. We hope that this project inspires future work on the cervix classification problem; we suspect that better image segmentation could help improve model performance.

Figure 1. Characteristics of the three cervix types (taken from [3])

Figure 2. Sample data from the Kaggle dataset

2. Introduction

The earlier the signs of cervical cancer are detected, the easier the treatment path will be for the patient. This treatment path varies for women based on the physiological differences in their cervix. Rural or understaffed clinics would benefit from a way of quickly and accurately classifying patients based on their cervix types. Cervical cancer tends to begin in cells within the transformation zone, which could be completely ectocervical and visible (Type 1), partially endocervical but visible (Type 2), or partially endocervical and not fully visible (Type 3) (see Figure 1). Cervix types 2 and 3 may require different screening or treatment due to the placement and hidden view of precancerous lesions.

The input to our classifiers is a photograph of the cervix taken through a vaginal speculum. The output is the probability distribution over the three classes, from which we extract the most likely class. Quantitatively, the goal is to minimize the cross-entropy loss J for the classification:

J = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C} y_{ij} \log(p_{ij})

where N is the number of testing data points, C is the number of classes (3, in this case), y is the one-hot vector for the correct class, and p_{ij} is the predicted probability that data point i has class j. Our project is to use deep learning and computer vision to automate and improve this important classification process.

This project was inspired by a public Kaggle competition, and the dataset is provided on Kaggle's website.
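As a concrete illustration of the loss above, it can be computed in a few lines of NumPy; the array values below are invented toy numbers, not data from our experiments.

import numpy as np

def cross_entropy_loss(y_onehot, probs, eps=1e-12):
    # Mean cross-entropy J over N examples and C classes.
    return -np.mean(np.sum(y_onehot * np.log(probs + eps), axis=1))

# Toy batch with N = 2 data points and C = 3 cervix types (values invented).
y = np.array([[0, 1, 0],
              [1, 0, 0]], dtype=float)
p = np.array([[0.2, 0.7, 0.1],
              [0.5, 0.3, 0.2]], dtype=float)
print(cross_entropy_loss(y, p))   # approximately 0.525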
3. Related Works

Deep learning and computer vision have proven effective in the healthcare domain for classification or segmentation of medical images. Recent efforts using deep learning generally either use transfer learning with models pre-trained on ImageNet or copy the architecture of these models and train them from scratch.
One recent attempt at cervical cancer classification combined image features from the last fully connected layer of a pre-trained AlexNet with biological features extracted from a Pap smear to make the prediction [4]. Another group used features computed from images of cells from a cervix biopsy as input to a feed-forward neural network to predict the presence of cancer [5]. Other manual features from a Pap smear, such as grey level, wavelet, and grey level co-occurrence matrix features, have been used for cancer detection [6]. Deep learning has also been used for other types of cancer detection. A convolutional neural network (CNN) following OxfordNet's structure was used to detect mammographic lesions [7]. A CNN with parameters pre-trained on a similar dataset was also used to differentiate between mammographic cysts and lesions [8]. A recent paper from a group of Stanford researchers has excited the medical community; it uses a pre-trained Inception-v3 model and a hierarchical algorithm to classify different skin malignancies with results comparable to expert dermatologists [9]. Another study analyzed colonoscopy video footage and used a CNN to compute image features which were later used to predict the bounding boxes of different polyps [10]. No pre-segmentation was used in one study of lung nodule classification, which used a CNN feature extractor [11].

Automated cervix and cervical cell segmentation is another important area of study. One method takes care to remove glare from the photo and uses K-nearest neighbors (KNN), with images pre-segmented by a distance metric based on the histogram of oriented gradients, to locate the most similar bounding boxes and average them [12]. A model by researchers at the Medical College of Georgia also used glare removal, K-means clustering, and texture features to segment the different cell types around the cervix [13]. A similar method fed color and cell area features into K-means to segment the cervix [14]. Another group performed cervix segmentation by first transforming the image from RGB to luminosity, red-green chromaticism, and blue-yellow chromaticism, then running K-means and selecting the largest region [15]. One group found that using a CNN to segment cervical cell cytoplasm and nuclei outperformed traditional filters and classification methods, especially when multiple cells were in the picture [16]. LeNet-5 was used as inspiration for another group's epithelial cell segmentation task [17]; they coped with dataset scarcity by extensively augmenting the dataset with flips and rotations. Similarly, a LeNet-like architecture was also used for segmentation of bones in x-rays using pixel-wise classification [18].

4. Dataset

Kaggle provides a dataset of approximately 1500 labeled cervix images. The images are graphic and may offend some viewers. Included in Figure 2 are several thumbnail-sized versions of the training data. Our training set contained a total of 1481 images (see Table 1 for a breakdown by type), while the test set contains 512 images whose labels are not publicly available. Notice that Type 2 makes up over half of the available training data, while Type 1 makes up only 17%. The images vary in size, but all are color images.

Table 1. Class distribution of the dataset
          Type 1   Type 2   Type 3
Count     251      782      451
Percent   16.9%    52.7%    30.4%

Kaggle provides additional data for training, but the additional data is of low quality. Manual inspection of the data reveals that many images are duplicated, and some images are not even of cervixes (e.g. we found a picture of a woman's face, a picture of a finger, and a picture of some newsprint). We found that training on the additional data did not improve model performance; this is likely because the additional dataset is not drawn from the same distribution as the training dataset. We excluded the additional data from our analysis for this reason.

In an attempt to visualize our dataset, we performed PCA on the raw image values to look for clustering and grouping by type, and also performed t-SNE analysis on the first 3 principal components using scikit-learn [23]. Unsurprisingly, as can be seen in Figures 3 and 4, the cervix types do not fall into clusters based on this analysis, indicating that our input data points resemble each other.

Figure 3. Training data distribution over the first two principal components. No obvious clusters have emerged.
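A minimal sketch of this visualization step with the scikit-learn API is shown below; the image matrix here is a random placeholder standing in for our flattened RGB images, and the plotting calls are omitted.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# X stands in for the matrix of flattened RGB training images (one row per image);
# here it is random placeholder data, not the actual Kaggle images.
X = np.random.rand(1481, 64 * 64 * 3)
labels = np.random.randint(1, 4, size=1481)   # placeholder cervix-type labels

pcs = PCA(n_components=3).fit_transform(X)            # first three principal components
embedding = TSNE(n_components=2).fit_transform(pcs)   # 2-D t-SNE of those components
# pcs[:, :2] and embedding are then scatter-plotted, colored by label (Figures 3 and 4).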
We used either only the original dataset or the original combined with the additional dataset, together with different data augmentation methods (see the Preprocessing section). We then randomly chose 10% of the labeled data for validation and used the rest for training.
Figure 4. t-SNE clustering of first four principal components

5. Methods

5.1. Preprocessing

Since the initial images provided were much too large (more than 2000 pixels on a side) as well as irregularly shaped, the first step was to crop each initial image into a square with side length equal to the shortest initial side. Then, a 160x160 or 224x224 segment was cut from the center of the larger image. The assumption, which turns out to be true most of the time, is that the cervix will be in the center of the image since it is the most important part.

We attempted a variety of dataset augmentation methods to cope with the small dataset. We performed random horizontal and vertical flipping, 90° and 270° rotations, as well as random rotation, random cropping, and random scaling of the inputs.
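A minimal NumPy sketch of the center-crop and flip/rotation augmentations is given below; it assumes the square crop is at least as large as the target size, and it omits the random rotation, cropping, and scaling steps.

import numpy as np

def center_crop(img, out_size=224):
    # Crop to a centered square with side length equal to the shortest side,
    # then cut an out_size x out_size segment from its center.
    h, w, _ = img.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = img[top:top + s, left:left + s]
    off = (s - out_size) // 2
    return square[off:off + out_size, off:off + out_size]

def augment(img, rng=np.random):
    # Random horizontal/vertical flips and 90/270 degree rotations.
    if rng.rand() < 0.5:
        img = img[:, ::-1]
    if rng.rand() < 0.5:
        img = img[::-1, :]
    if rng.rand() < 0.5:
        img = np.rot90(img, k=rng.choice([1, 3]))
    return img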
5.2. Segmentation

Although not the main focus of this project, given the attention paid to segmentation in the literature, we thought it best to make an effort to segment the cervix, which would help with removing extraneous objects and tissues from the input. We took inspiration from [12, 13, 14, 15], who used K-means and KNN to aid in their segmentation processes. Our segmentation pipeline is as follows: first, the image is run through scikit-image's segmentation algorithm, which uses K-means to create roughly K image patches based on proximity and color similarity [21, 24]. Then KNN is used to determine which of these patches are cervical tissue. While [12] used the relatively sophisticated histogram of oriented gradients approach to find the patches closest to pre-segmented cervices, we did not have the luxury of many pre-segmented cervices. Instead, we manually segmented 10 random cervices and computed the average red, green, and blue values, giving us a 3-element feature vector. Then, to decide which of the K patches contained cervical tissue, we performed KNN using the average color vector of each patch as its feature. We took the M patches with the lowest distances, as well as any patches contained within those patches, and used them to create a binary mask. This final modification was necessary because the center of the cervix frequently had a redder color than could be represented by the average color vector, causing it to be mistakenly excluded. In practice, we used 10 for K and 5 for M. The higher K and M are, the higher the chance of including cervical tissue but also extraneous objects.
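The sketch below illustrates this pipeline, with scikit-image's SLIC superpixels [21, 24] standing in for the K-means patch step; the reference color would be the average RGB vector from the 10 hand-segmented cervices, and for simplicity the KNN step is reduced to a nearest-mean-color comparison, with the "contained patches" and glare-handling details omitted.

import numpy as np
from skimage.segmentation import slic

def segment_cervix(img, reference_color, k_patches=10, m_keep=5):
    # Partition the image into roughly k_patches superpixels by color and proximity.
    patches = slic(img, n_segments=k_patches)
    ids = np.unique(patches)
    # Mean RGB color of each patch, compared against the reference cervix color.
    mean_colors = np.array([img[patches == i].mean(axis=0) for i in ids])
    dists = np.linalg.norm(mean_colors - reference_color, axis=1)
    keep = ids[np.argsort(dists)[:m_keep]]        # the M closest patches
    mask = np.isin(patches, keep)                 # binary cervix mask
    return img * mask[..., None], mask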
Some successful and unsuccessful segmentations are shown in the following figures.

Figure 5. A successful segmentation: the initial image, the K-means patches, the KNN binary mask, and the final image. Note that the glove and speculum are removed but the cervix remains.

Figure 6. A semi-successful segmentation: the initial image, the K-means patches, the KNN binary mask, and the final image. Perhaps the average color of the plastic was close enough to cervical tissue for it to be included.

5.3. Model Architectures

We built two models from scratch for this project: CervixNet-1, a shallow net with two convolutional layers, and CervixNet-2, a deeper net with five convolutional layers.
Figure 7. A failed segmentation: the initial image, the K-means patches, the KNN binary mask, and the final image. Because the initial image was so zoomed in, the final segmentation actually lost tissue.

5.3.1 CervixNet-1

Our first attempt was a relatively shallow convolutional network that used a batch normalization layer after every convolutional layer. The network's architecture is described in detail in Figure 8.

This model was inspired by Question 5 in Assignment 2, in which we built a multilayer convolutional net to classify images in the CIFAR-10 dataset. The assignment description recommended a network architecture with a batch normalization layer after every convolutional layer. This confers a number of advantages:

• it reduces the dependence of the model on weight initializations,
• it improves gradient flow through the network, increasing training speed, and
• it acts as a form of regularization.

Figure 8. Model architecture for CervixNet-1

CervixNet-1 trained noticeably faster than the model we submitted for our project milestone, which used fewer batch normalization layers.

CervixNet-1 contains 12,684,876 parameters, and 33.1% of the total parameters are in the two fully connected layers at the output. Batch normalization significantly improved model performance. We did not use pooling in this model, preferring instead to use strided convolutions to decrease the output size.

Because of the gap between our train and validation losses, we incorporated dropout in every convolutional layer to increase regularization. Additionally, we applied L2 regularization to the last two fully connected layers to discourage overfitting to the features learned by the previous convolutional layers.
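The exact layer counts and filter sizes are specified in Figure 8 and are not reproduced here; the following tf.keras sketch only illustrates the conv → batch norm → ReLU → dropout pattern with strided convolutions and L2-regularized fully connected layers, using placeholder filter counts rather than the actual CervixNet-1 configuration.

import tensorflow as tf

def conv_bn_block(x, filters, stride, drop_rate=0.25):
    # Strided convolution + batch norm + ReLU + dropout (the CervixNet-1 pattern).
    x = tf.keras.layers.Conv2D(filters, 3, strides=stride, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('relu')(x)
    return tf.keras.layers.Dropout(drop_rate)(x)

inputs = tf.keras.Input(shape=(160, 160, 3))
x = conv_bn_block(inputs, 32, stride=2)   # strided convolutions instead of pooling
x = conv_bn_block(x, 64, stride=2)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l2(1e-2))(x)
outputs = tf.keras.layers.Dense(3, activation='softmax',
                                kernel_regularizer=tf.keras.regularizers.l2(1e-2))(x)
model = tf.keras.Model(inputs, outputs)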

5.3.2 CervixNet-2

CervixNet-2 was intended to be a compromise between CervixNet-1 and the larger pretrained models and is described in Figure 9. It features more convolutional layers and uses max pooling instead of strided convolutions to reduce the spatial dimensions. It followed the general design principle of first building up information with convolutional layers before losing information with pooling, to avoid representational bottlenecks.
Also, it made use of a 3x3 filter with a stride of two followed by another 3x3 filter with a stride of one, which is a computationally cheaper way of boosting the receptive field size than using a single larger filter. Both of these design principles were recommended in [20], a paper which analyzed the Inception architecture.

Figure 9. Model architecture for CervixNet-2

5.3.3 Pretrained Models

Using pretrained models as a basis for training can help jump-start the training process and take advantage of known successful model architectures. We applied the following pretrained models to our problem, both of which have been highly successful on the ImageNet dataset:

• ResNet v1 [19]
• Inception v2 [20]

In many problems, it is only necessary to retrain the final fully-connected layers of the model. This is because the pretrained models are already tuned to extract meaningful features from their inputs, and the job of the final fully-connected layers is to decide which of these meaningful features are relevant to the current classification problem. However, in our case, we found that the pretrained models performed poorly unless the entire network was retrained. This may be because our dataset has a much lower dimensionality than ImageNet, on which the pretrained models had been trained. This means the models produce similar features when run on our images, so all of our input images look "the same" in feature space and classification accuracy is poor. We must retrain the entire model to learn a new set of features that better represents the differences between images in our dataset.

Over-fitting becomes a significant risk when retraining an entire pretrained model, since our dataset is several orders of magnitude smaller than the dataset used to train the pretrained models (our dataset contains ∼10^3 images, whereas ImageNet contains ∼10^7).
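Schematically, the trade-off between retraining only the head and retraining the whole network looks like the following sketch; it uses the tf.keras applications API with a ResNet50 backbone as a stand-in for the TF-Slim checkpoints [25] we actually loaded, so it is illustrative rather than the code we ran.

import tensorflow as tf

# ResNet50 with ImageNet weights stands in for the 101-layer TF-Slim checkpoint.
base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      pooling='avg', input_shape=(224, 224, 3))
head = tf.keras.layers.Dense(3, activation='softmax')(base.output)
model = tf.keras.Model(base.input, head)

# Option 1: train only the new fully-connected head on top of frozen features.
base.trainable = False
# Option 2 (what worked better for us, at greater risk of over-fitting):
# base.trainable = True   # retrain the entire network

model.compile(optimizer=tf.keras.optimizers.SGD(0.003, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])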

ResNet v1. ResNet is an unusually deep neural network (containing hundreds of layers) that aims to learn "residual functions" with respect to the layer inputs, instead of learning unreferenced functions like most other models. It does this by "shortcutting" the layer inputs to the layer outputs, so that the output of a layer is its input plus some "residual function" learned by the model. The shortcuts provide an avenue for uninterrupted gradient flow and allow much deeper models to be trained than with conventional architectures.
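A minimal sketch of such a block (generic, not the exact ResNet v1 bottleneck design) is:

import tensorflow as tf

def residual_block(x, filters):
    # y = x + F(x): the shortcut lets gradients flow uninterrupted past the convolutions.
    # Assumes x already has `filters` channels; otherwise a 1x1 convolution would
    # be needed to project the shortcut.
    f = tf.keras.layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    f = tf.keras.layers.Conv2D(filters, 3, padding='same')(f)   # the residual F(x)
    y = tf.keras.layers.Add()([x, f])                           # shortcut connection
    return tf.keras.layers.Activation('relu')(y)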

Inception-v2. We chose the Inception architecture [20] because of its success in transfer learning for skin cancer classification [9]. The main advantage of the Inception architecture is that it examines the input at multiple granularities by applying filters of different sizes and spans in parallel and concatenating the results together, as shown in the figure below. By concatenating these different granularities, less information is lost as the depth increases. As part of our experimentation with this model, we trained many Inception-v2 nets with different dropout and L2 regularization settings, as described in the Experiments section. Additionally, we varied the number of iterations spent training the full net versus training only the last fully connected layer. We used the weights provided in TensorFlow-Slim [25].

Figure 10. One kind of Inception module. [20]
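A simplified sketch of such a module, with illustrative branch widths rather than the actual Inception-v2 values, is:

import tensorflow as tf

def inception_module(x, f1=64, f3=128, f5=32, fp=32):
    # Examine the input at several granularities in parallel, then concatenate.
    b1 = tf.keras.layers.Conv2D(f1, 1, padding='same', activation='relu')(x)
    b3 = tf.keras.layers.Conv2D(f3, 3, padding='same', activation='relu')(x)
    b5 = tf.keras.layers.Conv2D(f5, 5, padding='same', activation='relu')(x)
    bp = tf.keras.layers.MaxPooling2D(3, strides=1, padding='same')(x)
    bp = tf.keras.layers.Conv2D(fp, 1, padding='same', activation='relu')(bp)
    return tf.keras.layers.Concatenate()([b1, b3, b5, bp])   # channel-wise concatenation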
6. Experiments

6.1. CervixNet-1

With CervixNet-1, we experimented with tuning the:

• learning rate,
• L2 regularization strength,
• dropout probability, and
• number of filters in each layer.

We also experimented with training the net on the entire provided dataset (including the duplicate and incorrect images) instead of only the original, higher-quality dataset. Below are the learning curves for some experiments with different datasets.

Figure 11. Training loss curves for CervixNet-1. "Original" means the original data; "full" means the original plus the additional data.

We can see that the validation loss when trained on the original data goes down much faster than when trained on the full dataset. This makes sense since the original dataset is much cleaner, and it is therefore easier to infer the validation data from the training data.

Table 2. Results of some of the best experiments on CervixNet-1
Dataset                   Hyperparameters                 Val loss   Test loss
orig augmented, 160x160   keep=0.8, l2=0.0, lr=0.0001     0.8242     0.86433
full augmented, 160x160   keep=0.85, l2=0.01, lr=0.0001   1.0821     0.87287
full augmented, 160x160   keep=1.0, l2=0.0, lr=0.0001     0.8412     0.89765
full augmented, 160x160   keep=0.8, l2=0.0, lr=0.0001     0.8926     0.898

Figure 12. Loss curves for the best performing model.

6.2. CervixNet-2

We pursued this model after seeing the results from CervixNet-1 and Inception v2. The best model benefited from a heavy dropout of 0.5 and a modest L2 regularization of 0.1. We found that batch norm actually harmed this model and that it attained its lowest loss without it. We used a learning rate of 0.0012, which we found via grid search. We used the RMSProp optimizer and annealed the learning rate by a factor of 0.95 every 400 iterations [22].
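That optimizer setup can be sketched as follows (tf.keras API shown for clarity; the surrounding training loop is omitted):

import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0012,   # found via grid search
    decay_steps=400,                # anneal every 400 iterations
    decay_rate=0.95,
    staircase=True)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=schedule)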
This model turned out to be our best performing model, giving us a test loss of 0.81768.

6.3. ResNet v1

We used the 101-layer architecture presented in [19] and the pretrained model weights provided in the accompanying GitHub repository. We retrained the entire network on the cervix dataset with:

• 500 gradient update steps,
• an initial learning rate of 0.003,
• an annealing factor of 0.4 every 100 steps, and
• an L2 regularization strength of 0.0001.

In the original paper, they use a higher initial learning rate, a stricter annealing schedule, and the same L2 regularization penalty, but they train the model for ∼10^4 iterations instead of ∼10^3. To run the optimization step, we followed the paper and used SGD with momentum, with a momentum weight of 0.9.
To choose the learning rate and regularization strength, we trained the model for 50 iterations at 5 different learning rates distributed logarithmically from 10^-4 to 10^-2 and regularization strengths of 10^-3, 10^-4, and 10^-6. These cross-validation values were chosen based on the reported hyperparameter values in the paper. To preprocess the data, images are randomly dilated (i.e. resized), randomly cropped, and randomly flipped [19]. This both augments the dataset and prevents over-fitting to irrelevant image features related to spatial location or image size.

Figure 13 shows a representative training loss curve for the ResNet training. The learning rate was annealed by a factor of 0.4 at iterations 100 and 200, and the logging frequency was reduced by a factor of 5 at iteration 230.

Figure 13. Training loss curve for the ResNet v1 model.

Though the training loss decreased appreciably over the first hundred iterations, the loss begins to plateau after iteration 100. Due to time and resource constraints, the ResNet model could only be trained for a limited number of gradient steps. It is likely that model performance could have been improved by training for more iterations. The decrease in loss over the first 100 iterations is likely due to the last fully connected layer training to fit the data, which yields significant progress relatively quickly. For the rest of the time, the entire model is training, and it will likely take on the order of 10^4 iterations to fully converge.

By examining the predictions ResNet makes, it is clear that the model has not yet converged. Table 3 shows the average softmax probability assigned to images in the validation data set and compares it with the actual prevalence of each class in the entire data set. For an accurate model, these two statistics should be the same in expectation. Clearly, for this model, they are not: the ResNet model predicts class 1 much more often than it should and hardly ever predicts class 3, which represents almost a third of the dataset.

Figure 14. Confusion matrix for 32 validation data points with the ResNet v1 model

Table 3. ResNet predicted class distribution vs. actual class distribution
                                     Type 1   Type 2   Type 3
Average predicted class prevalence   0.499    0.454    0.046
Actual class prevalence              0.169    0.527    0.304
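This check is straightforward to reproduce from the validation-set softmax outputs; a small NumPy sketch (with probs the N x 3 matrix of predicted probabilities and labels the integer class labels) is:

import numpy as np

def prevalence_check(probs, labels, num_classes=3):
    # Mean predicted probability per class vs. the empirical class frequencies.
    avg_predicted = probs.mean(axis=0)
    actual = np.bincount(labels, minlength=num_classes) / len(labels)
    return avg_predicted, actual

# For a well-calibrated model the two vectors agree in expectation; for our ResNet
# run they differ sharply (0.499/0.454/0.046 vs. 0.169/0.527/0.304 in Table 3).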

6.4. Inception

We had great difficulty attaining good results using the pre-trained Inception v2 network. To try to get different results, we changed the dropout keep probability, the L2 regularization, the iteration at which we switched from training the full net to training only the last layer, and which dataset we used (segmented or unsegmented cervices). The results are shown in Table 4. To address the large discrepancy between train and validation losses, which can be seen in Figure 15, we tested heavy dropout and L2 regularization. The same learning rate of 0.001 was used for all experiments because the train loss dropped quickly enough. TensorFlow's Momentum Optimizer with a learning rate of 0.001 and a momentum of 0.9 was used. The learning rate was annealed by a factor of 0.95 every 400 steps, and gradients were clipped so that the maximum global norm was 2. The poor performance of transfer learning with Inception v2 can be ascribed to the ImageNet pictures being too different from cervices and to the fact that the net is too powerful for this dataset. Despite heavy regularization, there was still a massive gap between the training and validation losses, indicating that the net was learning noise.
Table 5. Performance statistics for the best performing model of each architecture
Model          Validation Loss   Validation Accuracy   Test Loss
CervixNet-1    0.8242            61.7%                 0.86433
CervixNet-2    0.83758835        65.1%                 0.81768
ResNet v1      0.8971841         62.3%                 1.08586
Inception v2   0.96845           52.6%                 0.95646

Figure 15. Train (solid) and validation (dashed) loss plots for the different experiments listed in Table 4.

Table 4. Performance statistics for the Inception v2 experiments
Exp. Num   Dropout   L2 Reg.   Switch   Dataset   Test Loss
1          0.5       1.0       1000     Seg.      .95655
2          0.4       10.0      Never    Unseg.    .97413
3          0.5       1         Never    Unseg.    .96866
4          0.5       10.0      Never    Unseg.    .95646
5          0.5       5         500      Unseg.    .99432
6          0.5       5         1000     Unseg.    .97094

Figure 16. Confusion matrix for 100 validation data points on CervixNet-2.

Figure 17. Saliency maps for CervixNet-1 on several validation images.

7. Results

Table 5 summarizes the best results achieved by each of our four model architectures. Figure 16 shows a confusion matrix produced by our best model on 100 validation data points. Figure 17 shows saliency maps for several validation images. Our model assigns a high probability to the correct class for the left three images and a low probability to the correct class for the right three images. The maps indicate that the model is not successfully identifying the important features of the cervix, since nearly all of the pixels in each image impact the gradient, and the saliency maps for the correctly classified and incorrectly classified images look nearly indistinguishable.
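A standard way to compute such saliency maps is to take the gradient of the predicted class score with respect to the input pixels; the following tf.GradientTape sketch illustrates the idea, though it is not necessarily the exact code we used.

import tensorflow as tf

def saliency_map(model, image):
    # Gradient of the top class score with respect to the input pixels,
    # reduced over color channels to give one saliency value per pixel.
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scores = model(x)
        top_score = tf.reduce_max(scores, axis=1)
    grads = tape.gradient(top_score, x)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]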
8. Conclusion and Future Work

The cervix classification problem is a challenging one. Our data is limited and of low quality. There are leaks (images in the test set that are also in the given train set with labels) and inconsistencies (duplicated images with different labels). Additionally, we didn't have enough background in image processing to really take advantage of the data we had. The results we got after cleansing and augmenting the data are only small improvements over the results obtained with the original, un-augmented data. Furthermore, we struggled with understanding the correlation between the loss on our validation set and the loss on Kaggle's test set. For some models, lower validation loss does lead to lower test loss, but for other models, lower validation loss leads to higher test loss.

If we had more time to work on this project, we would try more and different data preprocessing techniques, such as training a dilated convolutional network to localize the cervix: for each pixel, the network would determine whether that pixel belongs to the cervix or not. We spent some time getting rid of low-quality images, but maybe we should have
spent more time rigorously cleansing the additional data.

Batch normalization proved to increase performance on some models at the cost of lower speed. We could have tried weight normalization instead.

Our best training cross-entropy loss score of 0.817 puts us within the top 200 submissions on Kaggle. Given more time to experiment and refine, we expect this score could be improved.

We learned a lot from the project, both about image processing and deep neural networks. This was also the first Kaggle competition for all of our team members, and we all thought it was a fun experience. This motivates us not only to do more Kaggle competitions in the future, but also to apply what we've learned in class to real-world problems.

References

[1] Ioffe, S., and Szegedy, C. "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift." arXiv preprint arXiv:1502.03167v3 (2015).
[2] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15 (2014): 1929-1958.
[3] MobileODT, Intel, and Kaggle Inc. "Intel & MobileODT Cervical Cancer Screening." www.kaggle.com/c/intel-mobileodt-cervical-cancer-screening (2017).
[4] Xu, Tao, et al. "Multimodal Deep Learning for Cervical Dysplasia Diagnosis." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer International Publishing, 2016.
[5] Sokouti, Babak, Siamak Haghipour, and Ali Dastranj Tabrizi. "A framework for diagnosing cervical cancer disease based on feedforward MLP neural network and ThinPrep histopathological cell image features." Neural Computing and Applications 24.1 (2014): 221-232.
[6] Sukumar, P., and R. K. Gnanamurthy. "Computer aided detection of cervical cancer using PAP smear images based on hybrid classifier." International Journal of Applied Engineering Research 10.8 (2015): 21021-32.
[7] Kooi, Thijs, et al. "Large scale deep learning for computer aided detection of mammographic lesions." Medical Image Analysis 35 (2017): 303-312.
[8] Kooi, Thijs, et al. "Discriminating solitary cysts from soft tissue lesions in mammography using a pretrained deep convolutional neural network." Medical Physics 44.3 (2017): 1017-1027.
[9] Esteva, Andre, et al. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542.7639 (2017): 115-118.
[10] Park, Sun Young, and Dusty Sargent. "Colonoscopic polyp detection using convolutional neural networks." SPIE Medical Imaging. International Society for Optics and Photonics, 2016.
[11] Shen, Wei, et al. "Multi-scale convolutional neural networks for lung nodule classification." International Conference on Information Processing in Medical Imaging. Springer International Publishing, 2015.
[12] Song, Dezhao, et al. "Multimodal Entity Coreference for Cervical Dysplasia Diagnosis." IEEE Transactions on Medical Imaging 34.1 (2015): 229-245.
[13] Li, Wenjing, et al. "Automated image analysis of uterine cervical images." Medical Imaging. International Society for Optics and Photonics, 2007.
[14] Srinivasan, Yeshwanth, et al. "A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features." Medical Imaging. International Society for Optics and Photonics, 2005.
[15] Das, Abhishek, Avijit Kar, and Debasis Bhattacharyya. "Elimination of specular reflection and identification of ROI: The first step in automated detection of Cervical Cancer using Digital Colposcopy." Imaging Systems and Techniques (IST), 2011 IEEE International Conference on. IEEE, 2011.
[16] Song, Youyi, et al. "A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei." Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE. IEEE, 2014.
[17] Malon, Christopher, et al. "Identifying histological elements with convolutional neural networks." Proceedings of the 5th International Conference on Soft Computing as Transdisciplinary Science and Technology. ACM, 2008.
[18] Cernazanu-Glavan, Cosmin, and Stefan Holban. "Segmentation of bone structure in X-ray images using convolutional neural network." Adv. Electr. Comput. Eng. 13.1 (2013): 87-94.
[19] He, K., et al. "Deep Residual Learning for Image Recognition." arXiv preprint arXiv:1512.03385 (2015).
[20] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. "Rethinking the Inception architecture for computer vision." arXiv preprint arXiv:1512.00567 (2015).
[21] Achanta, Radhakrishna, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Suesstrunk. "SLIC Superpixels Compared to State-of-the-art Superpixel Methods." IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), May 2012.
[22] Hinton, Geoffrey, Nitish Srivastava, and Kevin Swersky. "Neural Networks for Machine Learning, Lecture 6a: Overview of mini-batch gradient descent." (2012).
[23] Pedregosa, Fabian, et al. "Scikit-learn: Machine learning in Python." Journal of Machine Learning Research 12 (2011): 2825-2830.
[24] Van der Walt, Stefan, et al. "scikit-image: image processing in Python." PeerJ 2 (2014): e453.
[25] Abadi, Martín, et al. "TensorFlow: Large-scale machine learning on heterogeneous systems." 2015. Software available from tensorflow.org.
