
sensors

Article
Breast Cancer Histopathological Images Segmentation Using
Deep Learning
Wafaa Rajaa Drioua 1, *, Nacéra Benamrane 1 and Lakhdar Sais 2

1 Laboratoire SIMPA, Département d'Informatique, Université des Sciences et de la Technologie d'Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria; [email protected]
2 Centre de Recherche en Informatique de Lens, CRIL, CNRS, Université d'Artois, 62307 Lens, France; [email protected]
* Correspondence: [email protected]

Abstract: Hospitals generate a significant amount of medical data every day, which constitutes a very rich database for research. Today, this database is still not fully exploitable: its valorization requires annotating the images, which remains a costly and difficult task. Thus, the use of an unsupervised segmentation method could facilitate the process. In this article, we propose two approaches for the semantic segmentation of breast cancer histopathology images. On the one hand, an autoencoder architecture for unsupervised segmentation is proposed, and on the other hand, an improved U-Net architecture for supervised segmentation is proposed. We evaluate these models on a public dataset of histological images of breast cancer. In addition, the performance of our segmentation methods is measured using several evaluation metrics such as accuracy, recall, precision and F1 score. The results are competitive with those of other modern methods.

Keywords: semantic segmentation; histopathology; breast cancer; U-Net; convolutional autoencoder

Citation: Drioua, W.R.; Benamrane, N.; Sais, L. Breast Cancer Histopathological Images Segmentation Using Deep Learning. Sensors 2023, 23, 7318. https://doi.org/10.3390/s23177318

Academic Editor: Sheryl Berlin Brahnam

Received: 26 July 2023; Revised: 10 August 2023; Accepted: 18 August 2023; Published: 22 August 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
Breast cancer is currently the most common cancer in women and is also the principal cause of death due to cancer in women [1]. Breast cancer tops the list of cancer types prevalent in Algeria, with more than 14,000 new cases recorded each year (https://www.aps.dz/sante-science-technologie/128390-cancer-en-algerie-65-000-nouveaux-cas-depuis-debut-2021, accessed on 1 December 2020). The best chances of a cure rest on an early diagnosis, which in turn allows for treatment that is generally more effective, less complex and less expensive.

Medical imaging has made significant progress in recent years. The examinations used to detect breast cancer are mammography [2,3], ultrasound [4] and Magnetic Resonance Imaging (MRI) [5]. In the case of the presence of a doubtful or suspicious lesion of cancer, either a nodule [6] or a microcalcification [7], a biopsy is recommended. This biopsy confirms the cancer diagnosis, type and stage [8].

The digitization of tissue samples makes it possible to convert microscopic slides into histopathological images. The automated segmentation of cells from histopathological images of breast cancer is a crucial step for the analysis of cell morphology, which is essential for the diagnosis of different pathologies, particularly in oncology [9].

Breast cancer diagnosis has evolved through multiple research techniques over the years, such as segmentation, detection, and classification. The role of detection is to separate and identify the different regions of the image. The most frequently used object detection models are Faster R-CNN (convolutional neural network) [10], Mask R-CNN [11], and YOLO (You Only Look Once) [12].

In the literature, a number of supervised research works have been conducted that apply morphological operations for the segmentation of histopathological images [13].

Various hybridized models have been presented in the literature [14,15]. Qu et al. pro-
posed a pixel-wise classifier Support Vector Machine (SVM)-based method for tumor
matrix segmentation and a marker-driven watershed-based method for nuclei segmen-
tation [14]. Rashmi et al. developed a segmentation technique combining a multilayer
perceptron (MLP) and SVM for the segmentation of cell images of breast cancer [15].
Abdolhoseini et al. [16] developed a segmentation technique combining multilevel thresh-
olding and the watershed algorithm to separate clustered nuclei.
Faridi et al. [17] presented an automated system for detecting and segmenting cancer
cell nuclei that is partially different from the system for segmenting healthy cell nuclei
using the level-set algorithm. Jian et al. [18] utilized a thresholding technique using OTSU.
Therefore, artificial intelligence-based applications, especially deep convolutional neural
networks, have been used [19].
In this context, several works on the segmentation of histopathological images have been
proposed. Sahasrabudhe et al. [20] proposed a self-supervised approach for the segmentation
of nuclei without annotations. They used a fully convolutional attention network based
on advanced filters to generate segmentation maps for nuclei in the image space. Kate
et al. [21] developed a model that is based on the particle swarm optimizer (PSO) for the
segmentation of breast cancer histopathology images. Shu et al. [22] presented a method
to segment highly clustered overlapping cores. The proposed method uses a combined
global and local thresholding method to extract foreground regions. Xu et al. [23] pro-
posed an unsupervised method, termed the tissue cluster level graph cut, for segmenting
histological images into meaningful compartments (tumor or non-tumor regions). This
approach has been evaluated on histological image sets for necrosis and melanoma de-
tection. Khan et al. [24] proposed a framework for unsupervised tumor segmentation
based on stromal organization, which was divided into two types: hypocellular stroma and
hypercellular stroma. Evaluation of the algorithm was performed using H&E-stained breast
histology images. Fouad et al. [25] presented an alternative data-independent framework
based on the unsupervised segmentation of oropharyngeal cancer tissue micro-arrays from
histological images.
Following the great success of convolutional neural networks in image analysis, we
used deep learning architectures to study the problem of nuclei segmentation in breast
cancer histopathological images. Firstly, we presented an unsupervised architecture using
an autoencoder to avoid manually calculating the characteristics of the k-means clustering
input. Secondly, we developed an improved U-Net for semantic segmentation.
The paper is organized as follows. Section 2 presents related work. Section 3 describes
the methodology and the different architectures used in this work. Section 4 presents the
results and discussion. The comparative study is described in Section 5, which is followed
by the conclusion and some future perspectives in Section 6.

2. Related Works
In recent years, deep learning CNNs have dominated many areas of computer vision
applications. This section will review the state of the art of CNN-based methods for nuclei
segmentation from histopathological images.
Chan et al. [26] proposed a method for the semantic segmentation of histological
tissue (HistoSegNet). The authors trained a convolutional neural network on patch anno-
tations and inferred gradient-weighted class activation maps with average overlapping
predictions. Cui et al. [27] introduced a nucleus-boundary model, which used a fully
convolutional neural network to simultaneously predict the nucleus and its boundaries.
The experimental results show that the proposed method outperformed prior state-of-the-
art methods. Paramanandam et al. [28] proposed a segmentation algorithm for detecting
single nuclei from breast histopathology images stained with hematoxylin and eosin. The
recognizer estimates a nuclei saliency map via boundary extraction with a Loopy Back
Propagation (LBP) algorithm on a Markov random field. Naylor et al. [29] formulated the
segmentation problem as a distance map regression problem and demonstrated the
performance of the method against other CNN-based methods. Veta et al. [30] de-
scribed the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13).
Zemouri et al. [31] proposed a breast cancer computer-aided diagnosis based on construc-
tive deep neural network and joint variable selection. This contribution outperformed
the use of the deep learning architecture alone. Jafarbiglo et al. [32] presented an au-
tomatic diagnostic system that classifies histopathological images based on the nuclear
atypia criterion using a CNN-based method. Kang et al. [33] applied four parallel back-
bone nets, which were merged by the attention generation model. Kaushal et al. [34]
recently summarized techniques for breast cancer diagnosis using histopathological images.
Wahab et al. [35] used the concept of transfer learning by first using a pre-trained convolu-
tional neural network for segmentation and then another hybrid-CNN for the classification
of mitoses.
Qu et al. [36] used the fully connected conditional random field loss for further refine-
ment. The model did not introduce extra computational complexity during inference. Xu
et al. [37] presented a deep convolutional neural network (DCNN)-based feature learning
to automatically segment or classify epithelial and stromal regions from digitized tumor
tissue microarrays and then compared DCNN-based models with three handcraft feature
extraction-based approaches. Sohail et al. [38] proposed a CNN-based deep multiphase
mitosis detection framework for identifying mitotic nuclei in breast cancer histopathology
images. The authors developed an automatic label refiner to render weak labels with
semantic information for the purpose of training deep CNN. Cao et al. [39] presented an
automated method for breast cancer scoring in histopathology images based on computer-
extracted pixel, object, and semantic-level features derived from CNN.
Ozturk et al. [40] proposed an automatic semantic segmentation based on cell type
using the structure of novel deep convolutional networks (DCNNs). The authors presented
semantic information on four classes, including white areas in the whole-slide image, tissue
without cells, tissue with normal cells and tissue with cancerous cells. Kaushal et al. [41]
compared various state-of-the-art segmentation techniques for extracting cancer cells in
histopathology images using the triple-negative breast cancer dataset. Gour et al. [42]
developed a deep residual neural network model (DeepRNNetSeg) for automatic nucleus
segmentation on histopathological breast cancer images.
Deep learning approaches, in particular via auto-encoding architectures, make it
possible to avoid manually defining the characteristics by computing a compressed rep-
resentation of an image in a latent space via applying convolutional filters. Most of the
work has been used to perform segmentation or nucleus detection. Xu et al. [43] presented
a stacked sparse autoencoder for the efficient detection of nuclei from high-resolution
histopathological images of breast cancer. Raza et al. [44] summarized various unsuper-
vised deep learning models, tools, and benchmark datasets applied to medical image
analysis. Janowczyk et al. [45] presented stain normalization using sparse autoencoders
(StaNoSA) to normalize the color distribution of the test image. The architecture was applied
on digital histopathology slides. Hou et al. [46] developed an unsupervised detection
network by exploiting the properties of histopathological images. They identified nuclei in
image patches in tissue images and encoded them into a feature map encoding the location
of the nuclei.

3. Methodology
The first method is designed for the unsupervised segmentation of overlapping nuclei,
and the second method aims to segment regions of the nuclei. In this section, we present
our two methods.

3.1. Segmentation-Based Deep Learning Cluster Architecture

Our framework for learning the neural network parameters and cluster assignments is based on a deep learning cluster architecture, as shown in Figure 1.

Figure 1. Steps of unsupervised segmentation method.

In this section, we investigate the potential of using convolutional autoencoders for clustering histopathological images. As shown in Figure 2, a convolutional autoencoder (CAE) is a deep convolutional neural network consisting of two parts: an encoder and a decoder. The main purpose of a CAE is to minimize a reconstruction loss, a function evaluating the difference between the input and the output of the CAE, as shown in Figure 3. Once this function is minimized, it can be assumed that the encoder part establishes a proper abstract of the input data in latent space, because the decoder part is able to reconstruct a strongly similar copy of it from this encoded representation. The CAE is described in Algorithm 1.

Algorithm 1
Input: Image set X = {x}, Network Net (N, C, Z)
Initialize the network parameters N, C, Z
Repeat
    Update network parameters by minimizing reconstruction loss until convergence
For each image x in X, do
    Generate reconstruction image xR from Net (x, N, C, Z)
End for
Output: reconstruction image xR, representation in latent space Z
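To make Algorithm 1 and the subsequent clustering step concrete, the following is a minimal sketch in Keras and scikit-learn. It follows the layer pattern described in this section (convolution, normalization, ReLU activation, 2 × 2 max pooling), but the filter counts, latent size, and patch size are illustrative assumptions, since the values of N, C, and Z are not fixed in the text.

import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.cluster import KMeans

def encoder_block(x, filters):
    # Convolution followed by normalization, ReLU, and 2x2 max pooling (Section 3.1)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.MaxPooling2D((2, 2))(x)

def build_cae(input_shape=(32, 32, 3)):
    inp = layers.Input(shape=input_shape)
    x = encoder_block(inp, 32)            # convolutions of decreasing spatial size...
    z = encoder_block(x, 16)              # ...down to the information bottleneck Z
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(z)
    x = layers.UpSampling2D((2, 2))(x)    # decoder: increasing size back to the input
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D((2, 2))(x)
    out = layers.Conv2D(input_shape[-1], 3, padding="same", activation="sigmoid")(x)
    cae = models.Model(inp, out)
    cae.compile(optimizer="adam", loss="mse")     # reconstruction loss
    encoder = models.Model(inp, z)                # encoder alone, for latent extraction
    return cae, encoder

# Algorithm 1: minimize the reconstruction loss on a set of unlabeled patches X in [0, 1]
# cae, encoder = build_cae()
# cae.fit(X, X, epochs=100, batch_size=64)
# Clustering branch: encode the patches, then K-means with two clusters (tumor/non-tumor)
# latents = encoder.predict(X).reshape(len(X), -1)
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(latents)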

Figure 2 shows our proposed network architecture. The autoencoder is used for accurate image segmentation. We trained this autoencoder using the encoder's weights and added another branch for clustering.

Figure 2. Proposed approach for unsupervised segmentation.

Figure 3. Proposed approach for unsupervised segmentation.

The encoder consists of an input layer (of the size of the input image), which is connected to N convolution layers of decreasing size, down to an information bottleneck of size Z, which is called the latent space. The latent space is connected to a series of N convolution layers of increasing size until reaching the size of the input image. This second part is called the decoder. Each convolution layer is composed of C convolutions and is followed by three other layers: a normalization, an activation function (ReLU), and a max pooling of size (2,2).

The first step is to train an autoencoder using a set of unlabeled images. An autoencoder consists of an encoder network and a decoder network. The encoder compresses the input image into a lower-dimensional latent representation, while the decoder reconstructs the image from the latent representation. The autoencoder is trained to minimize the difference between the original input and the reconstructed output, effectively learning to capture meaningful features in the data.

Once the autoencoder is trained, the encoder network is used to extract the latent space representation of each image in the dataset. These latent representations capture the essential characteristics of the images in a compact form.

To perform clustering, the trained CAE is used to encode each part of the image. The coded representation in the latent space is then given as input to a K-means clustering algorithm, which assigns it a cluster. One of the main challenges of unsupervised clustering is to find the correspondence between a cluster and a class. In our case, the problem is expressed as a two-class problem: tumor or non-tumor.

The next step is to apply the K-means clustering algorithm to the extracted latent representations. K-means clustering is an iterative algorithm that aims to partition data points into two clusters. The algorithm finds the cluster centers that minimize the sum of squared distances between the data points and their respective cluster centers. In this case, the latent representations serve as the input data for the K-means algorithm.

Once the K-means algorithm has converged, each latent representation is assigned to one of the two clusters based on its proximity to the cluster centers. This assignment determines the segmentation labels for each input image.

To perform segmentation on a new image, it is first passed through the trained encoder to obtain its latent space representation. Then, the representation is assigned to one of the two clusters using the K-means algorithm. Finally, the corresponding cluster label is assigned to each pixel in the image, resulting in a segmented image.

3.2. Segmentation Based on Improved U-Net

A framework is proposed to automatically segment nuclei regions and overlapping nuclei regions.

In order to train our model, we need a significant amount of data. The quantity and quality of our dataset play an important role in the development of a good model, so data augmentation is of great use to us. Data augmentation is based on the principle of artificially augmenting our data by applying transformations, which increases the diversity of the data and, thus, the learning domain of our model. Several techniques are most often used, such as rotation, saturation, brightness, and noise. In this work, we used the rotation technique with different angles, as seen in Figure 4; this step was carried out with the Roboflow library [47], which provided most of the tools needed to convert raw images into a custom-trained computer vision model and deploy it for use in our applications. A small data-preparation sketch is given after Figure 5.

Figure 4. Data augmentation.

In histopathological images, semantic segmentation aims to label each pixel with one of two diagnoses (cancerous/non-cancerous). These methods include sliding-window-based methods that train and predict at the pixel level of the sliding patch to obtain predictions.

To validate the proposed model, firstly, we prepared the data (data preprocessing). This consisted of dividing each image into patches of size 32 × 32 for the U-Net model. This operation classified our data into two groups, cancer or non-cancer, based on the ground truth. Figure 5 illustrates an example of this processing.

Figure 5. Data preprocessing.
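A minimal sketch of this data-preparation stage follows, covering both steps: rotation augmentation (written independently of the Roboflow tooling) and the 32 × 32 patch division of Figure 5. The rotation angles and the patch-labeling rule are illustrative assumptions, as the text does not specify them.

import numpy as np
from scipy.ndimage import rotate

def augment_rotations(images, masks, angles=(90, 180, 270)):
    # Rotation with different angles: the augmentation technique used in this work
    aug_x, aug_y = list(images), list(masks)
    for angle in angles:
        for img, msk in zip(images, masks):
            aug_x.append(rotate(img, angle, reshape=False, order=1))
            aug_y.append(rotate(msk, angle, reshape=False, order=0))
    return np.array(aug_x), np.array(aug_y)

def extract_patches(image, mask, size=32, tumor_fraction=0.5):
    # Divide a 512x512 image into 32x32 patches and label each one from the ground truth
    patches, labels = [], []
    for i in range(0, image.shape[0] - size + 1, size):
        for j in range(0, image.shape[1] - size + 1, size):
            patches.append(image[i:i + size, j:j + size])
            # Assumed rule: a patch counts as "cancer" when the ground truth is mostly tumor
            labels.append(int(mask[i:i + size, j:j + size].mean() > tumor_fraction))
    return np.array(patches), np.array(labels)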
The U-Net was proposed by [48] for the segmentation of biomedical images, where training data are often scarce. The encoder network and decoder network form a U-shaped architecture. In the encoder path, convolutional/max pooling layers reduce spatial information while increasing feature information. In the decoder path, feature maps and spatial information are combined with high-resolution features from the encoder path through a series of up-convolutions and concatenations. The proposed method consists of re-designing a U-Net network structure with new layers. To validate the proposed model, firstly, we prepared the data (data preprocessing), and then, the model was trained. As the output, the final vector allowed for a fine-grained prediction of binary classes (tumor or non-tumor).

The U-Net network has a deconvolution part symmetrical to the convolution part, which makes it possible to obtain feature maps whose sizes are compatible between the two parts of the network. Thus, the feature maps extracted in the convolution part can be concatenated to those reconstructed in the deconvolution part, transferring more important spatial information and allowing for better reconstruction. The addition of new layers in the encoder and decoder parts (as seen in Figure 6) allows for better collaboration between the different feature maps and improves the recognition capacity of the network.

Figure 6. U-Net architecture.

Encoder: The encoder part of a U-Net typically consists of convolutional layers, max pooling, and non-linear activation functions. The number of these layers can be adapted to specific requirements by, for example, increasing the depth or changing the filter sizes.

Decoder: The decoder part of U-Net performs up-sampling and concatenates the corresponding encoder features with the up-sampled features. The goal is to recover the spatial resolution lost during the encoding process. We can modify the decoder architecture by changing the up-sampling method, adjusting the number and type of layers, or using skip connections for better feature fusion.

Output layer: The U-Net typically uses a 1 × 1 convolutional layer with a softmax activation function to generate the final pixel-wise segmentation output.

Skip connections: U-Net uses skip connections to propagate information from the encoder to the decoder, aiding in better feature fusion.

Loss function: An appropriate loss function is selected for semantic segmentation. A commonly used loss function is pixel-wise softmax loss. The choice of loss function depends on the specific characteristics of the dataset. The softmax is defined as

p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x))   (1)

where p_k(x) denotes the activation in feature channel k at the pixel position x.
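A compact Keras sketch of such a U-Net is given below. It is illustrative rather than the authors' exact network: the paper does not enumerate the added layers, so the depth and filter counts are assumptions; the skip connections and the 1 × 1 softmax output layer follow the description above.

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 convolutions, as in standard U-Net encoder/decoder stages
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(32, 32, 3), num_classes=2):
    inp = layers.Input(shape=input_shape)
    c1 = conv_block(inp, 32)                   # encoder path: features up, resolution down
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)
    b = conv_block(p2, 128)                    # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = conv_block(layers.Concatenate()([u2, c2]), 64)   # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(u2)
    u1 = conv_block(layers.Concatenate()([u1, c1]), 32)   # skip connection
    # Output layer: 1x1 convolution with pixel-wise softmax, cf. Equation (1)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(u1)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_unet()
# model.fit(train_patches, train_masks, epochs=200, batch_size=32)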
The models used in our work were trained over 200 epochs using a TensorFlow framework in an environment with a GPU (Nvidia 1080-ti), 16 GB of RAM, and an Intel i7 CPU. The figures illustrate the performance of the network on the training datasets. Figure 7 shows the plots of loss and accuracy over the training epochs of U-Net.

Figure 7. Performance of the U-Net network on training datasets: (a) plot of loss, (b) plot of accuracy over the training epoch.

4. Results and Discussion

4.1. Dataset
The data used in this work were from the breast cancer histopathology image dataset (BNS), which was introduced in [49]. The annotated dataset provides images grouped by patient. Each patient has histopathology data annotated with their ground truth. The images in this dataset are 512 × 512 pixels with label data; see Figure 8. This study analyzed publicly available datasets. These data can be found here: https://github.com/wafaadrioua/Hystopathological (accessed on 17 August 2023).

Figure 8. Example of datasets, (a) the original data, (b) the ground truth.

Experimental Tests
The results for each approach are given in Table 1, which contains the results of the two segmentation methods as well as the results of the FCN on the same data; the FCN was used as a reference baseline in order to assess the relevance of the improved U-Net network on one side and of the data encoding (CAE) on the other side.

Table 1. Metrics results.

Models                   Image    Accuracy   Recall   Precision   F1 Score   IoU
U-Net proposed           Image1   0.986      0.911    0.896       0.909      0.861
                         Image2   0.891      0.870    0.811       0.910      0.857
                         Image3   0.809      0.899    0.840       0.902      0.834
Unsupervised approach    Image1   0.857      0.902    0.955       0.893      0.807
                         Image2   0.860      0.875    0.866       0.907      0.848
                         Image3   0.822      0.823    0.853       0.834      0.810
FCN                      Image1   0.805      0.891    0.855       0.823      0.723
                         Image2   0.855      0.838    0.806       0.893      0.803
                         Image3   0.819      0.861    0.825       0.823      0.792
The FCN (fully convolutional network) was proposed by [50]; it adopts a pre-trained CNN for image classification as the encoder module of the network. The fully connected layers are converted to convolutional layers by reusing their weights and biases. A decoder module with transposed convolutional layers was added to upscale feature maps to obtain full-resolution segmentation maps. Here, AlexNet is the basic network of the FCN model.
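As a rough sketch of this baseline (an AlexNet-like encoder is assumed but heavily simplified; the layer sizes are illustrative, not those of the original FCN):

import tensorflow as tf
from tensorflow.keras import layers, models

def build_fcn(input_shape=(512, 512, 3), num_classes=2):
    inp = layers.Input(shape=input_shape)
    # Encoder: convolution/pooling stages of a pre-trained classifier (downsampling x8)
    x = layers.Conv2D(64, 7, strides=2, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # Former fully connected layers re-expressed as 1x1 convolutions
    x = layers.Conv2D(256, 1, activation="relu")(x)
    x = layers.Conv2D(num_classes, 1)(x)
    # Decoder: transposed convolutions upscale back to full resolution (x8 in total)
    x = layers.Conv2DTranspose(num_classes, 4, strides=2, padding="same")(x)
    x = layers.Conv2DTranspose(num_classes, 4, strides=2, padding="same")(x)
    x = layers.Conv2DTranspose(num_classes, 4, strides=2, padding="same")(x)
    out = layers.Activation("softmax")(x)
    return models.Model(inp, out)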
First of all, we note that the FCN applied to the data obtained results of lower quality than the two proposed approaches, which confirms that the addition of layers in the U-Net network, as well as the clustering from the encoded data, has a positive effect on segmentation.

The main objective of this study was to design new models within the framework of histopathological image segmentation based on three CNN architectures.

Figure 9 shows three visual examples of breast cancer histological images. In Figure 9, row (a) presents original images from the dataset, row (b) presents the ground truth segmentations, row (c) presents the results obtained for proposed approach 1, row (d) presents the results obtained for proposed approach 2, and row (e) presents the obtained FCN results. Rows (c) and (d) compare tumor and non-tumor cells using the two techniques, where yellow indicates automatically segmented tumor cell regions and blue indicates the background.

Figure 9. The obtained results. (a) The original data, (b) the ground truth, (c) the U-Net results, (d) the deep clustering results, (e) the FCN results.

For the comparison experiments, different metrics were used, such as precision, recall, accuracy, F1 score and IoU (intersection over union) [51], to evaluate the proposed models.

Precision = TP / (TP + FP)   (2)

Recall = TP / (TP + FN)   (3)

F1 = 2 · (Recall · Precision) / (Recall + Precision)   (4)

Accuracy = (TP + TN) / (TP + FP + TN + FN)   (5)

IoU = TP / (TP + FP + FN)   (6)

where TP, FP, FN, and TN are the true positives, false positives, false negatives, and true negatives, respectively.
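These definitions translate directly into code; a minimal sketch, assuming binary NumPy masks with 1 = tumor and ignoring zero-division corner cases, is:

import numpy as np

def segmentation_metrics(pred, truth):
    # Confusion counts between a predicted mask and its ground truth
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    precision = tp / (tp + fp)                                # Equation (2)
    recall = tp / (tp + fn)                                   # Equation (3)
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * recall * precision / (recall + precision),  # Equation (4)
        "accuracy": (tp + tn) / (tp + fp + tn + fn),          # Equation (5)
        "iou": tp / (tp + fp + fn),                           # Equation (6)
    }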
As can be observed in Table 1, the proposed U-Net model trained with manual annotations and the unsupervised segmentation provided comparable performance on the dataset, with IoU values of 86.1% and 84.8%, respectively.

From these evaluations, it can be concluded that the proposed unsupervised method can provide a cell segmentation performance comparable to that of the modified U-Net model. Furthermore, as collecting manual histological annotations is time-consuming and expensive, unsupervised methods can potentially be used to create histological annotations in order to train supervised segmentation methods.
5. Comparison Study with the Existing Works


We conducted comparative studies with other works in the literature. We selected
some works from the deep learning-based literature (as seen in Table 2). The experimental
results show the efficiency of our proposed architecture compared to other works.

Table 2. Metrics results.

Approaches               Accuracy   Recall   Precision   F1 Score   IoU
PangNet [49]             0.924      0.665    0.814       0.676      0.722
DeconvNet [49]           0.954      0.773    0.864       0.805      0.814
Ensemble [49]            0.944      0.900    0.741       0.802      0.804
DCNN/U-Net [52]          0.94       0.60     0.90        0.70       0.55
NucSeg-N [53]            N/A        0.910    0.910       0.909      N/A
NucSeg-P [53]            N/A        0.886    0.893       0.887      N/A
NucSeg-NP [53]           N/A        0.889    0.912       0.899      N/A
Unsupervised approach    0.860      0.875    0.866       0.907      0.848
U-Net proposed           0.986      0.911    0.896       0.909      0.861
FCN                      0.819      0.861    0.825       0.823      0.792

Our method outperformed the current state-of-the-art methods using the dataset
(described in the dataset section) in terms of completeness and segmentation accuracy for
single-nuclei segmentation, especially when segmenting overlapping regions of the nuclei.
We compared our method with several deep learning-based methods listed in Table 1, such
as the modified U-Net and unsupervised model.

6. Conclusions
This paper presents a novel unsupervised technique for histological image segmen-
tation. The proposed unsupervised approach segments nuclei into two clusters based on
an autoencoder model. Both autoencoders and U-Net have shown promising results in
histopathology image segmentation. Autoencoders have the advantage of unsupervised
learning, allowing them to leverage unlabeled data for representation learning. On the
other hand, the improved U-Net is a supervised approach that requires labeled training
data but offers a more specialized architecture for segmentation tasks.
The experimental results evaluated on histopathology image sets with different color staining methods show that the unsupervised method can effectively segment the tested histological images into tumor or non-tumor cells. In particular, it provides a segmentation performance comparable to that of the proposed U-Net model in terms of nuclei segmentation. Unsupervised methods form a general image segmentation framework that can be extended to solve various image segmentation problems. Furthermore, due to their unsupervised nature, when histological annotations are difficult to collect, unsupervised methods can be used to generate image annotations to train supervised segmentation models such as U-Net. The results of our work show that our models are more robust in comparison with the methods in the literature. In the future, we will explore more advanced methods with state-of-the-art segmentation techniques.

Author Contributions: Conceptualization, W.R.D.; methodology, W.R.D. and N.B.; software, W.R.D.;
validation, W.R.D., N.B. and L.S.; formal analysis, W.R.D., N.B. and L.S.; investigation, W.R.D., N.B.
and L.S.; resources, W.R.D., N.B. and L.S.; data curation, W.R.D.; writing—original draft preparation,
W.R.D.; writing—review and editing, W.R.D., N.B. and L.S.; visualization, W.R.D.; supervision, N.B.
and L.S.; project administration, N.B. and L.S. All authors have read and agreed to the published
version of the manuscript.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.


Informed Consent Statement: Not applicable.
Data Availability Statement: Dataset is available online at GitHub: https://github.com/wafaadrioua/
Hystopathological (accessed on 17 August 2023).
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Lagree, A. A review and comparison of breast tumor cell nuclei segmentation performances using deep convolutional neural
networks. Sci. Rep. 2021, 11, 8025. [CrossRef] [PubMed]
2. Boutaouche, F.; Benamrane, N. Diagnosis of breast lesions using the local Chan-Vese model, hierarchical fuzzy partitioning and
fuzzy decision tree induction. Iran. J. Fuzzy Syst. 2017, 14, 15–40.
3. Belgrana, F.Z.; Benamrane, N. Mammographic images interpretation using Neural-Evolutionary approach. Int. J. Comput. Sci.
2012, 9, 1.
4. Huang, R. Boundary-rendering Network for Breast Lesion Segmentation in Ultrasound Images. Med. Image Anal. 2022, 80, 102478.
[CrossRef]
5. Ren, T.; Lin, S.; Huang, P.; Duong, T.Q. Convolutional neural network of multiparametric MRI accurately detects axillary lymph
node metastasis in breast cancer patients with PR neoadjuvant chemotherapy. Clin. Breast Cancer 2022, 22, 170–177. [CrossRef]
6. Evain, E. Breast nodule classification with two-dimensional ultrasound using Mask-RCNN ensemble aggregation. Diagn. Interv.
Imaging 2021, 102, 653–658. [CrossRef]
7. Touami, R.; Benamrane, N. Microcalcification Detection in Mammograms Using Particle Swarm Optimization and Probabilistic
Neural Network. Comput. Sist. 2021, 25, 369–379. [CrossRef]
8. Tun, S.M.; Alluri, S.; Rastegar, V.; Visintainer, P.; Mertens, W.; Makari-Judson, G. Mode of Detection of Second Events in Routine
Surveillance of Early Stage Breast Cancer Patients. Clin. Breast Cancer 2022, 22, e818–e824. [CrossRef]
9. Gurcan, M.N.; Boucheron, L.E.; Can, A.; Madabhushi, A.; Rajpoot, N.M.; Yener, B. Histopathological image analysis: A review.
IEEE Rev. Biomed. Eng. 2009, 2, 147–171. [CrossRef]
10. Mahmood, F. Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE Trans. Med. Imaging
2019, 39, 3257–3267. [CrossRef]
11. Sohail, A.; Mukhtar, M.A.; Khan, A.; Zafar, M.M.; Zameer, A.; Khan, S. Deep object detection-based mitosis analysis in breast
cancer histopathological images. arXiv 2020, arXiv:2003.08803.
12. Drioua, W.R.; Benamrane, N.; Sais, L. Breast Cancer Detection from Histopathology Images Based on YOLOv5. In Proceedings of
the 2022 7th International Conference on Frontiers of Signal Processing, Paris, France, 7–9 September 2022; pp. 30–34.
13. Faridi, P.; Danyali, H.; Helfroush, M.S.; Jahromi, M.A. An automatic system for cell nuclei pleomorphism segmentation in
histopathological images of breast cancer. In Proceedings of the 2016 IEEE Signal Processing in Medicine and Biology Symposium
(SPMB), Philadelphia, PA, USA, 3 December 2016; pp. 1–5.
14. Qu, A. Two-step segmentation of Hematoxylin-Eosin stained histopathological images for prognosis of breast cancer. In
Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Belfast, UK, 2–5 November 2014;
pp. 218–223.
15. Rashmi, R.; Prasad, K.; Udupa, C.B.K.; Shwetha, V. A comparative evaluation of texture features for semantic segmentation of
breast histopathological images. IEEE Access 2020, 8, 64331–64346. [CrossRef]
16. Abdolhoseini, M.; Kluge, M.G.; Walker, F.R.; Johnson, S.J. Segmentation of heavily clustered nuclei from histopathological images.
Sci. Rep. 2019, 9, 4551. [CrossRef] [PubMed]
17. Faridi, P.; Danyali, H.; Helfroush, M.S.; Jahromi, M.A. Cancerous nuclei detection and scoring in breast cancer histopathological
images. arXiv 2016, arXiv:1612.01237.
18. Jian, T.X.; Mustafa, N.; Mashor, M.Y.; Ab Rahman, K.S. Hyperchromatic nucleus segmentation on breast histopathological images
for mitosis detection. J. Telecommun. Electron. Comput. Eng. 2018, 10, 27–30.
19. Acs, B.; Rantalainen, M.; Hartman, J. Artificial intelligence as the next step towards precision pathology. J. Intern. Med. 2020, 288,
62–81. [CrossRef]
20. Sahasrabudhe, M. Self-supervised nuclei segmentation in histopathological images using attention. In Proceedings of the
International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020;
pp. 393–402.
21. Kate, V.; Shukla, P. Image segmentation of breast cancer histopathology images using PSO-based clustering technique. In Social
Networking and Computational Intelligence; Springer: Singapore, 2020; pp. 207–216.
22. Shu, J.; Fu, H.; Qiu, G.; Kaye, P.; Ilyas, M. Segmenting overlapping cell nuclei in digital histopathology images. In Proceedings
of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan,
3–7 July 2013; pp. 5445–5448.
23. Xu, H.; Liu, L.; Lei, X.; Mandal, M.; Lu, C. An unsupervised method for histological image segmentation based on tissue cluster
level graph cut. Comput. Med. Imaging Graph. 2021, 93, 101974. [CrossRef]

24. Khan, A.M.; El-Daly, H.; Simmons, E.; Rajpoot, N.M. HyMaP: A hybrid magnitude-phase approach to unsupervised segmentation
of tumor areas in breast cancer histology images. J. Pathol. Inform. 2013, 4, 1. [CrossRef]
25. Fouad, S.; Randell, D.; Galton, A.; Mehanna, H.; Landini, G. Unsupervised morphological segmentation of tissue compartments
in histopathological images. PLoS ONE 2017, 12, e0188717. [CrossRef]
26. Chan, L.; Hosseini, M.S.; Rowsell, C.; Plataniotis, K.N.; Damaskinos, S. Histosegnet: Semantic segmentation of histological tissue
type in whole slide images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Thessaloniki, Greece,
23–25 September 2019; pp. 10662–10671.
27. Cui, Y.; Zhang, G.; Liu, Z.; Xiong, Z.; Hu, J. A deep learning algorithm for one-step contour aware nuclei segmentation of
histopathology images. Med. Biol. Eng. Comput. 2019, 57, 2027–2043. [CrossRef]
28. Paramanandam, M. Automated segmentation of nuclei in breast cancer histopathology images. PLoS ONE 2016, 11, 162053.
[CrossRef] [PubMed]
29. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Segmentation of nuclei in histopathology images by deep regression of the distance map.
IEEE Trans. Med. Imaging 2018, 38, 448–459. [CrossRef] [PubMed]
30. Veta, M. Assessment of algorithms for mitosis detection in breast cancer histopathology images. Med. Image Anal. 2015, 20,
237–248. [CrossRef] [PubMed]
31. Zemouri, R. Breast cancer diagnosis based on joint variable selection and constructive deep neural network. In Proceedings of the
IEEE 4th Middle East Conference on Biomedical Engineering, Tunis, Tunisia, 28–30 March 2018; pp. 159–164.
32. Jafarbiglo, S.K.; Danyali, H.; Helfroush, M.S. Nuclear atypia grading in histopathological images of breast cancer using convo-
lutional neural networks. In Proceedings of the 4th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS),
Tehran, Iran, 25–27 December 2018; pp. 89–93.
33. Kang, Q.; Lao, Q.; Fevens, T. Nuclei segmentation in histopathological images using two-stage learning. In Proceedings of the
International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October
2019; pp. 703–711.
34. Kaushal, C.; Bhat, S.; Koundal, D.; Singla, A. Recent trends in computer assisted diagnosis (CAD) system for breast cancer
diagnosis using histopathological images. Irbm 2019, 40, 211–227. [CrossRef]
35. Wahab, N.; Khan, A.; Lee, Y.S. Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer
histopathological images. Microscopy 2019, 68, 216–233. [CrossRef]
36. Qu, H. Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Trans. Med.
Imaging 2020, 39, 3655–3666. [CrossRef]
37. Xu, J.; Luo, X.; Wang, G.; Gilmore, H.; Madabhushi, A. A deep convolutional neural network for segmenting and classifying
epithelial and stromal regions in histopathological images. Neurocomputing 2016, 191, 214–223. [CrossRef]
38. Sohail, A.; Khan, A.; Wahab, N.; Zameer, A.; Khan, S. A multi-phase deep CNN based mitosis detection framework for breast
cancer histopathological images. Sci. Rep. 2021, 11, 6215. [CrossRef]
39. Cao, J.; Qin, Z.; Jing, J.; Chen, J.; Wan, T. An automatic breast cancer grading method in histopathological images based on pixel-,
object-, and semantic-level features. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging (ISBI),
Prague, Czech Republic, 13–16 April 2016; pp. 1151–1154.
40. Öztürk, Ş.; Akdemir, B. Cell-type based semantic segmentation of histopathological images using deep convolutional neural
networks. Int. J. Imaging Syst. Technol. 2019, 29, 237–246. [CrossRef]
41. Kaushal, C.; Koundal, D.; Singla, A. Comparative analysis of segmentation techniques using histopathological images of breast
cancer. In Proceedings of the 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC),
Erode, India, 27–29 March 2019; pp. 261–266.
42. Gour, M.; Jain, S.R. Deeprnnetseg: Deep residual neural network for nuclei segmentation on breast cancer histopathological
images. In Proceedings of the International Conference on Computer Vision and Image Processing, Jaipur, India, 27–29 September
2019; pp. 243–253.
43. Xu, J. Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans. Med. Imaging
2015, 35, 119–130. [CrossRef]
44. Raza, K.; Singh, N.K. A tour of unsupervised deep learning for medical image analysis. Curr. Med. Imaging 2021, 17, 1059–1077.
[PubMed]
45. Janowczyk, A.; Basavanhally, A.; Madabhushi, A. Stain normalization using sparse autoencoders (StaNoSA): Application to
digital pathology. Comput. Med. Imaging Graph. 2017, 57, 50–61. [CrossRef] [PubMed]
46. Hou, L. Sparse autoencoder for unsupervised nucleus detection and representation in histopathology images. Pattern Recognit.
2019, 86, 188–200. [CrossRef] [PubMed]
47. Roboflow 2021. Available online: https://roboflow.com/ (accessed on 14 November 2022).
48. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings
of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany,
5–9 October 2015; pp. 234–241.
49. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Nuclei segmentation in histopathology images using deep neural networks. In Proceedings
of the IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 933–936.

50. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
51. Sun, J.; Peng, Y.; Guo, Y.; Li, D. Segmentation of the multimodal brain tumor image used the multi-pathway architecture method
based on 3D FCN. Neurocomputing 2021, 423, 34–45. [CrossRef]
52. Mercadier, D.S.; Besbinar, B.; Frossard, P.; Mercadier, D.S.; Besbinar, B.; Frossard, P. Automatic segmentation of nuclei in
histopathology images using encoding-decoding convolutional neural networks. In Proceedings of the ICASSP 2019—2019 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1020–1024.
53. Jung, H.; Lodhi, B.; Kang, J. An automatic nuclei segmentation method based on deep convolutional neural networks for
histopathology images. BMC Biomed. Eng. 2019, 1, 24. [CrossRef] [PubMed]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
