
IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, VOL. 18, NO. 1, JANUARY/FEBRUARY 2021

ResNet-SCDA-50 for Breast Abnormality Classification

Xiang Yu, Cheng Kang, David S. Guttery, Seifedine Kadry, Yang Chen, and Yu-Dong Zhang

Abstract—(Aim) Breast cancer is the most common cancer in women and the second most common cancer worldwide. With the rapid advancement of deep learning, the early stages of breast cancer development can be accurately detected by radiologists with the help of artificial intelligence systems. (Method) Based on mammographic imaging, a mainstream clinical breast screening technique, we present a diagnostic system for accurate classification of breast abnormalities based on ResNet-50. To improve the proposed model, we created a new data augmentation framework called SCDA (Scaling and Contrast limited adaptive histogram equalization Data Augmentation). In this procedure, we first apply a scaling operation to the original training set, and then apply contrast limited adaptive histogram equalization (CLAHE) to the scaled training set. By stacking the training set after SCDA with the original training set, we form a new training set. The network trained on the augmented training set is coined ResNet-SCDA-50. Our system, which aims at binary classification of mammographic images acquired from the INbreast and MINI-MIAS datasets, classifies masses and microcalcifications as "abnormal", while normal regions are classified as "normal". (Results) We present the first attempt to use an image contrast enhancement method as a data augmentation method, resulting in an average 98.55 percent specificity and 92.83 percent sensitivity, which gives our best model an overall accuracy of 95.74 percent. (Conclusion) Our proposed method is effective in classifying breast abnormalities.

Index Terms—Breast cancer, ResNet-50, contrast limited adaptive histogram equalization, classification

1 INTRODUCTION

As one of the most aggressive cancers worldwide, breast cancer caused more than 2 million new cases in 2018 [1], [2]. Specifically, the incidence of breast cancer in the UK had risen to 46,109 in 2017, accounting for 15.1 percent of all cancer cases, and it was the most common cancer diagnosed in that year [3]. It is widely recognized that prevention and early diagnosis significantly reduce cancer mortality. While primary prevention strategies for breast cancer are still under exploration, early detection considerably improves prognosis. For early detection of breast cancer, mammography is one of the most commonly used screening techniques, reported to be responsible for a 20-40 percent reduction in fatalities [4]. Furthermore, it can provide radiologists with an image upon which they can make diagnostic decisions accordingly. However, the time-consuming interpretation and complexity of mammograms can result in a high false-positive rate and, more importantly, misdiagnosis (i.e., false negatives). Therefore, efficient and accurate computerised auxiliary diagnostic systems are becoming increasingly important both for radiologists and for clinical practice.

Deep learning has come to the fore as a method for contemporary computer-aided diagnosis (CAD) systems, as the performance of conventional CAD systems remains far from satisfactory. Unlike traditional breast cancer CAD systems, which rely heavily on manually designed components for specific purposes and are hindered by a lack of generality, modern CAD systems incorporating deep learning have improved in both accuracy and efficiency [5], [6], [7], [8]. Compared with multiple-step traditional CAD systems, CAD systems based on deep CNNs generally consist solely of candidate detection and classification components.

To achieve an end-to-end lesion detection system, Lotter et al. proposed training deep CNNs with patch-based lesion regions from the DDSM dataset [9]. Subsequently, candidate areas, determined by a scanning model, are fed to the pre-trained classifier [10]. To minimise user intervention while using the detection system, an automated CAD system that integrates detection, segmentation, and classification of breast masses was proposed in [6]. In that study, cascaded deep learning methods for detection were applied to select hypotheses, which were subsequently refined by Bayesian optimization. Furthermore, the segmentation breaks down into two steps: gross segmentation is obtained by deep structured output learning and then improved by a level set method [11], [12]. Classification is realized by a deep learning classifier pre-trained with hand-crafted features and

• X. Yu and C. Kang are with the Department of Informatics, University of Leicester, Leicester LE1 7RH, U.K. E-mail: {xy144, ck254}@le.ac.uk.
• D. S. Guttery is with the Leicester Cancer Research Centre, University of Leicester, Leicester LE2 7LX, U.K. E-mail: [email protected].
• S. Kadry is with the Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Beirut, Lebanon. E-mail: [email protected].
• Y. Chen is with the Laboratory of Image Science and Technology, Southeast University, Nanjing 210096, China, and also with the School of Computer Science and Engineering, the School of Cyber Science and Engineering, and the Key Laboratory of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing 210096, China. E-mail: [email protected].
• Y.-D. Zhang is with the Department of Informatics, University of Leicester, Leicester LE1 7RH, U.K., and also with the Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia. E-mail: [email protected].

Manuscript received 18 Oct. 2019; revised 28 Dec. 2019; accepted 9 Feb. 2020. Date of publication 13 Apr. 2020; date of current version 3 Feb. 2021. (Corresponding author: Y.-D. Zhang.)
Digital Object Identifier no. 10.1109/TCBB.2020.2986544

1545-5963 © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.


fine-tuned on annotated breast mass classification datasets. The approach was tested on the CBIS-DDSM dataset and achieved a true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per mammogram on testing data. In another study, 45,000 mammogram images from a private database were used to compare the performance of traditional CAD systems with CNN-based approaches [13]. Usually, supervised deep CNN models have to be trained with a large cohort of images before overfitting is eliminated and high-level performance is achieved. However, large numbers of annotated images are sometimes inaccessible, primarily due to the potentially high expenditure on collection and maintenance. Transfer learning, therefore, is widely employed in various studies because it allows the knowledge learned from one domain to be transferred to another. Depending on the specific scenario in which transfer learning is utilised, pre-trained deep CNN models can be repurposed for detection and classification tasks by fine-tuning, or can purely be used as feature extractors. As justified by Tajbakhsh et al., pre-trained deep CNNs, after sufficient fine-tuning, performed no worse than deep CNNs trained from scratch. Also, pre-trained deep CNNs showed higher robustness to the size of the dataset used for fine-tuning [14]. However, choosing between deep tuning and shallow tuning is difficult, as no standard criteria exist to determine which is superior; it depends on the specific scenario. Furthermore, Azizpour et al. pointed out that high similarity between the databases used for pre-training and the targeted databases gives rise to the success of the transfer learning approach. For transfer learning in the field of breast cancer CAD systems, there are numerous works that have led to successful conclusions. A seven-layer CNN model, with four convolutional layers and three fully connected layers, achieved an AUC of 0.81 on a digital breast tomosynthesis (DBT) training set after being trained on regions of interest (ROIs) extracted from a DBT dataset. However, the AUC increased to 0.90 after transfer learning with DBT [15]. In another study, the capability of different CNNs for tumour feature extraction when used as feature extractors was compared. The performance of support vector machines trained on CNN-extracted features and computer-extracted features showed that transfer learning improves the performance of computer-aided-diagnosis (CADx) systems [16].

In this paper, we present a new breast abnormality diagnostic system that achieved high accuracy on the MINI-MIAS and INbreast datasets [17], [18]. In these two datasets, abnormalities in mammograms (mainly microcalcifications and masses) are annotated by experienced radiologists and extracted according to location information. The backbone of our model is ResNet-SCDA-50, which is based on ResNet-50 trained on augmented data given by our SCDA, which applies a scaling operation and CLAHE to form the augmented training dataset. CLAHE was performed for two reasons: 1) CLAHE is widely used for medical image enhancement [19], [20], and the contrast of breast images is greatly enhanced after applying CLAHE, so higher quality images can be obtained for the subsequent components in our system; and 2) it can be repurposed as a data augmenter by stacking the enhanced images with the original images. Scaling is involved because we found that our CNN models trained on a dataset scaled and enhanced by CLAHE showed better performance. The classifiers designed in this work were based on a transfer learning approach that utilizes the CNN models as feature extractors with a new classifier layer concatenated. We show here that ResNet-50 is superior as a feature extractor for classification. Therefore, after cascading all of the optimized blocks, we found that our ResNet-SCDA-50 achieved 95.71 percent accuracy, which outperformed the state-of-the-art by a big margin.

2 BACKGROUND

The development of image processing techniques can be divided into the pre-deep-learning era and the deep learning era. In the pre-deep-learning era, image processing tasks like segmentation, clustering and enhancement were solved by constructing specific models [21], [22], [23], later termed classical methods. Initially designed for large scale image recognition [24], [25], [26], [27], [28], [29], [30], deep CNNs have been widely utilized in a range of fields including natural language processing [31], [32], speech recognition [33], segmentation [34], [35], [36] and object localization [37], [38], thanks to the rapid development of firmware and software. Also, recent years have witnessed great advancement of deep CNNs, including in depth, in connections between layers, and in even more advanced training methods [27], [28], [29], [39], [40], [41].

It is difficult to tell which factor contributes most to the success of the advanced deep CNNs, but it is certain that the connections between layers are too important to be neglected. In GoogLeNet [27], a novel architecture termed inception, which increases the depth and width of the network but keeps the computational cost low at the same time, was developed. InceptionV3 [39], an advanced version of GoogLeNet, improves the inception model by replacing big filter kernels with smaller ones and introducing new connections. Inspired by the Inception block, the Xception architecture [40], which consists of a depthwise convolution followed by a pointwise convolution, was proposed. Compared to canonical convolution, depthwise plus pointwise convolution is more parameter efficient. Given feature maps of size M × N with W channels, kernels of size D_F × D_F, and feature maps after convolution of size M' × N' with W' channels, the total number of parameters of canonical convolution is D_F × D_F × W × W', with the bias term neglected. For depthwise convolution, where the group of filters is 1, the number of parameters is D_F × D_F × W; for pointwise convolution, where the kernel size is 1 × 1, the number of parameters is W × W', which leads to a total of (D_F × D_F + W') × W parameters. Therefore the ratio of parameters between depthwise plus pointwise convolution and canonical convolution is:

R_D = ((D_F × D_F + W') × W) / (D_F × D_F × W × W') = 1/W' + 1/D_F².  (1)

As can be seen in Eq. (1), the reduction rate 1 − R_D can be meaningful given proper W' and D_F. To solve the problem of training deep CNNs, residual learning, a new short-cut connection method, was presented in [28]. In residual learning, given the identity X and the mapping F(X), suppose that the underlying mapping of features H(X) after a sequence of stacked layers is related to F(X) and X by:


F(X) = H(X) − X.  (2)

Then H(X) can be reformulated as:

H(X) = F(X) + X.  (3)

It is assumed that optimizing the original H(X) is easier if it can be optimized through the identity mapping F(X) + X. The identity mapping is realized in the residual block by shortcut connections which skip one or more layers, as presented in Fig. 1. ReLU is the Rectified Linear Unit [42].

Fig. 1. Residual block.

DenseNet [29] is another state-of-the-art network, which allows subsequent layers access to the features extracted by previous layers by concatenating those feature-maps. Suppose that the l-th layer takes all of the feature-maps x_0, ..., x_{l−1} produced in previous layers as input; then the feature-maps x_l in the l-th layer can be obtained by:

x_l = H_l([x_0, ..., x_{l−1}]),  (4)

where [x_0, ..., x_{l−1}] denotes the concatenation of the feature-maps from previous layers. H_l(·) is a composite function comprised of three sequenced operations: batch normalization (BN), rectified linear unit (ReLU), and convolution with kernel size 3 by 3. An illustration of the dense connection used in DenseNet is given in Fig. 2.

Fig. 2. Dense connection block.

In Fig. 2, W_i and H_i are the width and height of the feature maps, and M denotes the number of channels. k is the growth rate that controls the number of new feature maps produced in each layer. In summary, the performance of deep CNNs can be improved by solely stacking more convolutional layers up to a certain depth; architectural innovations such as shortcut connections, however, remain the most desirable route to advancing deep CNNs. Also, the performance of state-of-the-art CNNs is closely related to the practical application scenario. We therefore explored the most appropriate CNN structure by transferring different CNNs to our classification task.

3 METHODOLOGY

3.1 Dataset
This section provides a brief introduction to the MINI-MIAS and INbreast datasets. MINI-MIAS is a widely used digitalized mammogram dataset that contains 322 grayscale images from 161 patients, with both left and right breasts. Each image is 1024 pixels × 1024 pixels. According to the categories given, there are mainly four types of abnormalities presented: microcalcification, mass (well defined, spiculated, ill-defined), architectural distortion, and asymmetry. According to the severity of abnormality, all 322 images can be divided into three categories: normal (without any lesions found), benign, and malignant. The corresponding numbers of images for the three categories are 207, 64, and 51, respectively. As the location information of the lesions in each mammogram image is provided, we selected the patches of interest from the original mammogram images directly. For mammogram images without lesions, we chose patches following the criterion that the breast covers more than 80 percent of the area of each patch. Also, for mammogram images containing two or more lesions, each lesion is centred and cropped out of the image. By doing so, 330 patches were acquired, of which 207, 69, and 54 are classified as normal, benign, and malignant, respectively. The details of the dataset are presented in Table 1. Similarly, we acquired 2292 abnormal patches and 12171 normal patches from the INbreast dataset. To mitigate the imbalance between normal samples and abnormal samples, we randomly selected 2292 patches from the 12171 normal samples. The information about MINI-MIAS is presented in Table 1, while the details of the merged dataset are shown in Table 2.

3.2 Contrast Limited Adaptive Histogram Equalization
In medical images, one of the challenges for automated systems is low contrast, which leads to failures of algorithms in processing or analysis. To address the issue of low contrast, digitized image contrast enhancement methods have been proposed.
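As an aside, the two building blocks reviewed in Section 2 can be checked with a short Python sketch: the separable-convolution parameter count of Eq. (1) and the residual mapping of Eqs. (2)-(3). The sizes and the one-layer stand-in for F(X) are illustrative assumptions, not values or layers from the paper.

```python
import numpy as np

# 1) Parameter counts behind Eq. (1): canonical vs. depthwise + pointwise
#    convolution (bias ignored). Channel/kernel sizes are illustrative.
def canonical_params(d_f, w, w_out):
    return d_f * d_f * w * w_out              # DF x DF x W x W'

def separable_params(d_f, w, w_out):
    return d_f * d_f * w + w * w_out          # depthwise + pointwise

d_f, w, w_out = 3, 64, 128
r_d = separable_params(d_f, w, w_out) / canonical_params(d_f, w, w_out)
# Ratio matches the closed form 1/W' + 1/DF^2 of Eq. (1)
assert abs(r_d - (1 / w_out + 1 / d_f ** 2)) < 1e-12

# 2) Residual mapping of Eqs. (2)-(3): H(X) = F(X) + X, with a single
#    linear layer + ReLU standing in for the stacked layers F.
rng = np.random.default_rng(0)
W_mat = rng.standard_normal((4, 4)) * 0.1

def residual_block(x):
    f_x = np.maximum(W_mat @ x, 0.0)   # F(X)
    return f_x + x                     # shortcut adds the identity X

x = rng.standard_normal(4)
h = residual_block(x)
# With F(X) = 0 (e.g. zero weights), the block reduces to the identity.
assert np.allclose(np.maximum(np.zeros((4, 4)) @ x, 0.0) + x, x)
```

The assertion on the ratio illustrates why depthwise separable convolutions are cheaper: for a 3 × 3 kernel the parameter count drops to roughly 1/9 plus 1/W' of the canonical cost.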


TABLE 1
Details of MINI-MIAS

Categories Images Patches


Normality 207 207
Benign 64 69
Malignant 51 54
Overall 322 330
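The class-balancing step described in Section 3.1 (randomly selecting 2292 of the 12171 normal INbreast patches to match the 2292 abnormal patches) can be sketched as follows; the integer IDs are a stand-in for the real image patches, and the seed is arbitrary.

```python
import numpy as np

# Random under-sampling of the majority (normal) class, per Section 3.1.
# Counts come from the paper; the patch "IDs" are placeholders.
rng = np.random.default_rng(42)
normal_ids = np.arange(12171)      # 12171 normal INbreast patches
abnormal_count = 2292              # 2292 abnormal INbreast patches

kept_normal = rng.choice(normal_ids, size=abnormal_count, replace=False)
assert len(kept_normal) == abnormal_count          # balanced classes
assert len(set(kept_normal.tolist())) == abnormal_count  # no duplicates
```

Sampling without replacement keeps each retained normal patch unique, so the merged set contains 2292 normal and 2292 abnormal INbreast patches, matching Table 2.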

Ordinary histogram equalization is performed in the following way. Given a discrete grayscale image X, let N_i be the frequency with which grayscale i appears in the image, which has N pixels in total; then the probability that a pixel x belongs to grayscale i, in the range 0 to L, is:

p_x(i) = p(x = i) = N_i / N  (0 ≤ i ≤ L),  (5)

where L is the highest intensity over the image X. Therefore, p_x(i) is the normalized probability, ranging from 0 to 1, with respect to grayscale i. Hence, the cumulative distribution function (CDF) can be expressed as:

fcd_x(i) = Σ_{j=0}^{i} p_x(j).  (6)

Converting the CDF into an equalized one maps fcd_x(i) into a new function by multiplying fcd_x(i) by the desired constant pixel value, which is generally set to 255. However, for regions that are significantly lighter or darker than most regions in the image, sufficient contrast enhancement is unlikely to be obtained. To improve the details in local regions of the image, adaptive histogram equalization is the alternative method. Unlike histogram equalization, which transforms pixels according to the image histogram globally, adaptive histogram equalization transforms each pixel by using a transformation function derived from a local region. However, slow computational speed and a tendency to overamplify noise in relatively homogeneous areas are its two main shortcomings. As a variant of adaptive histogram equalization, CLAHE has been shown to overcome these problems efficiently [19].

The enhancement procedure of CLAHE can be divided into two phases: histogram equalization with a contrast limit applied, and a bilinear interpolation algorithm. Histogram equalization is performed first to provide an adjusted histogram, while the bilinear interpolation algorithm reduces the computational cost of forming the new image. Given an image I divided into M × N subregions, the process of histogram equalization can be divided into the following steps:

f_x^n(i) = Σ_{j=0}^{i} p_x^n(j)  (0 ≤ i ≤ 255, 1 ≤ n ≤ M × N),  (7)

where p_x^n has the same form as Eq. (5) but is the probability of pixel x in subregion n. Initially, the summed pixel intensity above the pre-determined clip limit CL is acquired and normalized, denoted T_n; it is later evenly distributed over all intensity levels by the average increase AI_n. Therefore AI_n can simply be denoted as:

AI_n = T_n / 256.  (8)

Then the adjusted histogram can be expressed in the following way:

p_x^n(i)' = CL                 if p_x^n(i) ≥ CL − AI_n
p_x^n(i)' = p_x^n(i) + AI_n    if p_x^n(i) < CL − AI_n.  (9)

After the adjustment of the image histogram, blockwise linear interpolation is applied to reduce the computational cost. As shown in Fig. 3, for a pixel A' in subregion i, the value is determined by the four pixels in the neighbouring subregions and can be calculated by:

P_A' = x_0 y_0 P_A + (1 − x_0) y_0 P_B + x_0 (1 − y_0) P_C + (1 − x_0)(1 − y_0) P_D,  (10)

where P_A, P_B, P_C, P_D, and P_A' denote the pixel values of pixels A, B, C, D, and A' respectively. x_0 and y_0 are normalized by 2 × Width.

Fig. 3. Bilinear interpolation.

A pair of breast patches before and after enhancement is shown in Fig. 4.

TABLE 2
Overall Composition of the Merged Dataset

Categories    MINI-MIAS  INbreast  Overall
Normality     207        2292      2499
Abnormality   123        2292      2415
Overall       330        4584      4914

We used mean squared error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Absolute Mean Brightness Error (AMBE) [43] to measure the quality changes of the images before and after enhancement. Let I and J be the image before and after enhancement respectively, of size m rows by n columns. Then the MSE can be expressed as:

MSE = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (J(i, j) − I(i, j))².  (11)
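The clip-and-redistribute step of Eqs. (8)-(9) can be sketched on a single subregion histogram as follows. This is a simplified illustration on a synthetic histogram; the real CLAHE additionally applies the interpolation of Eq. (10), and the clip limit here is an arbitrary choice.

```python
import numpy as np

# CLAHE's clip-and-redistribute step (Eqs. (8)-(9)) for one subregion;
# names mirror CL, Tn and AIn in the text.
def clip_and_redistribute(hist, clip_limit):
    """hist: normalized 256-bin histogram of one subregion."""
    excess = np.clip(hist - clip_limit, 0.0, None)
    t_n = excess.sum()              # total clipped mass, Tn
    ai_n = t_n / 256.0              # average increase AIn, Eq. (8)
    # Eq. (9): bins near the limit are capped at CL, others raised by AIn
    return np.where(hist >= clip_limit - ai_n, clip_limit, hist + ai_n)

rng = np.random.default_rng(1)
hist = rng.random(256)
hist /= hist.sum()                  # normalized like p_x^n
adjusted = clip_and_redistribute(hist, clip_limit=0.006)
assert adjusted.max() <= 0.006 + 1e-12   # no bin exceeds the clip limit
```

Capping the histogram at CL is what bounds the slope of the local mapping function, which is how CLAHE limits the noise over-amplification of plain adaptive histogram equalization.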


The calculation of PSNR is based on the MSE and can be written as:

PSNR = 10 × log_10((L − 1)² / MSE),  (12)

where L stands for the value of the highest grey level in the original image. AMBE, which is the absolute difference between the expectation of the enhanced image and the expectation of the original image, can then be presented in the following form:

AMBE = |E[Y] − E[X]|,  (13)

where |·| indicates the absolute value, while E[X] and E[Y] are calculated by:

E[X] = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} I(i, j),  (14)

E[Y] = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} J(i, j).  (15)

A bigger MSE means a more significant difference between the images before and after enhancement, which signals the effectiveness of the enhancement. A greater PSNR, reflecting the ratio between signal and noise, implies better suppression of noise.

According to the above equations, we calculated the corresponding values of MSE, PSNR, and AMBE. We obtained an MSE with an average value of 218.67 while the PSNR remained desirable. Moreover, the mean AMBE is 8.87, which is the increase in intensity. Detailed results are presented in Table 3. As contrast is another valuable indicator of image quality, we compared our enhanced images with the original images and found the enhanced images superior, as can be seen from Table 4.

TABLE 3
Measurements of Enhancement

ID OF PATCHES  MSE            PSNR          AMBE
1              306.79         16.93         13.62
2              242.81         20.39         9.23
3              266.10         18.03         12.07
...            ...            ...           ...
4914           513.87         20.39         10.80
               218.67 (mean)  20.21 (mean)  8.87 (mean)

TABLE 4
Contrast of Patches Before and After Enhancement

ID OF PATCHES  ORIGINAL      CDA-ENHANCED  DIFFERENCE
1              6.61          8.11          1.50
2              8.11          80.99         72.88
3              4.55          55.80         51.25
...            ...           ...           ...
4914           73.69         131.60        57.91
               10.42 (mean)  60.59 (mean)  50.17 (mean)

Fig. 4. Breast patches and corresponding enhanced patches.

3.3 Architecture

3.3.1 Architecture of CNN
In practice, the number of object categories in classification tasks is generally no more than a few hundred, while training a network with a large number of parameters from scratch is strenuous and time-consuming, especially when the training data is small. Therefore, transfer learning, an effective way to adapt classifiers trained in one domain to another, can be more feasible, though slight changes in architecture generally need to be made. There are two ways of transferring a pre-trained CNN to a new task: combining the pre-trained CNN as a feature extractor with a corresponding new classifier, or fine-tuning the pre-trained CNN by adjusting its architecture. When used as feature extractors, the parameters in the pre-trained CNNs remain unchanged, whilst features extracted at a certain depth of the CNN are taken as the input of the concatenated classifiers. When pre-trained CNNs are fine-tuned, the architecture of the base CNN has to be changed slightly. For example, the number of outputs in CNNs designed for large scale image classification is 1,000. However, when applying CNNs to binary classification, the last 1000-neuron fully connected layer should be replaced with a 2-neuron fully connected layer, which becomes the new classifier. The number of parameters that then need to be updated in the overall CNN outnumbers that of the network combining a CNN-based feature extractor with an adjoining classifier. Thus it takes longer to train the classifiers in the former form than in the latter.

To reduce the computational costs of our system in this work, we used pre-trained CNNs as feature extractors and therefore designed new classifiers that take the extracted features as input. The dimensionality of the features output by CNNs after the fully-connected layer is generally in the thousands. Directly feeding the new binary-classification classifier


with those features would lead to overfitting, which can be effectively mitigated by introducing a dropout layer between the fully-connected layers and the final classification layer. The gross architectures of the combined classifiers are presented in Fig. 5. Thereafter, to avoid confusion, when the state-of-the-art networks are mentioned in the remainder of this paper, they refer to the networks with the same adjustments as in Fig. 5.

Fig. 5. Architectures of proposed classifiers.

3.3.2 Architecture of SCDA
SCDA is a sequenced framework that consists of augmentation and concatenation. For augmentation, image scaling is applied first, and CLAHE is applied to the scaled images subsequently. In this research, we chose a scale factor greater than 1, which aims at providing refined images. Scaling is implemented by resizing the images in the training set to S_f times their original size, followed by a cropping operation that crops out patches having the same size as the original images from the centre of the scaled images (Fig. 6), where the blue circular item denotes the lesion region in the original image. The scaling factor S_f specifies the ratio between the original width and the width of the scaled image. After the acquisition of the cropped images, CLAHE is applied. The detailed flow of SCDA is presented below. Given the original training set X, the augmented training set X' can be expressed as:

X' = H[X, T(X)],  (16)

where H[·] denotes concatenation and T(X) is the transformation of X, which can be further expanded as:

T(X) = CLA(S(X)).  (17)

CLA(·) is the operation of CLAHE, while S(X) stands for the scaling transformation.

Fig. 6. Image scaling.

3.4 Experiment Design
To avoid unexpected experimental results due to inappropriate partitioning of the dataset, we utilised 5-fold cross-validation to examine our proposed data augmentation method and the classification models. At the beginning of each experiment, 80 percent of the dataset, namely 4 out of 5 folds, is partitioned into the training set, while the ratios between negative samples and positive samples in the training set and the testing set are kept roughly 1:1. CLAHE was then applied to the training set while the testing set remained unchanged. As a result, a new training set that combines the original training set and the enhanced training set is formed. Later on, different CNN models were trained on the training datasets before and after enhancement. Note that the size of the input images is adjusted accordingly to meet the input requirements of the different models.

3.4.1 Configurations
All of the experiments were carried out on a personal computer with a single GeForce GTX 1060 GPU and 8 GB of memory. The framework used in this research was the deep learning toolbox provided by Matlab with pre-trained deep CNN models. Unless otherwise specified, the following parameters were applied: the dropout probability of the fully connected layer was set to 0.5, the optimization algorithm was adaptive moment estimation, and the mini-batch size was 64. By trial and error, we set the maximum number of epochs to 20 and the initial learning rate to 0.01. The learning rate dropped to 0.1 of its previous value every 10 epochs.

3.4.2 Model Evaluations
In this paper, we followed the standardised accuracy calculation criteria. The accuracy comes from two parts, specificity and sensitivity, which can be defined as:

R_TP = TP / (TP + FN)  (18)

R_TN = TN / (TN + FP),  (19)

where TP, FN, TN, FP stand for true positive, false negative, true negative, and false positive, respectively. R_TP and
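The SCDA pipeline of Eqs. (16)-(17) can be sketched as below. The nearest-neighbour resize and the placeholder enhance() are simplifications standing in for the actual resizing and CLAHE implementations (e.g. scikit-image's equalize_adapthist could fill that role); the patch size and scale factor follow the text.

```python
import numpy as np

def scale_and_crop(img, sf=1.2):
    """S(X): enlarge by sf, then centre-crop back to the original size."""
    h, w = img.shape
    nh, nw = int(h * sf), int(w * sf)
    rows = np.arange(nh) * h // nh          # nearest-neighbour row map
    cols = np.arange(nw) * w // nw          # nearest-neighbour column map
    scaled = img[np.ix_(rows, cols)]
    top, left = (nh - h) // 2, (nw - w) // 2
    return scaled[top:top + h, left:left + w]

def enhance(img):
    """CLA(X): placeholder for CLAHE; the identity here is a stand-in."""
    return img

def scda(train_set, sf=1.2):
    """Eq. (16): X' = H[X, CLA(S(X))] - stack original and augmented sets."""
    augmented = [enhance(scale_and_crop(x, sf)) for x in train_set]
    return train_set + augmented

patches = [np.zeros((224, 224)) for _ in range(3)]
new_set = scda(patches)
assert len(new_set) == 6                    # doubled training set
assert new_set[3].shape == (224, 224)       # crop restores original size
```

Because the transformed images keep the original patch size, the augmented set can simply be stacked with the original set, doubling the number of training samples as Eq. (16) describes.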


$R_{TP}$ and $R_{TN}$ correspond to the true positive rate (the sensitivity) and the true negative rate (the specificity), respectively. Specificity and sensitivity, therefore, lead to the accuracy by:

$$R_{AC} = \frac{TN + TP}{TN + FN + TP + FP}. \qquad (20)$$

To examine the performance of the CNN models, the sensitivity, specificity and accuracy are calculated on the 5 test folds respectively and then averaged to eliminate fluctuations in performance due to the random partition of the dataset.

4 EXPERIMENT RESULTS

4.1 Performance of CNN Models Before and After Contrast Enhancement

As the enhancement experiment showed, the contrast, as well as the quality, of the enhanced images is greatly improved. To validate the effectiveness of utilising CLAHE as a data augmentation method, we trained the different CNN models on the original training set and on the augmented training set; the results are shown in Table 5.

TABLE 5
Averaged Performance of Classifier Before and After Enhancement

Models        CDA   Sensitivity (%)   Specificity (%)   Accuracy (%)
GoogLeNet     N     88.41             96.73             92.64
GoogLeNet     Y     90.84             98.10             94.53
ResNet-50     N     89.66             97.41             93.61
ResNet-50     Y     92.39             98.50             95.50
DenseNet201   N     87.35             96.53             92.02
DenseNet201   Y     90.72             97.95             94.40
Xception      N     88.85             96.65             92.82
Xception      Y     92.38             97.56             95.01
Inceptionv3   N     88.81             96.57             92.76
Inceptionv3   Y     92.80             97.53             95.21

CDA stands for whether CLAHE was applied as a data augmentation method (Y) or not (N). The bold font denotes the more preferable results. As can be seen from Table 5, all of the models have higher specificity than sensitivity, and many of them show similar final accuracy when trained on the original training set. However, ResNet-50 achieves exceptional accuracy, given that the figure is the mean accuracy over 5 folds. It seems that residual learning is more effective at producing useful representations of high-level features in this classification scenario. Moreover, the specificity and sensitivity of the CNN models trained on the augmented training set are consistently higher than those of the models trained on the original training set. As a result, the higher specificity and sensitivity contribute to a nearly 2.5 percent increase in accuracy, which shows the feasibility of the proposed data augmentation method.

4.2 Performance of SCDA

To validate the performance of SCDA, we compared SCDA with other traditional data augmentation methods on ResNet-50. The results are shown in Table 6.

TABLE 6
Results of ResNet-50 Trained by the Augmented Training Set

Augmentation method   Sensitivity (%)   Specificity (%)   Accuracy (%)
Scaling               93.24             98.10             95.71
Vertical shift        92.78             98.33             95.60
Horizontal shift      92.52             98.48             95.55
Rotation              92.17             98.61             95.45
Vertical flip         92.44             98.70             95.62
Horizontal flip       92.80             98.44             95.67
SCDA (our method)     92.83             98.55             95.74

The scaling factor is 1.2. Vertical shift and horizontal shift translate the images by 20 pixels downward and rightward, respectively. For rotation, we choose a random rotation angle ranging from 0 to 360 degrees. Vertical flip and horizontal flip generate new images by flipping the images vertically and horizontally. In our method, we set the scaling rate to 1.2 because scaling with a factor of 1.2 not only gives the best-trained ResNet-50 amongst all of the traditional methods but also shows high sensitivity. A possible explanation is that fine-grained structures in the scaled images become clearer and more recognizable. However, the reason why vertical flip provides the highest specificity remains to be explored. Nevertheless, our method performed best in terms of accuracy, though sensitivity and specificity can still be improved.

4.3 Comparison With State-of-the-Art Approaches

In this section, we compare our best model with state-of-the-art methods. The best performance of our model achieved an accuracy of 96.11 percent, while the averaged accuracy of our model over 5 runs of cross-validation is 95.74 percent with a deviation of 0.24. As can be seen, our model outperforms all of the other approaches by a significant margin in terms of accuracy (Table 7).

TABLE 7
Method Comparison

Models                 Method              Averaged Accuracy (%)   Best Accuracy (%)
Nguyen [44]            HMI + FNN           73.50 ± 1.35            -
Görgel & Sertbas [45]  SWT-SVM             -                       90.10
Liu [46]               WFRFT + PCA + SVM   92.16 ± 3.60            -
Wu [47]                FRFE + CAR-BBO      -                       92.52
Our method             ResNet-SCDA-50      95.74 ± 0.24            96.11

5 DISCUSSION

Due to the powerful performance of deep CNNs in feature extraction, some traditional image preprocessing techniques are becoming unnecessary for CNN-based diagnostic systems. However, we deployed CLAHE image contrast enhancement as a new data augmentation method and found that the performance of the system improved greatly. In the experimental results of CLAHE, all of the metrics showed significant increases, which means the quality and the contrast of the images are improved. The purposes of data augmentation are mainly two-fold: one is to mitigate the problem brought by lack of data; the other is to solve the problem of overfitting.
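The SCDA transformation of Eq. (17), T(X) = CLA(S(X)), can be sketched as below. This is a simplified illustration under stated assumptions: scaling uses nearest-neighbour resampling at the paper's factor of 1.2, and CLAHE is reduced to a single-tile, clip-limited histogram equalisation (a full implementation, e.g., OpenCV's createCLAHE, equalises per tile and blends tiles bilinearly); the clip limit of 40 is illustrative:

```python
def scale(img, factor=1.2):
    """S(X): enlarge a grayscale image (list of lists of 0-255 ints)
    by nearest-neighbour resampling."""
    h, w = len(img), len(img[0])
    nh, nw = round(h * factor), round(w * factor)
    return [[img[min(int(r / factor), h - 1)][min(int(c / factor), w - 1)]
             for c in range(nw)] for r in range(nh)]

def clip_limited_equalize(img, clip_limit=40, levels=256):
    """CLA(X), simplified to one global tile: clip the histogram,
    redistribute the excess uniformly, then equalize via the CDF."""
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    excess = sum(max(0, h - clip_limit) for h in hist)
    hist = [min(h, clip_limit) + excess // levels for h in hist]
    cdf, total, mapping = 0, sum(hist), []
    for h in hist:
        cdf += h
        mapping.append(round((levels - 1) * cdf / total))
    return [[mapping[p] for p in row] for row in img]

def scda(img):
    """T(X) = CLA(S(X)), Eq. (17)."""
    return clip_limited_equalize(scale(img))

patch = [[(r * 7 + c * 13) % 256 for c in range(10)] for r in range(10)]
augmented = scda(patch)  # a 12x12, contrast-equalized version of the patch
```

Applying the contrast enhancement after scaling, as in Eq. (17), means the equalisation operates on the enlarged fine-grained structures rather than the original resolution.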

YU ET AL.: RESNET-SCDA-50 FOR BREAST ABNORMALITY CLASSIFICATION 101

As can be seen from the results, CLAHE perfectly fulfils these two objectives and results in an averaged increase in the accuracy of all of the models. Therefore, CLAHE can be used as a new data augmentation method besides being an image contrast enhancement method.

Based on this, the proposed SCDA method, which combines traditional image scaling and CLAHE, performs even better for data augmentation than CDA or scaling alone, as the experimental results prove. There is also great potential in exploring multiple combinations that integrate two or more traditional data augmentation methods with our proposed method.

6 CONCLUSION AND FUTURE WORK

In this paper, we proposed a new data augmentation method termed SCDA and developed a diagnostic system for the accurate classification of breast abnormalities. Before inputting the patches acquired from the original mammogram images to our CNN, we improved data preprocessing and data augmentation by applying the CLAHE contrast enhancement method to patches whose ROIs were enlarged by scaling. Measurement of image contrast shows that the quality of the image patches is considerably improved. To identify the CNN models with the best performance on the binary classification task, we explored models with state-of-the-art connection methods, including inception modules (GoogLeNet, Inception v3), residual learning (ResNet), dense connections (DenseNet), and depthwise and pointwise convolutions (Xception). The experimental results show that ResNet-50 performs best amongst all of these models. Therefore, we propose to train ResNet-50 on the augmented training set formed by the SCDA method; ResNet-50 then achieved an averaged accuracy of 95.74 percent under 5-fold cross-validation. We therefore believe our system has great potential in the field of diagnosing breast abnormalities. However, some aspects of our system can still be enhanced. As can be seen from the experimental results, the sensitivity, which is an important indicator of the performance of a CAD system, could be further improved. This problem could be partly solved by further exploring combinations of classical data augmentation methods with our proposed one, which will be one of our future works. Moreover, the performance of the diagnosis system relies highly on the performance of the detection system. Therefore, we will focus on developing automatic detection systems for breast abnormalities that integrate our improved diagnostic system, and thereby complete an end-to-end CAD system for the full mammographic image. Also, the size of the dataset indirectly determines the performance of CAD systems, so it is more reasonable for us to work on a larger dataset, which would be meaningful for improving the performance of the current system.

ACKNOWLEDGMENTS

This work was supported by the State's Key Project of Research and Development Plan (2017YFA0104302, 2017YFC0109202, 2017YFC0107900), National Natural Science Foundation of China (61602250, 81530060, 81471752), Henan Key Research and Development Project (182102310629), National Key Research and Development Plan (2017YFB1103202), Guangxi Key Laboratory of Trusted Software (kx201901), International Exchanges Cost Share 2018, UK (RP202G0230), Hope Foundation for Cancer Research, UK (RM60G0680), and Medical Research Council Confidence in Concept Award, UK (MC_PC_17171). Xiang Yu holds a China Scholarship Council studentship with the University of Leicester.

REFERENCES

[1] W. C. R. Fund, Breast cancer statistics, May 12, 2018. [Online]. Available: https://www.wcrf.org/dietandcancer/cancer-trends/breast-cancer-statistics
[2] WHO, Cancer, May 12, 2019. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/cancer
[3] J. B. Sarah Caul, Cancer registration statistics, England: 2017, May 26, 2019. [Online]. Available: https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/cancerregistrationstatisticsengland/2017
[4] A. C. Society, May 12, 2016. [Online]. Available: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/breast-cancer-facts-and-figures/breast-cancer-facts-and-figures-2015-2016.pdf
[5] K. P. Michiel Kallenberg, "Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring," IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1322–1331, May 2016.
[6] N. Dhungel, G. Carneiro, and A. P. Bradley, "A deep learning approach for the analysis of masses in mammograms with minimal user intervention," Med. Image Anal., vol. 37, pp. 114–128, Apr. 2017.
[7] L. Shen, "End-to-end training for whole image breast cancer diagnosis using an all convolutional design," 2017, arXiv:1711.05775.
[8] R. Agarwal, O. Diaz, X. Lladó, M. H. Yap, and R. Martí, "Automatic mass detection in mammograms using deep convolutional neural networks," J. Med. Imag., vol. 6, 2019, Art. no. 031409.
[9] M. Heath, K. Bowyer, D. Kopans, and R. Moore, "The digital database for screening mammography," in Proc. 5th Int. Workshop Digit. Mammography, 2000, pp. 212–218.
[10] W. Lotter, G. Sorensen, and C. David, "A multi-scale CNN and curriculum learning strategy for mammogram classification," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Berlin, Germany: Springer, 2017, pp. 169–177.
[11] R. Fedkiw and S. Osher, Level Set Methods and Dynamic Implicit Surfaces. Berlin, Germany: Springer, vol. 153, 2006.
[12] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
[13] T. Kooi et al., "Large scale deep learning for computer aided detection of mammographic lesions," Med. Image Anal., vol. 35, pp. 303–312, Jan. 2017.
[14] N. Tajbakhsh et al., "Convolutional neural networks for medical image analysis: Full training or fine tuning?," IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1299–1312, May 2016.
[15] R. K. Samala, H. P. Chan, L. Hadjiiski, M. A. Helvie, J. Wei, and K. Cha, "Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography," Med. Phys., vol. 43, Dec. 2016, Art. no. 6654.
[16] B. Q. Huynh, H. Li, and M. L. Giger, "Digital mammographic tumor classification using transfer learning from deep convolutional neural networks," J. Med. Imag., vol. 3, 2016, Art. no. 034501.
[17] A. G. Gale, "The mammographic image analysis society digital mammogram database," in Proc. 2nd Int. Workshop Digit. Mammography, 1994, vol. 1069, pp. 375–378.
[18] I. C. Moreira, I. Amaral, I. Domingues, A. Cardoso, M. J. Cardoso, and J. S. Cardoso, "INbreast: Toward a full-field digital mammographic database," Acad. Radiol., vol. 19, pp. 236–248, Feb. 2012.
[19] Z. S. Pisano et al., "Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms," J. Digit. Imag., vol. 11, pp. 193–200, Nov. 1998.
[20] N. M. Sasi and V. Jayasree, "Contrast limited adaptive histogram equalization for qualitative enhancement of myocardial perfusion images," Engineering, vol. 5, 2013, Art. no. 326.
[21] Y. Jiang et al., "Collaborative fuzzy clustering from multiple weighted views," IEEE Trans. Cybern., vol. 45, pp. 688–701, 2014.
[22] Y. Jiang, J. Zheng, X. Gu, J. Xue, and P. Qian, "A novel synthetic CT generation method using multitask maximum entropy clustering," IEEE Access, vol. 7, pp. 119644–119653, 2019.
[23] P. Qian et al., "mDixon-based synthetic CT generation for PET attenuation correction on abdomen and pelvis jointly using transfer fuzzy clustering and active learning-based classification," IEEE Trans. Med. Imag., vol. 39, no. 4, pp. 819–832, Apr. 2020.


[24] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei, ILSVRC-2012, 2012. [Online]. Available: http://www.image-net.org/challenges/LSVRC/2012/
[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Int. Conf. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
[26] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, arXiv:1409.1556.
[27] C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015, pp. 1–9.
[28] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[29] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 4700–4708.
[30] P. Qian et al., "SSC-EKE: Semi-supervised classification with extensive knowledge exploitation," Inf. Sci., vol. 422, pp. 51–76, 2018.
[31] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, "Natural language processing (almost) from scratch," J. Mach. Learn. Res., vol. 12, pp. 2493–2537, 2011.
[32] A. Bordes, S. Chopra, and J. Weston, "Question answering with subgraph embeddings," in Proc. Conf. Empir. Methods Natural Lang. Process., 2014, pp. 615–620.
[33] G. Hinton et al., "Deep neural networks for acoustic modeling in speech recognition," IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, Nov. 2012.
[34] K. Xia, H. Yin, P. Qian, Y. Jiang, and S. Wang, "Liver semantic segmentation algorithm based on improved deep adversarial networks in combination of weighted loss function on abdominal CT images," IEEE Access, vol. 7, pp. 96349–96358, 2019.
[35] K.-J. Xia, H.-S. Yin, and Y.-D. Zhang, "Deep semantic segmentation of kidney and space-occupying lesion area based on SCNN and ResNet models combined with SIFT-flow algorithm," J. Med. Syst., vol. 43, 2019, Art. no. 2.
[36] P. Qian et al., "Cross-domain, soft-partition clustering with diversity measure and knowledge reference," Pattern Recognit., vol. 50, pp. 155–177, 2016.
[37] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 580–587.
[38] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Proc. Int. Conf. Neural Inf. Process. Syst., 2015, pp. 91–99.
[39] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 2818–2826.
[40] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1251–1258.
[41] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in Proc. 31st AAAI Conf. Artif. Intell., 2017, pp. 4278–4284.
[42] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proc. 27th Int. Conf. Mach. Learn., 2010, pp. 807–814.
[43] R. Porwal and S. Gupta, "Appropriate contrast enhancement measures for brain and breast cancer images," Int. J. Biomed. Imag., vol. 2016, 2016, Art. no. 4710842.
[44] X. Zhang, J. Yang, and E. Nguyen, "Breast cancer detection via Hu moment invariant and feedforward neural network," in Proc. AIP Conf., 2018, Art. no. 030014.
[45] P. Görgel, A. Sertbas, and O. Uçan, "Computer-aided classification of breast masses in mammogram images based on spherical wavelet transform and support vector machines," Expert Syst., vol. 32, pp. 155–164, 2015.
[46] G. Liu, "Computer-aided diagnosis of abnormal breasts in mammogram images by weighted-type fractional Fourier transform," Adv. Mech. Eng., vol. 8, Feb. 2016, Art. no. 11.
[47] X. Wu, "Smart detection on abnormal breasts in digital mammography based on contrast-limited adaptive histogram equalization and chaotic adaptive real-coded biogeography-based optimization," Simulation, vol. 92, pp. 873–885, Sep. 2016.

Xiang Yu received the bachelor's and master's degrees from Huanggang Normal University and Xiamen University, P.R. China, in 2014 and 2018, respectively. Currently, he is working toward the PhD degree in the Department of Informatics, University of Leicester, U.K. He was sponsored by the CSC and by the University of Leicester as a graduate teaching assistant (GTA). His research interests include medical image segmentation, machine learning, and deep learning.

Cheng Kang received the master's degree from Shenzhen University and is currently working toward the PhD degree in the Department of Informatics, University of Leicester, U.K. His research interests focus on EEG signal processing and deep learning. He is currently working on the early detection of breast cancer by artificial intelligence.

David Guttery is currently a co-investigator on an integrated, collaborative programme of clinical and translational research between the University of Leicester and Imperial College funded by Cancer Research UK. His research interests are intertwined with those of Professor Jacqui Shaw (see Professor Jacqui Shaw's webpage), and focus on the utility of circulating nucleic acids and other circulating biomarkers for the early detection and monitoring of cancer.

Seifedine Kadry (Senior Member, IEEE) received the bachelor's degree in applied mathematics from Lebanese University, in 1999, the MS degree in computation from Reims University, France, and EPFL (Lausanne), in 2002, the PhD degree in applied statistics from Blaise Pascal University, France, in 2007, and the HDR degree from Rouen University, in 2017. At present his research focuses on education using technology, system prognostics, stochastic systems, and probability and reliability analysis. He is an ABET program evaluator.

Yang Chen (Senior Member, IEEE) received the MS and PhD degrees in biomedical engineering from First Military Medical University, China, in 2004 and 2007, respectively. Since 2008, he has been a faculty member with the Department of Computer Science and Engineering, Southeast University, China. His recent work concentrates on medical image reconstruction, image analysis, pattern recognition, and computerized-aid diagnosis.

Yu-Dong Zhang (Senior Member, IEEE) received the PhD degree from Southeast University, in 2010. He worked as a postdoctoral researcher from 2010 to 2012 at Columbia University, USA, and as an assistant research scientist from 2012 to 2013 at the Research Foundation of Mental Hygiene (RFMH), USA. He served as a full professor from 2013 to 2017 at Nanjing Normal University, where he was the director and founder of the Advanced Medical Image Processing Group. Now he serves as a professor with the Department of Informatics, University of Leicester, U.K.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/csdl.
