ResNet-SCDA-50 For Breast Abnormality Classification
Abstract—(Aim) Breast cancer is the most common cancer in women and the second most common cancer worldwide. With the rapid
advancement of deep learning, the early stages of breast cancer development can be accurately detected by radiologists with the help of
artificial intelligence systems. (Method) Based on mammographic imaging, a mainstream clinical breast screening technique, we present
a diagnostic system for accurate classification of breast abnormalities based on ResNet-50. To improve the proposed model, we created
a new data augmentation framework called SCDA (Scaling and Contrast limited adaptive histogram equalization Data Augmentation).
In its procedure, we first apply the scaling operation to the original training set, followed by contrast limited adaptive histogram
equalization (CLAHE) applied to the scaled training set. By stacking the training set after SCDA with the original training set, we formed a new
training set. The network trained on the augmented training set is coined ResNet-SCDA-50. Our system, which aims at binary
classification of mammographic images acquired from INbreast and MINI-MIAS, classifies masses and microcalcifications as “abnormal”,
while normal regions are classified as “normal”. (Results) We present the first attempt to use the image contrast enhancement method
as the data augmentation method, resulting in an averaged 98.55 percent specificity and 92.83 percent sensitivity, which gives our best
model an overall accuracy of 95.74 percent. (Conclusion) Our proposed method is effective in classifying breast abnormality.
Index Terms—Breast cancer, ResNet-50, contrast limited adaptive histogram equalization, classification
X. Yu and C. Kang are with the Department of Informatics, University of Leicester, Leicester LE1 7RH, U.K. E-mail: {xy144, ck254}@le.ac.uk.
D. S. Guttery is with the Leicester Cancer Research Centre, University of Leicester, Leicester LE2 7LX, U.K. E-mail: [email protected].
S. Kadry is with the Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Beirut, Lebanon. E-mail: [email protected].
Y. Chen is with the Laboratory of Image Science and Technology, Southeast University, Nanjing 210096, China, and also with the School of Computer Science and Engineering, the School of Cyber Science and Engineering, and the Key Laboratory of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing 210096, China. E-mail: [email protected].
Y.-D. Zhang is with the Department of Informatics, University of Leicester, Leicester LE1 7RH, U.K., and also with the Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia. E-mail: [email protected].

Manuscript received 18 Oct. 2019; revised 28 Dec. 2019; accepted 9 Feb. 2020. Date of publication 13 Apr. 2020; date of current version 3 Feb. 2021. (Corresponding author: Y.-D. Zhang.)
Digital Object Identifier no. 10.1109/TCBB.2020.2986544

1 INTRODUCTION

As one of the most aggressive cancers worldwide, breast cancer caused more than 2 million new cases in 2018 [1], [2]. Specifically, the incidence of breast cancer in the UK had risen to 46,109 in 2017, accounting for 15.1 percent of all cancer cases and making it the most common cancer diagnosed in that year [3]. It is widely recognized that prevention and early diagnosis significantly reduce cancer mortality. While primary prevention strategies for breast cancer are still under exploration, early detection considerably improves prognosis. For early detection of breast cancer, mammography is one of the most commonly used screening techniques, reported to be responsible for a 20-40 percent reduction in fatalities [4]. Furthermore, it provides radiologists with an image upon which they can make diagnostic decisions accordingly. However, the time-consuming interpretation and complexity of mammograms can result in a high false-positive rate and, more importantly, misdiagnosis (i.e., false negatives). Therefore, efficient and accurate computerised auxiliary diagnostic systems are becoming increasingly important both for radiologists and clinical practice.

Deep learning has come to the fore as a method for modern computer-aided diagnosis (CAD) systems, as the performance of conventional CAD systems remains far from satisfactory. Unlike traditional breast cancer CAD systems, which rely heavily on manually designed components for specific purposes and are hindered by a lack of generality, modern CAD systems incorporating deep learning have improved in both accuracy and efficiency [5], [6], [7], [8]. Compared with multiple-step traditional CAD systems, CAD systems based on deep CNNs generally consist solely of candidate detection and classification components.

To achieve an end-to-end lesion detection system, Lotter et al. proposed to train a deep CNN with patch-based lesion regions in the DDSM dataset [9]. Subsequently, candidate areas, determined by a scanning model, are fed to the pre-trained classifier [10]. To minimise user intervention while using the detection system, an automated CAD system that integrates detection, segmentation, and classification of breast masses was proposed in [6]. In that study, cascaded deep learning methods for detection were applied to select hypotheses, which were subsequently refined by Bayesian optimization. Furthermore, the segmentation breaks down into two steps: gross segmentation is obtained by deep structured output learning and then improved by a level set method [11], [12]. Classification is realized by a deep learning classifier pre-trained with hand-crafted features and
fine-tuned on annotated breast mass classification datasets. The approach was tested on the CBIS-DDSM dataset and achieved a true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per mammogram on testing data. In another study, 45,000 mammogram images from a private database were used to compare the performance of traditional CAD systems with CNN-based approaches [13]. Usually, supervised deep CNN models have to be trained with a large cohort of images before overfitting is eliminated and high-level performance is achieved. However, large numbers of annotated images are sometimes inaccessible, primarily due to the high potential expenditure on collection and maintenance. Transfer learning, therefore, is widely employed in various studies because it allows the knowledge learned from one domain to be transferred into another. Depending on the specific scenario in which transfer learning is utilised, pre-trained deep CNN models can be repurposed for detection and classification tasks by fine-tuning, or can purely be used as feature extractors. As justified by Tajbakhsh et al., pre-trained deep CNNs, after sufficient fine-tuning, performed no worse than deep CNNs trained from scratch; pre-trained deep CNNs also showed higher robustness to the size of the dataset used for fine-tuning [14]. However, choosing between deep tuning and shallow tuning is difficult, as no standard criteria exist to determine which one is superior; it depends on the specific scenario. Furthermore, Azizpour et al. pointed out that high similarity between the databases used for pre-training and the targeted databases gives rise to the success of the transfer learning approach. For transfer learning in the field of breast cancer CAD systems, numerous works have led to successful conclusions. A seven-layer CNN model, with four convolutional layers and three fully connected layers, achieved an AUC of 0.81 on a digital breast tomosynthesis (DBT) training set after being trained on regions of interest (ROIs) extracted from the DBT dataset; the AUC increased to 0.90 after transfer learning with DBT [15]. In another study, the capability of different CNNs for tumour feature extraction when used as feature extractors was compared. The performance of support vector machines trained on CNN-extracted features and computer-extracted features showed that transfer learning improves the performance of computer-aided diagnosis (CADx) systems [16].

In this paper, we present a new breast abnormality diagnostic system that achieves high accuracy on the MINI-MIAS and INbreast datasets [17], [18]. In these two datasets, abnormalities in mammograms (mainly microcalcifications and masses) are annotated by experienced radiologists and extracted according to location information. The backbone of our model is ResNet-SCDA-50, which is based on ResNet-50 trained on the augmented data given by our SCDA, which applies a scaling operation and CLAHE to form the augmented training dataset. CLAHE was performed for two reasons: 1) CLAHE is widely used for medical image enhancement [19], [20], and the contrast of breast images is greatly enhanced after applying CLAHE, so higher quality images for the subsequent components in our system can be obtained; and 2) it can be repurposed as a data augmenter by stacking the enhanced images with the original images. Scaling is involved because we found that our CNN models trained on a dataset scaled and enhanced by CLAHE showed better performance. The classifiers designed in this work were based on a transfer learning approach that utilizes the CNN models as feature extractors with a new classifier layer concatenated. We show here that ResNet-50 is superior as a feature extractor for classification. Therefore, after cascading all of the optimized blocks, we found our ResNet-SCDA-50 achieved 95.71 percent accuracy, which outperformed the state-of-the-art by a large margin.

2 BACKGROUND

The development of image processing techniques can be divided into the pre-deep-learning era and the deep learning era. In the pre-deep-learning era, image processing tasks like segmentation, clustering, and enhancement were solved by constructing specific models [21], [22], [23], which were later termed classical methods. Initially designed for large scale image recognition [24], [25], [26], [27], [28], [29], [30], deep CNNs have been widely utilized in a range of fields including natural language processing [31], [32], speech recognition [33], segmentation [34], [35], [36] and object localization [37], [38], thanks to the rapid development of firmware and software. Also, recent years have witnessed great advancement of deep CNNs in depth, in connections between layers, and in ever more advanced training methods [27], [28], [29], [39], [40], [41].

It is difficult to tell which factor contributes most to the success of advanced deep CNNs, but it is certain that the connections between layers play a role too profound to be neglected. In GoogLeNet [27], a novel architecture termed inception, which increases the depth and width of the network while keeping the computational cost low, was developed. InceptionV3 [39], an advanced version of GoogLeNet, improves the inception model by replacing big filter kernels with smaller ones and introducing new connections. Inspired by the Inception block, the Xception architecture [40], which consists of a depthwise convolution followed by a pointwise convolution, was proposed. Compared to canonical convolution, depthwise and pointwise convolution is more parameter-efficient. Given feature maps of size M x N with W channels, kernels of size D_F x D_F, and output feature maps of size M' x N' with W' channels, the total number of parameters of a canonical convolution is D_F * D_F * W * W', with bias terms neglected. For a depthwise convolution, where each input channel is filtered by a single kernel, the number of parameters is D_F * D_F * W; for a pointwise convolution, where the kernel size is 1 x 1, the number of parameters is W * W', which leads to a total of (D_F * D_F + W') * W parameters. Therefore, the ratio of parameters between depthwise-plus-pointwise convolution and canonical convolution is:

    R_D = ((D_F * D_F + W') * W) / (D_F * D_F * W * W')
        = 1 / W' + 1 / D_F^2.                                  (1)

As can be seen in Eq. (1), the reduction rate 1 - R_D can be substantial given proper W' and D_F. To solve the problem of training deep CNNs, residual learning, a new short-cut connection method, was presented in [28]. In residual learning, given the identity X and the mapping F(X), the underlying mapping of features H(X) after a sequence of stacked layers is related to F(X) and X by H(X) = F(X) + X.
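The parameter counts behind Eq. (1) are easy to check numerically. The sketch below (plain Python; the helper names are ours, not the paper's) compares a canonical convolution against its depthwise-plus-pointwise factorisation:

```python
def canonical_params(df, w_in, w_out):
    """Parameters of a standard convolution with df x df kernels,
    w_in input channels and w_out output channels (bias ignored)."""
    return df * df * w_in * w_out

def separable_params(df, w_in, w_out):
    """Depthwise (df*df*w_in) plus pointwise (w_in*w_out) parameters."""
    return df * df * w_in + w_in * w_out

def reduction_ratio(df, w_in, w_out):
    """R_D from Eq. (1): separable / canonical = 1/w_out + 1/df**2."""
    return separable_params(df, w_in, w_out) / canonical_params(df, w_in, w_out)

# Example: 3x3 kernels, 64 input channels, 128 output channels.
rd = reduction_ratio(3, 64, 128)
# The computed ratio matches the closed form 1/W' + 1/D_F^2.
assert abs(rd - (1 / 128 + 1 / 3 ** 2)) < 1e-12
```

For a typical 3x3 kernel the separable form thus needs roughly 1/9 of the parameters, which is why Xception-style blocks scale well in width.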
TABLE 1
Details of MINI-MIAS
TABLE 4
Contrast of Patches Before and After Enhancement

ID of patches   Original   CDA-enhanced   Difference
1                6.61        8.11           1.50
2                8.11       80.99          72.88
3                4.55       55.80          51.25
...              ...         ...            ...
4914            73.69      131.60          57.91
Mean            10.42       60.59          50.17

Fig. 4. Breast patches and corresponding enhanced patches.

The calculation of PSNR is based on MSE and can be written as:

    PSNR = 10 log_10( (L - 1)^2 / MSE ),                       (12)

where L stands for the number of grey levels in the original image, so that L - 1 is the highest grey level. AMBE, which is the absolute difference between the expectation of the enhanced image and the expectation of the original image, can then be presented in the following form:

    AMBE = | E[Y] - E[X] |,                                    (13)

where | . | indicates the absolute value, while E[X] and E[Y] are calculated by:

    E[X] = (1 / (m n)) * sum_{i=1}^{m} sum_{j=1}^{n} I(i, j)   (14)

    E[Y] = (1 / (m n)) * sum_{i=1}^{m} sum_{j=1}^{n} J(i, j).  (15)

A larger MSE indicates a more significant difference between the images before and after enhancement, which signals the effectiveness of the enhancement. A greater PSNR, reflecting the ratio between signal and noise, implies better suppression of noise.

TABLE 3
Measurements of Enhancement

ID of patches   MSE      PSNR    AMBE
1               306.79   16.93   13.62
2               242.81   20.39    9.23
3               266.10   18.03   12.07
...             ...      ...     ...
4914            513.87   20.39   10.80
Mean            218.67   20.21    8.87

According to the above equations, we were able to calculate the corresponding values of MSE, PSNR, and AMBE. We obtained an average MSE of 218.67 while the PSNR remained desirable. Moreover, the mean AMBE is 8.87, which reflects the increase in intensity. Detailed results are presented in Table 3. As contrast is another valuable indicator of image quality, we compared our enhanced images with the original images and found the enhanced images superior, as can be seen from Table 4.

3.3 Architecture

3.3.1 Architecture of CNN

However, in practice, the number of object categories in classification tasks is generally no more than a few hundred, while training a network with a large number of parameters from scratch is strenuous and time-consuming, especially when the size of the training data is small. Therefore, transfer learning, an effective way to adapt classifiers trained in one domain to another, can be more feasible, though slight changes in architecture generally need to be made. There are two ways of transferring a pre-trained CNN to a new task: combining the pre-trained CNN as a feature extractor with a corresponding new classifier, or fine-tuning the pre-trained CNN by adjusting its architecture. When used as feature extractors, the parameters in the pre-trained CNNs remain unchanged, whilst features extracted at a certain depth of the CNN are taken as the input of the concatenated classifiers. When pre-trained CNNs are fine-tuned, the architecture of the base CNN has to be changed slightly. For example, the number of outputs in CNNs designed for large scale image classification is 1,000; when applying such CNNs to binary classification, the last 1000-neuron fully connected layer should be replaced with a 2-neuron fully connected layer, which becomes the new classifier. The number of parameters in the overall CNN that then need to be updated outnumbers that of a network combining a CNN-based feature extractor with an adjoining classifier; thus it takes a longer time to train classifiers in the former form compared to the latter.

To reduce the computational costs of our system in this work, we used pre-trained CNNs as feature extractors and therefore designed new classifiers that take the extracted features as input. The dimensions of the features output by CNNs after the fully-connected layer are generally more than a thousand. Directly feeding the new binary-classification classifier
TABLE 5
Averaged Performance of Classifiers Before and After Enhancement

Models        CDA   Sensitivity (%)   Specificity (%)   Accuracy (%)
GoogLeNet     N     88.41             96.73             92.64
              Y     90.84             98.10             94.53
ResNet-50     N     89.66             97.41             93.61
              Y     92.39             98.50             95.50
DenseNet201   N     87.35             96.53             92.02
              Y     90.72             97.95             94.40
Xception      N     88.85             96.65             92.82
              Y     92.38             97.56             95.01
InceptionV3   N     88.81             96.57             92.76
              Y     92.80             97.53             95.21

TABLE 6
Results of ResNet-50 Trained by the Augmented Training Set

Augmentation method   Sensitivity (%)   Specificity (%)   Accuracy (%)
Scaling               93.24             98.10             95.71
Vertical shift        92.78             98.33             95.60
Horizontal shift      92.52             98.48             95.55
Rotation              92.17             98.61             95.45
Vertical flip         92.44             98.70             95.62
Horizontal flip       92.80             98.44             95.67
SCDA (our method)     92.83             98.55             95.74

TABLE 7
Method Comparison
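The SCDA pipeline compared in Table 6 (enlarge by scaling, apply contrast enhancement, then stack the result onto the original training set) can be sketched as below. This is a minimal illustration, not the authors' implementation: nearest-neighbour upscaling and a global histogram equalization stand in for the paper's scaling and CLAHE steps (true CLAHE additionally works on tiles and clips the histogram), and all function names are hypothetical.

```python
def scale(img, factor=2):
    """Nearest-neighbour upscaling as a stand-in for the scaling step."""
    return [[img[i // factor][j // factor]
             for j in range(len(img[0]) * factor)]
            for i in range(len(img) * factor)]

def hist_equalize(img, levels=256):
    """Global histogram equalization: a simplified stand-in for CLAHE."""
    m, n = len(img), len(img[0])
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    # Cumulative distribution of grey levels.
    cdf, running = [0] * levels, 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    total = m * n
    # Map each pixel through the normalised CDF.
    return [[round((cdf[v] / total) * (levels - 1)) for v in row] for row in img]

def scda(training_set):
    """SCDA as described in the paper: scale, then contrast-enhance,
    then stack the augmented images with the original training set."""
    augmented = [hist_equalize(scale(img)) for img in training_set]
    return training_set + augmented

train = [[[0, 64], [128, 255]]]       # one toy 2x2 "patch"
new_train = scda(train)                # originals plus enhanced, scaled copies
```

The stacking step is what turns the enhancement into augmentation: the network sees each patch both in its original contrast and in its enhanced, enlarged form.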
fulfils these two objectives and results in an averaged increase in accuracy across all of the models. Therefore, CLAHE can be used as a new data augmentation method besides an image contrast enhancement method.

Based on this, the proposed SCDA method, which combines traditional image scaling and CLAHE, performs even better on data augmentation than CDA or scaling alone, as demonstrated by the experimental results. Also, there is great potential in exploring multiple combinations that integrate two or more traditional data augmentation methods with our proposed method.

6 CONCLUSION AND FUTURE WORK

In this paper, we proposed a new data augmentation method termed SCDA and developed a diagnostic system for accurate classification of breast abnormalities. Before inputting the patches acquired from original mammogram images to our CNN network, we improved data preprocessing and data augmentation by applying the CLAHE contrast enhancement method to patches with ROIs enlarged by scaling. Measurement of image contrast shows that the quality of the image patches is considerably improved. To identify the CNN models showing the best performance on the binary classification task, we explored models with state-of-the-art connection methods, including inception (GoogLeNet, InceptionV3), residual learning (ResNet), dense connection (DenseNet), and depthwise and pointwise convolution (Xception). The experimental results show that ResNet-50 gives the best performance amongst all of those models. Therefore, we propose to provide ResNet-50 with the augmented training set formed by the SCDA method; ResNet-50 achieved an averaged accuracy of 95.74 percent under 5-fold cross-validation. We therefore believe our system has great potential in the field of diagnosing breast abnormalities. However, there are some aspects of our system to be enhanced. As can be seen from the experimental results, the sensitivity, which is an important reference for the performance of a CAD system, could be further improved. This problem could be partly solved by further exploration of combinations of classical data augmentation methods and our proposed one, which will be one of our future works. Moreover, the performance of the diagnosis system relies highly on the performance of the detection system. Therefore, we will focus on developing automatic detection systems for breast abnormalities by integrating our improved diagnostic system, and thereby complete an end-to-end CAD system for the full mammographic image. Also, the size of the dataset indirectly determines the performance of CAD systems, so working on a larger dataset would be meaningful for improving the performance of the current system.

ACKNOWLEDGMENTS

This work was supported by the State's Key Project of Research and Development Plan (2017YFA0104302, 2017YFC0109202, 2017YFC0107900), National Natural Science Foundation of China (61602250, 81530060, 81471752), Henan Key Research and Development Project (182102310629), National Key Research and Development Plan (2017YFB1103202), Guangxi Key Laboratory of Trusted Software (kx201901), International Exchanges Cost Share 2018, UK (RP202G0230), Hope Foundation for Cancer Research, UK (RM60G0680), and Medical Research Council Confidence in Concept Award, UK (MC_PC_17171). Xiang Yu holds a China Scholarship Council studentship with the University of Leicester.

REFERENCES
[1] W. C. R. Fund, Breast Cancer Statistics, May 12, 2018. [Online]. Available: https://www.wcrf.org/dietandcancer/cancer-trends/breast-cancer-statistics
[2] WHO, Cancer, May 12, 2019. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/cancer
[3] S. Caul and J. Broggio, Cancer Registration Statistics, England: 2017, May 26, 2019. [Online]. Available: https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/cancerregistrationstatisticsengland/2017
[4] A. C. Society, May 12, 2016. [Online]. Available: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/breast-cancer-facts-and-figures/breast-cancer-facts-and-figures-2015-2016.pdf
[5] M. Kallenberg et al., “Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring,” IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1322–1331, May 2016.
[6] N. Dhungel, G. Carneiro, and A. P. Bradley, “A deep learning approach for the analysis of masses in mammograms with minimal user intervention,” Med. Image Anal., vol. 37, pp. 114–128, Apr. 2017.
[7] L. Shen, “End-to-end training for whole image breast cancer diagnosis using an all convolutional design,” 2017, arXiv:1711.05775.
[8] R. Agarwal, O. Diaz, X. Lladó, M. H. Yap, and R. Martí, “Automatic mass detection in mammograms using deep convolutional neural networks,” J. Med. Imag., vol. 6, 2019, Art. no. 031409.
[9] M. Heath, K. Bowyer, D. Kopans, and R. Moore, “The digital database for screening mammography,” in Proc. 5th Int. Workshop Digit. Mammography, 2000, pp. 212–218.
[10] W. Lotter, G. Sorensen, and D. Cox, “A multi-scale CNN and curriculum learning strategy for mammogram classification,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Berlin, Germany: Springer, 2017, pp. 169–177.
[11] R. Fedkiw and S. Osher, Level Set Methods and Dynamic Implicit Surfaces, vol. 153. Berlin, Germany: Springer, 2006.
[12] T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
[13] T. Kooi et al., “Large scale deep learning for computer aided detection of mammographic lesions,” Med. Image Anal., vol. 35, pp. 303–312, Jan. 2017.
[14] N. Tajbakhsh et al., “Convolutional neural networks for medical image analysis: Full training or fine tuning?,” IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1299–1312, May 2016.
[15] R. K. Samala, H. P. Chan, L. Hadjiiski, M. A. Helvie, J. Wei, and K. Cha, “Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography,” Med. Phys., vol. 43, Dec. 2016, Art. no. 6654.
[16] B. Q. Huynh, H. Li, and M. L. Giger, “Digital mammographic tumor classification using transfer learning from deep convolutional neural networks,” J. Med. Imag., vol. 3, 2016, Art. no. 034501.
[17] A. G. Gale, “The mammographic image analysis society digital mammogram database,” in Proc. 2nd Int. Workshop Digit. Mammography, 1994, vol. 1069, pp. 375–378.
[18] I. C. Moreira, I. Amaral, I. Domingues, A. Cardoso, M. J. Cardoso, and J. S. Cardoso, “INbreast: Toward a full-field digital mammographic database,” Acad. Radiol., vol. 19, pp. 236–248, Feb. 2012.
[19] E. D. Pisano et al., “Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms,” J. Digit. Imag., vol. 11, pp. 193–200, Nov. 1998.
[20] N. M. Sasi and V. Jayasree, “Contrast limited adaptive histogram equalization for qualitative enhancement of myocardial perfusion images,” Engineering, vol. 5, 2013, Art. no. 326.
[21] Y. Jiang et al., “Collaborative fuzzy clustering from multiple weighted views,” IEEE Trans. Cybern., vol. 45, pp. 688–701, 2014.
[22] Y. Jiang, J. Zheng, X. Gu, J. Xue, and P. Qian, “A novel synthetic CT generation method using multitask maximum entropy clustering,” IEEE Access, vol. 7, pp. 119644–119653, 2019.
[23] P. Qian et al., “mDixon-based synthetic CT generation for PET attenuation correction on abdomen and pelvis jointly using transfer fuzzy clustering and active learning-based classification,” IEEE Trans. Med. Imag., vol. 39, no. 4, pp. 819–832, Apr. 2020.
[24] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei, ILSVRC-2012, 2012. [Online]. Available: http://www.image-net.org/challenges/LSVRC/2012/
[25] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2012, pp. 1097–1105.
[26] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, arXiv:1409.1556.
[27] C. Szegedy et al., “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015, pp. 1–9.
[28] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[29] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 4700–4708.
[30] P. Qian et al., “SSC-EKE: Semi-supervised classification with extensive knowledge exploitation,” Inf. Sci., vol. 422, pp. 51–76, 2018.
[31] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” J. Mach. Learn. Res., vol. 12, pp. 2493–2537, 2011.
[32] A. Bordes, S. Chopra, and J. Weston, “Question answering with subgraph embeddings,” in Proc. Conf. Empir. Methods Natural Lang. Process., 2014, pp. 615–620.
[33] G. Hinton et al., “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, Nov. 2012.
[34] K. Xia, H. Yin, P. Qian, Y. Jiang, and S. Wang, “Liver semantic segmentation algorithm based on improved deep adversarial networks in combination of weighted loss function on abdominal CT images,” IEEE Access, vol. 7, pp. 96349–96358, 2019.
[35] K.-J. Xia, H.-S. Yin, and Y.-D. Zhang, “Deep semantic segmentation of kidney and space-occupying lesion area based on SCNN and ResNet models combined with SIFT-flow algorithm,” J. Med. Syst., vol. 43, 2019, Art. no. 2.
[36] P. Qian et al., “Cross-domain, soft-partition clustering with diversity measure and knowledge reference,” Pattern Recognit., vol. 50, pp. 155–177, 2016.
[37] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 580–587.
[38] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2015, pp. 91–99.
[39] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 2818–2826.
[40] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1251–1258.
[41] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proc. 31st AAAI Conf. Artif. Intell., 2017, pp. 4278–4284.
[42] V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proc. 27th Int. Conf. Mach. Learn., 2010, pp. 807–814.
[43] R. Porwal and S. Gupta, “Appropriate contrast enhancement measures for brain and breast cancer images,” Int. J. Biomed. Imag., vol. 2016, 2016, Art. no. 4710842.
[44] X. Zhang, J. Yang, and E. Nguyen, “Breast cancer detection via Hu moment invariant and feedforward neural network,” in Proc. AIP Conf., 2018, Art. no. 030014.
[45] P. Görgel, A. Sertbas, and O. Uçan, “Computer-aided classification of breast masses in mammogram images based on spherical wavelet transform and support vector machines,” Expert Syst., vol. 32, pp. 155–164, 2015.
[46] G. Liu, “Computer-aided diagnosis of abnormal breasts in mammogram images by weighted-type fractional Fourier transform,” Adv. Mech. Eng., vol. 8, Feb. 2016, Art. no. 11.
[47] X. Wu, “Smart detection on abnormal breasts in digital mammography based on contrast-limited adaptive histogram equalization and chaotic adaptive real-coded biogeography-based optimization,” Simulation, vol. 92, pp. 873–885, Sep. 2016.

Xiang Yu received the bachelor's and master's degrees from Huanggang Normal University and Xiamen University, P.R. China, in 2014 and 2018, respectively. Currently, he is working toward the PhD degree in the Department of Informatics, University of Leicester, U.K. He is sponsored by the China Scholarship Council and by the University of Leicester as a graduate teaching assistant (GTA). His research interests include medical image segmentation, machine learning, and deep learning.

Cheng Kang received the master's degree from Shenzhen University and is currently working toward the PhD degree in the Department of Informatics, University of Leicester, U.K. His research interests focus on EEG signal processing and deep learning. He is currently working on the early detection of breast cancer by artificial intelligence.

David Guttery is currently a co-investigator on an integrated, collaborative programme of clinical and translational research between the University of Leicester and Imperial College funded by Cancer Research UK. His research interests are intertwined with those of Professor Jacqui Shaw, focusing on the utility of circulating nucleic acids and other circulating biomarkers for early detection and monitoring of cancer.

Seifedine Kadry (Senior Member, IEEE) received the bachelor's degree in applied mathematics from Lebanese University in 1999, the MS degree in computation from Reims University, France, and EPFL (Lausanne) in 2002, the PhD degree in applied statistics from Blaise Pascal University, France, in 2007, and the HDR degree from Rouen University in 2017. At present, his research focuses on education using technology, system prognostics, stochastic systems, and probability and reliability analysis. He is an ABET program evaluator.

Yang Chen (Senior Member, IEEE) received the MS and PhD degrees in biomedical engineering from First Military Medical University, China, in 2004 and 2007, respectively. Since 2008, he has been a faculty member with the Department of Computer Science and Engineering, Southeast University, China. His recent work concentrates on medical image reconstruction, image analysis, pattern recognition, and computerized-aid diagnosis.

Yu-Dong Zhang (Senior Member, IEEE) received the PhD degree from Southeast University in 2010. He worked as a postdoctoral researcher from 2010 to 2012 at Columbia University, USA, and as an assistant research scientist from 2012 to 2013 at the Research Foundation of Mental Hygiene (RFMH), USA. He served as a full professor from 2013 to 2017 at Nanjing Normal University, where he was the director and founder of the Advanced Medical Image Processing Group. He now serves as a professor with the Department of Informatics, University of Leicester, U.K.