
Multi-Model Deep Neural Network based Features Extraction and Optimal Selection Approach for Skin Lesion Classification

Muhammad Attique Khan, Department of CS, HITEC University, Museum Road, Taxila, Pakistan, [email protected]
Muhammad Younus Javed, Department of CS, HITEC University, Museum Road, Taxila, Pakistan, [email protected]
Muhammad Sharif, Department of CS, COMSATS University Islamabad, Wah Cantt, Pakistan, [email protected]
Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia, [email protected]
Amjad Rehman, College of Business Administration, Al Yamamah University, Riyadh, Saudi Arabia, [email protected]

Abstract—Melanoma is one of the deadliest forms of skin cancer and is responsible for thousands of deaths. The manual process of melanoma diagnosis is a time-consuming and difficult task; therefore, researchers have introduced several noninvasive computerized methods for recognition. Computational methods improve the accuracy of the diagnostic process, which is helpful for dermatologists. In this paper, we propose an automated system for skin lesion classification through transfer-learning-based deep convolutional neural network (DCNN) feature extraction and kurtosis-controlled principal component (KcPCA) based optimal feature selection. Pre-trained ResNet deep neural networks, namely ResNet-50 and ResNet-101, are utilized for feature extraction. Their information is then fused and the best features are selected, which are later fed to a supervised learning method, an SVM with radial basis function (RBF) kernel, for classification. Three datasets, HAM10000, ISBI 2017, and ISBI 2016, are utilized for the experimental results, achieving accuracies of 89.8%, 95.60%, and 90.20%, respectively. The overall results show that the performance of the proposed system is reliable compared to existing techniques.

Keywords—Skin cancer, preprocessing, deep features, optimal features

I. INTRODUCTION

Among the various types of cancer, skin cancer is one of the most aggressive forms. In the last decade, the yearly number of skin cancer (i.e., melanoma) cases has risen by up to 53%. In the USA, one in 52 women and one in 32 men are diagnosed with melanoma, and an estimated 10,000 people have died due to it. Recent research revealed that 98% of cases survive when melanoma is diagnosed at an early stage, whereas only 17% survive when melanoma is left undiagnosed at its initial stage [1], [2]. In 2018, an estimated 178,560 new cases of melanoma were recorded in the US, comprising noninvasive (87,290) and invasive (91,270) cases. In addition, 9,320 deaths occurred due to melanoma, including 3,330 women and 5,990 men [3], [4].

In skin cancer, the most common skin lesions are actinic keratoses (akiec), basal cell carcinoma (bcc), benign keratosis (bkl), dermatofibroma (df), nevi (nv), melanoma (mel), and vascular skin lesions (vasc). Doctors have produced several methods for the diagnosis of skin lesions, including the ABCD rule, the 7-point checklist, and CASH [5]. These methods have shown limited accuracy, which is a strong motivation for researchers to develop new computerized systems for the automatic diagnosis of skin lesions into their relevant categories. Deep learning architectures have recently been used in this area and give significant performance for lesion classification [6], [7].

Mahbod et al. [8] introduced a fusion framework of intra- and inter-architecture networks. Distinct convolutional neural network (CNN) architectures extract different numbers of features, and each architecture includes several pretrained networks. The features from each network are used to train distinct SVM classifiers, and finally the average prediction value is selected for the final classification. Kawahara et al. [9] presented a deep CNN feature based approach for skin lesion classification. They extract features through a pre-trained CNN model without any preprocessing or segmentation step. A linear classifier trained on the deep features shows improved performance. Harangi et al. [10] described an ensemble of CNN models that improves the individual accuracies of lesion classification into relevant categories such as melanoma, benign, and seborrheic keratosis. They fused the output layers of four distinct CNN models. The extracted deep features are finally classified based on a sum of maximal probabilities, achieving significant performance.

Through deep learning, features are extracted automatically from specific layers, with no need for a preprocessing step. However, when we deal with complex images such as skin cancer images, which contain noise in background regions, artifacts, and similarity between lesion and healthy regions, preprocessing becomes an essential step. Without a proper preprocessing step, prominent features are not produced. Therefore, it is necessary to perform preprocessing before prominent features are extracted, which in turn yields good classification accuracy and lower testing/training time [7]. In existing methods, preprocessing is often not performed, which causes a major problem in the feature extraction step. The existence of noise and irrelevant pixels produces unhelpful features which later degrade the overall system accuracy. Moreover, the selection of the best features is another major challenge for system accuracy; therefore, it is essential to select the best subset of features for classification.

In this article, a sigmoid function based CNN feature extraction and kurtosis controlled maximal feature (KcMF) selection approach, named KcPCA, is proposed. In the first step, preprocessing is performed to improve the contrast of the lesion images, and CNN features are extracted through a sigmoid function. The CNN features are extracted from the fully connected (FC) layers of two pre-trained models, ResNet-50 and ResNet-101. In the second step, a kurtosis-controlled PCA (KcPCA) approach is applied to both models, and the top 70% of features are selected from each of them based on the higher principal component analysis (PCA) scores. The selected 70% of features from both models are fused by a parallel approach named maximal probability (MP). Finally, the fused features are fed to an SVM to perform classification.

II. PROPOSED METHOD

The proposed classification system comprises two primary steps: deep neural network feature extraction, and optimal feature selection and fusion. A detailed description of each step, including visual results, is presented in this section, and the visual flow is given in Figure 1.

Fig. 1. Proposed architecture of automated skin lesion recognition.

A. Image preprocessing

Preprocessing is an essential step in any image processing approach and is used for contrast enhancement of the images of a given database. The major purpose of the preprocessing step in this work is to highlight the lesion region against the background/healthy region. A hybrid preprocessing technique, based on the fusion of a complement operation and dehaze reduction, is implemented to highlight the lesion region. Later, a LAB transformation is applied, which highlights the lesion area as shown in Figure 2: (a) shows the original image, (b) shows the fused complement and dehaze image, and (c) represents the LAB-transformed image.

Fig. 2. Sample preprocessing results: (a) original image, (b) fusion of the complement and dehaze operations, (c) LAB transformation.

The mathematical formulation of the fusion process and the LAB conversion is given below. Let φ(u,v) denote an input RGB dermoscopic image of dimension 512 × 512, where φ(u,v) ∈ R. The total area of the original image and its complement image equals 1, formulated as follows:

P(φ(u,v)) + P(φ(u,v))′ = 1    (1)

P(φ(u,v))′ = 1 − P(φ(u,v))    (2)

where P(φ(u,v)) denotes the probability of occurrence of the pixel values. Later, an existing haze reduction algorithm is employed to increase the visibility of abnormal regions in the image. In this work, we utilize the dark channel prior approach to increase the visibility of the lesion region. The concept of a dark channel is formulated as follows:

φ_dark(u,v) = min_{y ∈ Δ(x)} ( min_{c ∈ {R,G,B}} φ^c(y) )    (3)
where y ∈ Δ(x) is a minimum local patch of the image and φ^c is a color channel, with R, G, and B denoting the red, green, and blue channels, respectively. The haze problem is removed through the dark channel prior (DCP) approach [15]. The DCP approach consists of five major steps: transmission estimation, soft matting, atmospheric light estimation, scene radiance recovery, and patch-size selection. Finally, the output of the DCP is fused with the complement operation through simple multiplication; the effect is shown in Figure 2(b). A LAB transformation is then performed on the fused image, which clearly highlights the lesion region, as shown in Figure 2(c).
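As a rough illustration of this preprocessing chain, the following Python/OpenCV sketch computes the complement (Eqs. 1-2), a simplified dark-channel dehazing (Eq. 3, with soft matting omitted), the multiplicative fusion, and the LAB conversion. The patch size, the atmospheric-light estimate, and the file name lesion.jpg are illustrative assumptions, not the authors' MATLAB implementation.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Eq. (3): per-pixel minimum over R, G, B, then a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def preprocess(path):
    img = cv2.imread(path).astype(np.float32) / 255.0   # BGR image in [0, 1]
    complement = 1.0 - img                               # Eqs. (1)-(2)
    # Simplified dark channel prior dehazing (no soft matting):
    dark = dark_channel(img)
    bright = np.argsort(dark.ravel())[-10:]              # haziest pixels
    A = img.reshape(-1, 3)[bright].max(axis=0)           # atmospheric light
    t = 1.0 - 0.95 * dark_channel(img / A)               # transmission map
    dehazed = (img - A) / np.maximum(t, 0.1)[..., None] + A
    fused = np.clip(complement * dehazed, 0.0, 1.0)      # multiplicative fusion
    return cv2.cvtColor((fused * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)

lab_image = preprocess("lesion.jpg")                     # hypothetical input file
```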
B. Deep features extraction

Lately, deep learning has shown significant improvement in the computer vision community using huge imaging datasets. Through deep learning, a significant number of features are extracted through different layers [20], [21]. Several deep learning models have recently been proposed, such as VGG [16], AlexNet [17], COCO [18], YOLO [19], ResNet [22], and a few more [23]. In this work, we utilize two pretrained models of the ResNet series, ResNet-50 and ResNet-101, for deep feature extraction. ResNet-50 includes a total of 175 layers (convolutional, ReLU, pooling, batch normalization, sum, and fully connected layers). From these, we apply a sigmoid activation function to the probe layer (pool5), which returns an output of size N × 2048. We also utilize the ResNet-101 pre-trained model, which contains a total of 347 layers (convolutional, ReLU, pooling, batch normalization, addition, FC, and a classification layer). From this model, we utilize the FC layer (fc1000), which returns an output of size N × 1000. The sigmoid activation function and its application to the two layer outputs are defined as:

σ(x) = 1 / (1 + e^(−x))    (4)

f1(i) = σ(φ_pool5(i)),   f2(j) = σ(φ_fc1000(j))    (5)
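For illustration only, the two activated feature vectors can be reproduced with torchvision's pretrained ResNets as sketched below. The paper's experiments used MATLAB's pretrained models; the input pipeline here (a 224 x 224 resize with ImageNet normalization omitted) and the file name are assumptions.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained backbones (the paper used MATLAB's resnet50/resnet101 models).
resnet50 = models.resnet50(pretrained=True).eval()
resnet101 = models.resnet101(pretrained=True).eval()
# Strip ResNet-50's classifier to expose the 2048-D global pooling (pool5) output.
pool5 = torch.nn.Sequential(*list(resnet50.children())[:-1])

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def extract(image_path):
    x = tfm(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f1 = torch.sigmoid(pool5(x).flatten(1))   # Eq. (5): N x 2048 features
        f2 = torch.sigmoid(resnet101(x))          # Eq. (5): N x 1000 features
    return f1.numpy(), f2.numpy()

f1, f2 = extract("lesion.jpg")                    # hypothetical input file
```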

C. Optimal Features Selection & Fusion

After the deep feature extraction step, optimal features are selected from both models. The purpose of optimal feature selection is to remove redundancy between features, reduce the false prediction rate, and increase the classification accuracy. Lately, various techniques have been introduced in the machine learning domain for feature selection, such as entropy, genetic algorithms, and a few more [24], [25], [26], [27]. In this work, we propose a new method for optimal feature selection based on the kurtosis-controlled PCA (KcPCA) formulation. Let f1(i) denote the feature vector of the ResNet-50 pretrained model, of dimension N × 2048, and f2(j) denote the feature vector of the ResNet-101 pretrained model, of dimension N × 1000. The kurtosis vectors Kf1(i) and Kf2(j) are then computed from the two vectors f1(i) and f2(j) as follows:

K_f = ((1/N) Σ_{i=1}^{N} (f(i) − μ)^4) / ((1/N) Σ_{i=1}^{N} (f(i) − μ)^2)^2    (6)

where Kf1(i) denotes the kurtosis vector of the original feature vector f1(i), Kf2(j) denotes the kurtosis vector of the original vector f2(j), and μ denotes the mean of the respective feature vector. After obtaining the kurtosis vectors, PCA is employed on both vectors Kf1(i) and Kf2(j). Through PCA, the score value of every feature of both vectors is found. The score values of both vectors are sorted in ascending order, and the last 70% of the features are selected from each of them. The selected features are finally fused through a serial concatenation method to obtain a fused vector f3(k) of dimension N × 2133. The fused vector is finally fed to a multi-class support vector machine with an RBF kernel. The labeled classification results are shown in Figure 3.

Fig. 3. Proposed labeled classification results.
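A minimal sketch of one possible reading of the KcPCA selection, fusion, and classification stage is given below. The exact way kurtosis "controls" the PCA scoring is not fully specified in the text, so the kurtosis weighting and the loading-based score used here are assumptions. Note that keeping 70% of 2048 and of 1000 features yields 1433 + 700 = 2133 dimensions, matching the fused size N × 2133.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def kcpca_select(F, keep=0.70):
    """Kurtosis-controlled PCA selection (one reading of KcPCA)."""
    k = kurtosis(F, axis=0, fisher=False)       # Eq. (6): kurtosis per feature
    pca = PCA().fit(F * k)                      # PCA on kurtosis-weighted features
    # Score each feature by its variance-weighted squared loadings (assumption).
    score = (pca.explained_variance_ratio_[:, None] * pca.components_ ** 2).sum(0)
    order = np.argsort(score)                   # ascending, as in the text
    n_keep = int(keep * F.shape[1])
    return F[:, order[-n_keep:]]                # keep the last (highest) 70%

# f1 (N x 2048) and f2 (N x 1000) from the previous sketch; y holds the labels.
f3 = np.hstack([kcpca_select(f1), kcpca_select(f2)])   # serial fusion: N x 2133
clf = SVC(kernel="rbf", gamma="scale").fit(f3, y)      # multi-class RBF SVM
```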
III. RESULTS AND DISCUSSION

Three publicly available dermoscopic datasets, ISBI 2016, ISBI 2017, and HAM10000, are utilized for the experimental results and analysis of the proposed approach. The ISBI 2016 dataset consists of a total of 1279 dermoscopic RGB images (273 malignant, 1006 benign). The ISBI 2017 dataset contains a total of 2750 dermoscopic images (517 malignant and 2233 benign). The HAM10000 dataset involves a total of 10,000 dermoscopic images of seven different classes: actinic keratoses (akiec), basal cell carcinoma (bcc), benign keratosis (bkl), dermatofibroma (df), nevi (nv), melanoma (mel), and vascular skin lesions (vasc). In this work, we utilize the RBF kernel function of the SVM and compare its performance with six other kernels, namely linear, cubic, quadratic, fine Gaussian (FGSVM), medium Gaussian (MGSVM), and coarse Gaussian (CGSVM), on all three datasets. Six well-known performance metrics are implemented for the analysis of the proposed results: sensitivity rate (Sen), specificity rate (Spec), average precision (Prec), AUC, FP rate (FPR), and accuracy (Acc). All simulations are performed in MATLAB 2018a on a personal computer with a Core i7 processor, 16 GB of RAM, and an 8 GB graphics card.
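For reference, the six reported metrics can be derived from a binary confusion matrix as in the following sketch (AUC additionally requires continuous decision scores; the function and variable names are illustrative).

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def report(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Sen":  tp / (tp + fn),                   # sensitivity (recall)
        "Spec": tn / (tn + fp),                   # specificity
        "Prec": tp / (tp + fp),                   # precision
        "AUC":  roc_auc_score(y_true, y_score),   # needs decision scores
        "FPR":  fp / (fp + tn),                   # false-positive rate
        "Acc":  (tp + tn) / (tp + tn + fp + fn),  # accuracy
    }
```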

TABLE I: PROPOSED CLASSIFICATION ACCURACY ON THE ISBI 2016 DATASET

Classifier  Kernel     Sen (%)  Spec (%)  AUC   Prec (%)  FPR    Acc (%)
SVM         Linear     80.5     73.0      0.82  81.0      0.185  80.5
            Quadratic  85.5     81.0      0.93  85.5      0.145  85.3
            Cubic      86.5     83.0      0.95  87.0      0.135  86.8
            FGSVM      84.5     80.0      0.94  84.5      0.155  84.3
            MGSVM      81.0     72.0      0.81  82.5      0.190  81.4
            CGSVM      87.0     83.0      0.96  87.5      0.130  87.2
            RBF        90.5     99.2      0.89  92.1      0.090  90.2

Fig. 4. Confusion matrices for various SVM kernels using the ISBI 2016 dataset.

A. Results
The proposed classification results are presented in this section in both numerical and graphical form. The training/testing ratio is set to 70:30 for the HAM10000 dataset, whereas for the ISBI 2016 and ISBI 2017 datasets the training and testing samples are provided separately. The testing classification results are then validated through K-fold cross-validation with K = 10. This step is performed for all three datasets.
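A minimal scikit-learn sketch of this 10-fold validation step is shown below, assuming the fused features f3 and labels y from the earlier sketches; the shuffling and random seed are assumptions.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# 10-fold cross-validation of the RBF SVM on the fused feature vector f3.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), f3, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```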
Table I describes the proposed classification results for the ISBI 2016 dataset. The best classification accuracy, sensitivity rate, and specificity achieved with the various SVM kernels, namely linear, quadratic, cubic, FGSVM, MGSVM, CGSVM, and RBF, are (80.5%, 80.5%, 73%), (85.3%, 85.5%, 81%), (86.8%, 86.5%, 83.0%), (84.3%, 84.5%, 80%), (81.4%, 81%, 72%), (87.2%, 87%, 83%), and (90.2%, 90.5%, 99.2%), respectively. The SVM performs best with the RBF kernel function, whereas the worst accuracy, 80.5%, is achieved with the linear kernel. The classification accuracy of all SVM kernels is also confirmed by Figure 4, which shows the confusion matrices used to compute the accuracy of each class.

The classification results for the ISBI 2017 dataset are described in Table II. The RBF SVM kernel outperforms the other kernel methods and achieves the best accuracy of 95.6%, with a sensitivity rate of 95.5%, a specificity of 95.0%, an AUC of 0.98, a precision rate of 95.5%, and an FP rate of 0.04. The worst accuracy, 89.5%, is obtained with the linear SVM kernel. The accuracies of the other kernel methods are 92.8% (quadratic), 95.3% (cubic), 94% (FGSVM), 94.8% (MGSVM), and 90.1% (CGSVM). The accuracy of all kernels is also confirmed by the confusion matrices presented in Figure 5.
Fig. 5. Confusion matrices for various SVM kernels using the ISBI 2017 dataset.

Finally, classification is performed on the new and challenging HAM10000 dataset, which consists of the seven categories discussed in Section III. Due to the large variation among the categories, this dataset is more challenging. The best accuracy achieved on this dataset is 89.8%, obtained with the RBF kernel function; the sensitivity, AUC, precision, and FP rate are 89.71%, 0.978, 90.14%, and 0.017, respectively, as presented in Table III. On the other side, the worst accuracy, 85.2%, is obtained with the quadratic and FGSVM kernels.

TABLE II: PROPOSED CLASSIFICATION ACCURACY ON THE ISBI 2017 DATASET

Classifier  Kernel     Sen (%)  Spec (%)  AUC   Prec (%)  FPR    Acc (%)
SVM         Linear     89.5     85.0      0.90  89.5      0.105  89.5
            Quadratic  93.0     92.0      0.98  93.0      0.070  92.8
            Cubic      95.5     95.0      0.98  95.5      0.045  95.3
            FGSVM      94.0     96.0      0.93  94.5      0.060  94.0
            MGSVM      95.0     94.0      0.98  95.0      0.050  94.8
            CGSVM      90.0     88.0      0.96  90.5      0.10   90.1
            RBF        95.5     95.0      0.98  95.5      0.04   95.6

TABLE III: PROPOSED CLASSIFICATION ACCURACY ON THE HAM10000 DATASET

Classifier  Kernel     Sen (%)  AUC    Prec (%)  FPR    Acc (%)
SVM         Linear     89.14    0.972  89.57     0.018  89.2
            Quadratic  85.14    0.964  86.57     0.024  85.2
            Cubic      85.42    0.967  87.00     0.022  85.4
            FGSVM      85.14    0.964  86.57     0.024  85.2
            RBF        89.71    0.978  90.14     0.017  89.8

B. Discussion

In this section, a detailed discussion is conducted in terms of numerical accuracy and visual results. The proposed classification system consists of two primary steps, as shown in Figure 1. Preprocessing is performed before the CNN-based feature extraction because low-quality and low-resolution lesion images affect the extraction of prominent features. Sample preprocessing results are shown in Figure 2. After that, CNN features are extracted, the best features are selected through the KcPCA method, and they are fed to an SVM with the RBF kernel function. The classification accuracy of the RBF kernel is compared with several other well-known kernel functions, namely linear, cubic, quadratic, FGSVM, MGSVM, and CGSVM, on all three datasets. The results are presented in Tables I, II, and III. The classification performance of the RBF kernel is also verified through the confusion matrices given in Figures 4, 5, and 6.

Fig. 6. Confusion matrices for the RBF SVM kernel using the HAM10000 dataset.

Further, an extensive comparison with existing techniques is conducted in Table IV, which presents the proposed system's results against well-known state-of-the-art techniques. Heller et al. [12] presented a morphological-features-based skin lesion classification approach and achieved an accuracy of 88.20% on the HAM10000 dataset. Later, Nasiri et al. [11] presented a CNN network and achieved an accuracy of 88.57%, an improvement over [12]. Khan et al. [13] described a method for skin lesion segmentation and recognition through a normal distribution and entropy-controlled approach. They utilized the ISBI 2016 and ISBI 2017 datasets and achieved accuracies of 83.20% and 88.20%, respectively. Later, the accuracy on the ISBI 2016 dataset was improved by Cristina et al. [14] up to 83.60%. In contrast, our method achieves accuracies of 90.20%, 95.60%, and 89.80% on ISBI 2016, ISBI 2017, and HAM10000, which is significantly better than the existing approaches.
TABLE IV: COMPARISON WITH EXISTING TECHNIQUES FOR ALL DATASETS

Method     Year  Dataset    Accuracy (%)
[11]       2018  HAM10000   88.57
[12]       2018  HAM10000   88.20
[13]       2018  ISBI 2016  83.20
[13]       2018  ISBI 2017  88.20
[14]       2017  ISBI 2016  83.60
Proposed   2018  ISBI 2016  90.20
                 ISBI 2017  95.60
                 HAM10000   89.8
IV. CONCLUSION

In this paper, we proposed a new CAD system for the classification of skin lesions through deep learning. In the proposed system, features are extracted through ResNet-50 and ResNet-101, with enhanced dermoscopic images provided for deep feature extraction. After feature extraction, prominent features are selected through the newly implemented approach named KcPCA. Through KcPCA, the top 70% of features are selected, which are later fed to a multi-class SVM with the RBF kernel function. The experiments are performed on three datasets, ISBI 2016, ISBI 2017, and HAM10000, achieving accuracies of 90.20%, 95.60%, and 89.80%, respectively. From the experimental results, we conclude that the selection of prominent features from both models produces significant performance. Moreover, the images of the HAM10000 dataset are complex and often similar to each other, which degrades the classification accuracy; through the fusion of the two models, we achieved significant performance on this dataset as well, better than existing techniques. In the future, auto-encoder based features will be selected before classification is performed.

ACKNOWLEDGMENT

This work was supported by the AI and Data Analytics (AIDA) Lab, Prince Sultan University, Riyadh, Saudi Arabia.

REFERENCES

[1] Burdick, Jack, Oge Marques, Janet Weinthal, and Borko Furht. "Rethinking skin lesion segmentation in a convolutional classifier." Journal of Digital Imaging 31, no. 4 (2018): 435-440.
[2] Miller, Kimberly D., Rebecca L. Siegel, Chun Chieh Lin, Angela B. Mariotto, Joan L. Kramer, Julia H. Rowland, Kevin D. Stein, Rick Alteri, and Ahmedin Jemal. "Cancer treatment and survivorship statistics, 2016." CA: A Cancer Journal for Clinicians 66, no. 4 (2016): 271-289.
[3] Akram, T., M. A. Khan, M. Sharif, and M. Yasmin. "Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features." Journal of Ambient Intelligence and Humanized Computing (2018): 1-20.
[4] American Cancer Society. Cancer Facts & Figures 2018. https://www.cancer.org/content/dam/cancerorg/research/cancer-facts-and-statistics/annual-cancer-facts-andfigures/2018/cancer-facts-and-figures-2018.pdf. Accessed May 3, 2018.
[5] Nasir, Muhammad, Muhammad Attique Khan, Muhammad Sharif, Ikram Ullah Lali, Tanzila Saba, and Tassawar Iqbal. "An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach." Microscopy Research and Technique (2018).
[6] Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542, no. 7639 (2017): 115-118.
[7] Oliveira, Roberta B., E. Mercedes Filho, Zhen Ma, João P. Papa, Aledir S. Pereira, and João Manuel R. S. Tavares. "Computational methods for the image segmentation of pigmented skin lesions: a review." Computer Methods and Programs in Biomedicine 131 (2016): 127-141.
[8] Mahbod, Amirreza, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Alain Pitiot, and Chunliang Wang. "Fusing fine-tuned deep features for skin lesion classification." Computerized Medical Imaging and Graphics (2018).
[9] Kawahara, Jeremy, Aicha BenTaieb, and Ghassan Hamarneh. "Deep features to classify skin lesions." In Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on, pp. 1397-1400. IEEE, 2016.
[10] Harangi, Balázs. "Skin lesion classification with ensembles of deep convolutional neural networks." Journal of Biomedical Informatics 86 (2018): 25-32.
[11] Nasiri, Sara, Matthias Jung, Julien Helsper, and Madjid Fathi. "DeepCLASS at ISIC Machine Learning Challenge 2018." arXiv preprint arXiv:1807.08993 (2018).
[12] Heller, Nicholas, Erika Bussman, Aneri Shah, Joshua Dean, and Nikolaos Papanikolopoulos. "Computer Aided Diagnosis of Skin Lesions from Morphological Features."
[13] Khan, M. Attique, Tallha Akram, Muhammad Sharif, Aamir Shahzad, Khursheed Aurangzeb, Musaed Alhussein, Syed Irtaza Haider, and Abdualziz Altamrah. "An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification." BMC Cancer 18, no. 1 (2018): 638.
[14] Vasconcelos, Cristina Nader, and Bárbara Nader Vasconcelos. "Experiments using deep learning for dermoscopy image analysis." Pattern Recognition Letters (2017).
[15] He, Kaiming, Jian Sun, and Xiaoou Tang. "Single image haze removal using dark channel prior." IEEE Transactions on Pattern Analysis and Machine Intelligence 33, no. 12 (2011): 2341-2353.
[16] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
[17] Vedaldi, Andrea, and Karel Lenc. "MatConvNet: Convolutional neural networks for MATLAB." In Proceedings of the 23rd ACM International Conference on Multimedia, pp. 689-692. ACM, 2015.
[18] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. 2016.
[19] Ning, Guanghan, Zhi Zhang, Chen Huang, Xiaobo Ren, Haohong Wang, Canhui Cai, and Zhihai He. "Spatially supervised recurrent convolutional neural networks for visual object tracking." In Circuits and Systems (ISCAS), 2017 IEEE International Symposium on, pp. 1-4. IEEE, 2017.
[20] Sharif, Muhammad, Muhammad Attique Khan, Muhammad Rashid, Mussarat Yasmin, Farhat Afza, and Urcun John Tanik. "Deep CNN and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images." Journal of Experimental & Theoretical Artificial Intelligence.
[21] Rashid, Muhammad, Muhammad Attique Khan, Muhammad Sharif, Mudassar Raza, Muhammad Masood Sarfraz, and Farhat Afza. "Object detection and classification: a joint selection and fusion strategy of deep convolutional neural network and SIFT point features." Multimedia Tools and Applications (2018): 1-27.
[22] Targ, Sasha, Diogo Almeida, and Kevin Lyman. "Resnet in Resnet: generalizing residual architectures." arXiv preprint arXiv:1603.08029 (2016).
[23] Khan, Muhammad Attique, Tallha Akram, Muhammad Sharif, Muhammad Awais, Kashif Javed, Hashim Ali, and Tanzila Saba. "CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features." Computers and Electronics in Agriculture 155 (2018): 220-236.
[24] Sharif, Muhammad, Muhammad Attique Khan, Muhammad Faisal, Mussarat Yasmin, and Steven Lawrence Fernandes. "A framework for offline signature verification system: Best features selection approach." Pattern Recognition Letters (2018).
[25] Khan, Muhammad Attique, Tallha Akram, Muhammad Sharif, Muhammad Younus Javed, Nazeer Muhammad, and Mussarat Yasmin. "An implementation of optimized framework for action classification using multilayers neural network on selected fused features." Pattern Analysis and Applications (2018): 1-21.
[26] Sharif, Muhammad, Muhammad Attique Khan, Tallha Akram, Muhammad Younus Javed, Tanzila Saba, and Amjad Rehman. "A framework of human detection and action recognition based on uniform segmentation and combination of Euclidean distance and joint entropy-based features selection." EURASIP Journal on Image and Video Processing 2017, no. 1 (2017): 89.
[27] Khan, Muhammad Attique, Muhammad Sharif, Muhammad Younus Javed, Tallha Akram, Mussarat Yasmin, and Tanzila Saba. "License number plate recognition system using entropy-based features selection approach with SVM." IET Image Processing 12, no. 2 (2017): 200-209.
