
Skin Cancer Detection using Convolutional Neural

Network
Dipu Chandra Malo, Md. Mustafizur Rahman, Jahin Mahbub, and Mohammad Monirujjaman Khan
Department of Electrical and Computer Engineering, North South University, Bashundhara R/A, Dhaka-1229, Bangladesh

2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC) | 978-1-6654-8303-2/22/$31.00 ©2022 IEEE | DOI: 10.1109/CCWC54503.2022.9720751

Abstract—The advancement of artificial intelligence is reshaping various sectors of our lives, including disease identification. Today, dermatologists depend greatly on the digitalized output of patients' results to be absolutely certain about skin cancer. In recent times, much research based on machine learning has paved the way to classify the stages of skin cancer in clinicopathological practice. In this paper, we evaluate the ability of a deep learning algorithm, namely the Convolutional Neural Network (CNN), to detect skin cancer by classifying benign and malignant moles. We discuss recent studies that use different deep learning models on practical datasets to develop the classification process. The dataset we use for this research is ISIC, containing a total of 2460 colored images. We use 1800 images as the training set and the remaining 660 as the testing set. A detailed workflow to build and run the system is also presented. We have used Keras and TensorFlow to structure our model. Our proposed VGG-16 model shows promising performance after some modification of the parameters and classification functions. The model achieves an accuracy of 87.6%. As a result, the study shows a significant outcome of using a CNN model in detecting skin cancer.

Keywords: Skin cancer, CNN, deep learning, benign, malignant, GoogLeNet, TensorFlow, AlexNet.

I. INTRODUCTION

Cancer refers to a group of related diseases in which some of the body's cells begin to divide without stopping and spread into surrounding tissues [1]. Skin cancer is the most common cancer and can be highly malignant [2]. It is most frequently caused by ultraviolet radiation, which damages the DNA in skin cells. The damaged DNA then triggers mutations that make the skin cells multiply and form tumors. Skin cancer may also arise from genetic defects [3]. Numerous classifications of skin cancer exist; examples include melanoma and the basal and squamous cell carcinomas, with melanoma being the most dangerous [4]. Basal and squamous cell carcinomas rarely spread beyond the primary tumor site [5]. Melanoma represents only four percent of all skin cancers, but it is responsible for seventy-five percent of all skin cancer deaths [4]. Melanoma is more aggressive than the other skin cancer types and spreads to nearby tissues [5].

Today, skin cancer diagnoses are mainly made visually. First, an initial clinical screening is performed, which is usually followed by dermoscopic analysis. A biopsy and a histopathological examination are then performed [6]. In skin lesion classification, features play a vital role [39]. Many features can be taken into consideration in the context of skin: color features, contour features, dermal features, geometric features, histogram features, texture features, and so on. The ABCD-rule features, for example, analyze asymmetry, border irregularity, color variation, and diameter [39]. Today, computer-aided decision systems have become essential for evaluating and diagnosing medical images [1]. For example, Computer-Aided Diagnosis (CAD) is part of routine practice when detecting breast cancer on mammograms in the US [8]. CAD is also one of the key research subjects in medical imaging and diagnostic radiology [8]. Early detection of illness can be achieved with the correct CAD system, yielding earlier treatment, which may save lives [9]. For example, the ability to effectively treat and cure cancer is inextricably linked to the ability to detect cancers in their early stages [10]. Cancer is a collection of related diseases whose diagnosis and treatment are of great interest due to their widespread prevalence [27].

In 2012, there were 14 million new cases of cancer and 2 million cancer-related deaths worldwide, which makes cancer one of the most common causes of death for humans [28]. Skin cancer is the most common kind of cancer and typically forms on skin exposed to sunlight, but it can occur on any part of the body [29]. Skin cancer begins in the epidermis (the outer layer of the skin). It is thus clearly visible, which suggests that a CAD system can use only images of the skin lesion, with no other information, to present a preliminary diagnosis.

Recently, in a study performed at Stanford [6], the researchers developed a deep convolutional neural network (CNN) that proved more effective than dermatologists at classifying keratinocyte carcinomas versus benign seborrheic keratoses and malignant melanomas versus benign nevi. However, it is still unknown how the CNN performs in the classification of other skin diseases. Only open-access datasets were used for training, validating, and testing the CNN. To set up an environment identical to [6], equivalent images are used when possible. However, not all the datasets used by [6] were available, and a few new datasets were used to extend the number of images for the diseases investigated in this report.
The performance of the CNN is tested on diseases that are acknowledged [29, 30] to be hard to differentiate from skin cancer. The main focus of this work is to test the accuracy of the CNN's classification of different skin lesions. There are two main objectives: producing the skin lesion dataset and using transfer learning on Google's Inception v3. The skin lesion dataset is preprocessed following the same methodology used by [6]. Because of the relatively small amount of skin lesion imagery available, transfer learning is used. Inception v3 is the network used for training because it produced the best results in [6] [11].

One group of authors used the U-Net architecture for the segmentation step. They used Edge Histograms (EH), Local Binary Patterns (LBP), the Gabor method, and Histograms of Oriented Gradients (HOG) to extract features from the segmented images. The extracted features were fed into a Support Vector Machine (SVM), and additionally into K-Nearest Neighbor (KNN), Naïve Bayes (NB), and Random Forest (RF) classifiers, to diagnose whether a lesion is benign or malignant melanoma. The experiment was carried out with 900 dermoscopic images from the International Skin Imaging Collaboration (ISIC). 10% of the 900 segmented images were used for testing, and the remaining 90% were used as training data for classification. These features yielded an accuracy of 85% with the SVM classifier. For the extracted features, the SVM achieved a recall of 50%, an accuracy of 85.19%, and an F1-score of 46%, while the Naïve Bayes classifier achieved a precision of 45.62% [12].

Ten types of skin lesions were classified using linear classifiers. The last layer of AlexNet was replaced with a convolutional layer, and feature extraction was also performed with AlexNet. The authors used the public Dermofit Image Library, which has 1300 clinical images covering 10 types of skin lesions, and the slightly modified AlexNet was tested on those images. The accuracy achieved over the whole dataset with 10 different skin lesions was 81.8% [13].

Using the vector-based SURF approach for the recognition of lesion patterns, the extracted features were classified by a multi-SVM classifier to determine the type of lesion. This system provided 86.37% accuracy, 86.53% sensitivity, and 96.42% specificity. Six hundred and eleven images were used, covering four skin lesion categories [14].

Another work proposed a computerized technique for automatic skin lesion classification. It used three pretrained models, ResNet-18, AlexNet, and VGG16, as feature generators. Support Vector Machines were then trained on these extracted features, and the classifier outputs were combined in the last stage to obtain a classification. One hundred fifty images from ISIC 2017 were used, yielding 83% for skin cancer and 55% for keratosis classification [15].

The next study used the HAM10000 dataset [14], with 10015 images for training and validation; 80% of the images were used for training and 20% for validation. 6705 of the images were of melanocytic nevi, while the smallest class, dermatofibroma, had 115 images. A few of the remaining images are of skin cancer, and the rest are other varieties. The authors trained CNN models such as DenseNet201, ResNet152, and Inception v4, pretrained on the ImageNet dataset. For melanocytic nevi, they achieved confusion-matrix scores of 0.96, 0.96, and 0.96 with DenseNet201, ResNet152, and Inception v4, respectively. For dermatofibroma, the scores were 0.86, 0.94, and 0.82, and for skin cancer 0.73, 0.76, and 0.65, respectively. They further improved the score for melanocytic nevi to 0.98 with DenseNet201 by cropping images for training and validation [16].

Another study used CNNs trained on datasets like ImageNet together with a network trained on Dermnet, a skin disease atlas. A standard CNN known as AlexNet was used for classification, achieving an accuracy of 89.3%, with a sensitivity of 77.1% and a specificity of 93.0% [17].

A further study used a CNN trained on 129,450 images; 3374 of the total images were taken from dermatoscopic devices, representing 2032 different diseases. The authors used the GoogLeNet Inception v3 model for classification. The CNN was then evaluated on test data and achieved an AUC of 0.96 for carcinomas, 0.96 for melanomas, and a ROC AUC of 0.94 for melanomas classified with dermatoscopic images [18].

Another group proposed a new prediction model using a novel regularizer technique that classifies a given lesion as benign or malignant, i.e., a binary classifier. The dataset was taken from ISIC, with 5600 images used for training the CNN and 2400 images for validation. The proposed model achieved an accuracy of 97.49% in distinguishing benign from malignant. The AUC-ROC performance of the CNN was calculated for various cases with the embedded novel regularizer: the AUC for keratosis versus basal cell carcinoma lesions is 0.93, and nevi have an AUC of 0.77 against malignant melanoma lesions. The AUCs achieved for solar lentigo versus melanoma and keratosis versus melanoma were 0.86 and 0.85, respectively [19].

One analysis applied transfer learning to the AlexNet model in several ways: one approach was to replace the classification layer with a SoftMax layer; another was to normalize the weights of the architecture; and the last was to augment the dataset with fixed and random rotation angles. The SoftMax layer classifies segmented color lesion images into three types: nevus, keratosis, and malignant melanoma. ISIC contributed 2000 images (374 melanoma, 254 keratosis, and 1372 nevus images), and DermIS & DermQuest contributed 206 skin lesion images divided into 87 nevus and 119 melanoma images. From MED-NODE, 170 total images, of which 70 were melanoma and 100 were nevus images, were used for testing and validating the proposed technique. The accuracy achieved for ISIC is 95.91%, for DermIS & DermQuest 97.70%, and for MED-NODE 96.86% [20].

The effectiveness and capability of CNNs were also studied by classifying eight skin diseases. Pre-

trained state-of-the-art architectures such as InceptionResNet v2, ResNet 152, DenseNet 201, and Inception v3 were used. 10135 dermoscopy images were used: 10015 from HAM10000 and 120 from PH2. This dataset includes eight types of skin lesion: basal cell carcinoma, melanoma, keratosis, vascular lesions, melanocytic nevi, benign keratosis, atypical nevi, and dermatofibroma. The results showed that the models outperform dermatologists by at least 11%. The best ROC AUC values for basal cell carcinoma and melanoma were 99.30% (DenseNet 201) and 94.40% (ResNet 152), compared to 88.82% and 82.26% for dermatologists. Also, DenseNet 201 had the best micro- and macro-averaged AUC values for the overall classification, at 98.79% and 98.16%, respectively [21].

Another team created the computer algorithms used for their analysis and made them publicly accessible. They developed a ResNet model that was fine-tuned with 19,398 images for training purposes and used this classifier to recognize twelve different skin diseases on a public dataset. For classification, the CNN achieved 0.96 for melanoma, 0.83 for squamous cell carcinoma, 0.96 for basal cell carcinoma, and 0.82 for intraepithelial carcinoma [22].

A deep CNN was trained using 4867 clinical images obtained from the University of Tsukuba Hospital between 2003 and 2016, from 1842 patients with skin cancers and tumors. These images cover 14 malignant and benign conditions. The same analysis was performed by 13 board-certified dermatologists and nine medical trainees. The classification accuracy achieved by the trained DCNN was 76.5%, with 89.5% specificity and 93.3% sensitivity. The conclusion is that the DCNN classified skin lesions more accurately than the board-certified dermatologists [23].

Other authors used a convolutional neural network with Fisher vector encoding and an SVM classifier. They alleviated small-dataset issues by giving samples or sub-images, rather than whole images, as input to the CNN. 1279 skin images from the ISBI 2016 dataset were used; the proposed technique achieved an accuracy of 83.09% [24].

CNNs may be misled into incorrect classification by artificially perturbing natural-world images. This sort of manipulation of an input image to deceive the network into an incorrect classification is called an "adversarial attack" [34]. This paper will discuss a number of these adversarial attacks that might arise accidentally in clinical settings [34].

Alterations in the rotation and translation of the input image can cause misclassification of malignant melanoma as a benign nevus. The authors of one study implemented a CNN for melanoma versus benign melanocytic nevi classification. They fine-tuned Inception v3, pretrained on skin lesion images from the ISIC 2018 dataset. They implemented an FGSM attack that changed the blue, green, and red values of every pixel in the input image according to its gradient magnitude, which can corrupt the final classification [25]. The second attack they performed was the 3-pixel attack, modifying only three pixels in the image and leaving the rest unchanged; they found that this also led to a successful attack. CNNs are usually trained with images from dermoscopy, whose color balance is influenced by skin pigmentation, image capture, illumination, and processing. The authors tested whether image color could influence the accuracy of skin lesion classification, altering the RGB colors of various skin cancer images so that they were misclassified as benign nevi. They also tried to mitigate this by training the CNN with varied image colors and found a 33% decrease in adversarial attack rates.

Next, they tested the images with additional variation by subtracting ten units from the green channel, leading to a 235% increase in false negatives for skin cancer diagnosis. Second, they tested whether rotation of the images affected classification correctness. They applied an evolution-based optimization technique, permitting arbitrary combinations of rotation up to 360 degrees and translation up to 50 pixels in the horizontal and vertical directions for an input image size of 299x299 pixels, and found that 45.6% of the test images deceived the classifier into classifying skin cancer as a benign nevus with simple rotation and translation of the image. They also tested the images with 45-degree and 180-degree rotations, and in each case the false-negative rate increased [26].

A real-world study has shown a considerable difference in skin classification results due to the CNN's accuracy in classifying skin cancer. According to this study, the device with which the photographs were taken, whether an iPhone, a Samsung, or a DSLR, gave different results. This study will look into the classification accuracy of a CNN built using the method presented by [8].

The CNN will be designed to tell apart skin cancer from keratosis and solar lentigo, which can be difficult [9, 10]. As a result, this paper establishes a new set of results for the Stanford study's strategy [8]. This is vital, since the algorithm must distinguish between multiple different benign and malignant lesions to produce an accurate diagnosis. Therefore, we study how the performance of a state-of-the-art CNN depends on various kinds of skin lesions.

In this study, the intent was that the CNN would classify the binary comparisons between nevus versus skin cancer and keratosis versus basal and squamous cell carcinoma with the same accuracy as the CNN presented by [6]. That was not the case: the CNN accuracy for nevus versus skin cancer was 23 percentage points worse, and keratosis versus basal and squamous cell carcinoma was five percentage points worse.

However, nevus versus skin cancer is comparable to [32], who obtained an accuracy of 74.3 ± 1.3% when training on one thousand images, which is about the same size as that used for our training categories. The CNN in [32] was trained only on nevus and skin cancer, which might explain why they obtained a higher accuracy in their binary comparison. The validation results show that the CNN has a slightly lower

validation accuracy of 52.0% for the 16-way classification than the similar 9-way classification in [6] and the 23-way classification in [31]. It is also slightly lower than the two dermatologists' 9-way accuracy in [6]. It is important to note that the 16-way classification gives the CNN more choices to select from in each classification; therefore, a lower classification accuracy is expected compared to the 9-way case. These accuracy results are not fully comparable, and, potentially, the CNN would perform better on the 9-way classification in this study. For the 3-way classification, the CNN obtained an accuracy of 68.3%, below the accuracy obtained by [6]. However, our network performed better than the two dermatologists in [6]. The validation results thereby show that the performance of the CNN in this study is comparable to that of the CNN and the dermatologists in [6] at classifying skin lesions. This paper built a method that successfully analyzes the dataset and detects skin cancer. We ran this method on Kaggle.

In section one, an introduction has been presented. In section two, methods and materials are given. In section three, results and analysis will be presented. The conclusion and future work will be presented in section four.

II. METHOD AND MATERIALS

A convolutional neural network (CNN) tries to imitate how the visual cortex of the brain recognizes images. To achieve better results with image classification, feature extraction should be used [37]. Before CNNs existed, these feature extractors were designed by specialists in each field of the images to be classified. With CNNs, however, the feature extractor is included in the training process. The feature extractors consist of several convolutional layers and pooling layers. The convolutional layer can be seen as a digital filter; the pooling layer reduces the dimension of the image by combining nearby pixels into one pixel [37].

CNNs are one of the reasons there have been significant advances in image recognition in recent years. LeNet5 set up what has now become a commonplace structure for CNNs [38]: stacked convolutional layers, which may be followed by normalization and max-pooling layers, followed in turn by one or more fully-connected layers. Compared to a feed-forward network with equally sized layers, a CNN has fewer parameters and connections. This makes it easier to train, though in theory its best performance is slightly below that of a feed-forward network [38]. CNNs are computationally demanding when applied at large scale to high-resolution images. However, with the GPUs available since 2012 and optimized implementations of 2D convolution, it has become possible to do this with affordable computing resources [38].

Figure 1: Skin lesion to segmented lesion, then learning model and prediction of benign or malignant [41]

CNNs can classify skin lesions in two different ways. One is that a CNN is applied as a feature extractor pretrained on large datasets like ImageNet [8]; in this case, classification is performed by another classifier, such as an artificial neural network, a support vector machine, or k-nearest neighbors. The second is that, through end-to-end learning, a CNN directly learns the relationship between the raw pixel data and the class labels. In contrast to the workflow used in classical machine learning, human expertise is not needed for feature extraction; it is no longer a separate step, because it is an integral part of the classification step. A CNN trained by end-to-end learning can again be divided into two types: learning from scratch or transfer learning. The final layer of Inception v3 (see Figure 3) was retrained with the skin lesion dataset; transfer learning was used owing to the relatively small amount of data available. This CNN was trained with backpropagation. All layers were set to use an identical global learning rate of 0.001 and a decay factor of 16 every 30 epochs. RMSProp was used with a decay of 0.9, a momentum of 0.9, and an epsilon of 0.1. The batch size was set to one hundred. Google's TensorFlow [36] was used to train, validate, and test the CNN. The images were augmented during training by randomly rotating them between 0 and 359 degrees. Furthermore, for every image, the largest inscribed rectangle (see Figure 2) was cropped from the image and flipped vertically with a probability of 0.5. All images were resized to 224x224 pixels to match the input dimension of the network.

Figure 5: Variations of the largest inscribed rectangle [35].
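As an illustration of the pooling operation described in this section (a generic sketch, not code from the paper), 2×2 max-pooling can be expressed in a few lines of NumPy:

```python
import numpy as np

def max_pool_2x2(image):
    """Downsample a 2D array by taking the max of each non-overlapping 2x2 block."""
    h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even for this sketch"
    # Reshape so each 2x2 block gets its own pair of axes, then reduce over them.
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

feature_map = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
])
pooled = max_pool_2x2(feature_map)
print(pooled)  # [[4 2], [2 8]] -- a 4x4 map reduced to 2x2
```

Each output pixel summarizes a 2×2 neighborhood, which is exactly how the pooling layer "combines nearby pixels into one pixel" and halves each spatial dimension.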

The shape of the new training set is (2818, 224, 224, 3), the shape of the new test set is (330, 224, 224, 3), and the shape of the validation set is (149, 224, 224, 3). After that, we re-generate (augment) the data, and the new number of training samples is 5636. Then we normalize the values: the training data shape is (5636, 224, 224, 3), with a minimum value of 0.0 and a maximum value of 1.0. After that, we build and train the model. We built a CNN model in Keras; Table 1 shows the details.
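The re-generation and normalization steps described above can be sketched with NumPy. The flip-based doubling shown here is our assumption, since the paper does not state which augmentation takes 2818 samples to 5636; only the doubling and the [0, 1] scaling are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the loaded training images: 8-bit RGB, shape (N, 224, 224, 3).
x_train = rng.integers(0, 256, size=(4, 224, 224, 3), dtype=np.uint8)

# "Re-generate" the data: append a vertically flipped copy of every image,
# doubling the number of training samples (2818 -> 5636 in the paper).
x_train = np.concatenate([x_train, x_train[:, ::-1, :, :]], axis=0)

# Normalize pixel values from [0, 255] down to [0.0, 1.0].
x_train = x_train.astype(np.float32) / 255.0

print(x_train.shape)  # (8, 224, 224, 3) -- twice the original sample count
```

After these two steps, every value fed to the network lies in [0.0, 1.0], matching the minimum and maximum reported in the text.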

Fig. 3. Best/worst classification results. Upper row (melanoma): images 1a and 1b were associated with the highest sensitivity (all 157 dermatologists opted for biopsy); for image 2, biopsy was recommended by 30 dermatologists (127 dermatologists opted to "reassure the patient"). Lower row, benign nevi (biopsy-verified): image 3a (156 opted to "reassure the patient"; one dermatologist recommended biopsy) and image 3b (157 dermatologists opted to "reassure the patient") were associated with the best specificity; for image 4, biopsy was recommended by 156 of the 157 dermatologists [40].

Figure 5: Full system block diagram: import functions, save the file path of each image, separate the images into classes, load them into memory, run the data generator and normalize the values, build and train the CNN model in Keras, plot the loss and accuracy history, and evaluate accuracy on the test set.

TABLE I. MODEL DETAILS

Layer (Type)                Output Shape        Param #
Vgg16 (Model)               (None, 7, 7, 512)   14714688
Flatten_1 (Flatten)         (None, 25088)       0
Dense_1 (Dense)             (None, 32)          802848
Leaky_re_lu_1 (LeakyReLU)   (None, 32)          0
Dense_2 (Dense)             (None, 16)          528
Leaky_re_lu_2 (LeakyReLU)   (None, 16)          0
Dense_3 (Dense)             (None, 1)           17

Total params: 15,518,081
Trainable params: 803,393
Non-trainable params: 14,714,688

Fig. 4. Best/worst classification results. Upper row (melanoma): images 1a and 1b were associated with the best sensitivity (all dermatologists opted for biopsy); for image 2, biopsy was suggested in 45 cases (100 dermatologists opted to "reassure the patient"). Lower row (benign nevi): images 3a and 3b were associated with the best specificity (100% opted to "reassure the patient"); image 4 had the lowest specificity (three dermatologists opted to reassure the patient and 142 suggested biopsy) [40].
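The layer stack in Table I can be reproduced in Keras roughly as follows. This is a sketch under assumptions: the paper does not state the optimizer, the loss, or whether ImageNet weights were loaded (weights=None here simply avoids a download), only that the VGG-16 base is frozen, leaving 803,393 of the 15,518,081 parameters trainable:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Frozen VGG-16 convolutional base; include_top=False yields the
# (None, 7, 7, 512) output shown in Table I for 224x224x3 inputs.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # only the small classification head is trained

model = models.Sequential([
    base,
    layers.Flatten(),                       # (None, 25088)
    layers.Dense(32),                       # 25088*32 + 32 = 802,848 params
    layers.LeakyReLU(),
    layers.Dense(16),                       # 32*16 + 16 = 528 params
    layers.LeakyReLU(),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
# Optimizer and loss are assumptions; the paper only specifies the layers.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Summing the head's parameter counts (802,848 + 528 + 17) reproduces the 803,393 trainable parameters in Table I, with the 14,714,688 VGG-16 parameters held fixed.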
In this paper, we write our code in Python and use TensorFlow on the backend. We use a CNN (Convolutional Neural Network), a deep learning algorithm. In this project, we import the required modules: os, gc, numpy (np), seaborn (sns), pandas (pd), and matplotlib.pyplot (plt); Sequential from keras.models; ImageDataGenerator from keras.preprocessing.image; a split helper from sklearn.model_selection; Dense, Conv2D, MaxPooling2D, Flatten, and Dropout from keras.layers; VGG16 from keras.applications; set_random_seed from tensorflow; and check_random_state from sklearn.utils. Then we save the file path of each image. The train set has a size of 2637 and the test set a size of 660, with 1800 benign-labeled samples and 1497 malignant. Then we separate them into different classes and load the images into memory.

Then we train the model: we train on 5636 samples and validate on 149 samples, for 10 epochs. Table 2 below shows the details.

TABLE II. TRAIN THE MODEL

Epoch   Loss     Acc.     Val_loss  Val_acc
1       0.4790   0.7585   0.3532    0.8322
2       0.3549   0.8373   0.3724    0.8188
3       0.3131   0.8650   0.3434    0.8389
4       0.2886   0.8740   0.4584    0.8121
5       0.2961   0.8708   0.3619    0.8389
6       0.2446   0.8962   0.4101    0.8121
7       0.2284   0.9037   0.3817    0.8322
8       0.2070   0.9171   0.4076    0.8591
9       0.2277   0.9003   0.4288    0.8322
10      0.1957   0.9168   0.4034    0.8322

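The best validation epoch can be read off the training history in Table II programmatically; a small stand-alone sketch using the table's numbers:

```python
# (epoch, train_loss, train_acc, val_loss, val_acc) rows copied from Table II.
history = [
    (1, 0.4790, 0.7585, 0.3532, 0.8322),
    (2, 0.3549, 0.8373, 0.3724, 0.8188),
    (3, 0.3131, 0.8650, 0.3434, 0.8389),
    (4, 0.2886, 0.8740, 0.4584, 0.8121),
    (5, 0.2961, 0.8708, 0.3619, 0.8389),
    (6, 0.2446, 0.8962, 0.4101, 0.8121),
    (7, 0.2284, 0.9037, 0.3817, 0.8322),
    (8, 0.2070, 0.9171, 0.4076, 0.8591),
    (9, 0.2277, 0.9003, 0.4288, 0.8322),
    (10, 0.1957, 0.9168, 0.4034, 0.8322),
]

best = max(history, key=lambda row: row[4])  # epoch with the highest val_acc
print(best[0], best[4])                      # 8 0.8591

# Training loss keeps falling while validation loss drifts upward after
# epoch 3 -- a mild sign of overfitting in the small classification head.
gap = history[-1][3] - history[-1][1]        # val_loss - train_loss at epoch 10
print(round(gap, 4))                         # 0.2077
```

The best validation accuracy (0.8591 at epoch 8) sits close to the final 87.6% test accuracy reported in the next section.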
III. TEST AND ANALYSIS 0.2961 and validation loss is 0.3619,In 6th epoch training
loss is 0.2446 and validation loss is 0.4101,In 7th epoch
The results show that the CNN obtained in this study does training loss is 0.2284 and validation loss is 0.3817,In 8th
not have a more difficult time classifying skin cancer against other benign lesions. As noted in [29, 30], solar lentigo and keratosis are visually similar to skin cancer. The results therefore reinforce the quality of the CNN in the classification of skin lesions. However, none of the comparisons against skin cancer achieved an accuracy higher than the test of keratosis versus basal and squamous cell carcinoma. This means that the CNN has a more difficult time distinguishing benign lesions from skin cancer than it does distinguishing benign lesions from other malignant cancer types.

Figure 6 shows the relationship between validation and training accuracy over the ten training epochs. In the first epoch, validation accuracy is 0.83 and training accuracy is 0.75; in the second, 0.81 and 0.83; in the third, 0.83 and 0.86; in the fourth, 0.81 and 0.87; in the fifth, 0.83 and 0.75; in the sixth, 0.81 and 0.89; in the seventh, 0.83 and 0.90; in the eighth, 0.85 and 0.91; in the ninth, 0.83 and 0.90; and in the tenth, 0.83 and 0.91.

Figure 6: Validation accuracy and training accuracy over 10 epochs

Figure 7 shows the relationship between validation and training loss. In the first epoch, training loss is 0.4790 and validation loss is 0.3532; in the second, 0.3549 and 0.3724; in the third, 0.3131 and 0.3434; in the fourth, 0.2886 and 0.4584; in the eighth, 0.2070 and 0.4076; in the ninth, 0.2277 and 0.4288; and in the tenth, 0.1957 and 0.4034.

Figure 7: Validation loss and training loss over 10 epochs

Figure 8 shows two sample images of benign skin lesions.

Figure 8: Skin cancer, benign

Figure 9 shows two sample images of malignant skin lesions.

Figure 9: Skin cancer, malignant

Accuracy on test set: 87.6%
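The epoch-by-epoch accuracies above are the kind of record a Keras-style training loop returns in its history object. As a minimal, self-contained sketch (the variable names are illustrative, not taken from the paper's code), the reported values can be summarized programmatically:

```python
# Per-epoch accuracies transcribed from the results above (epochs 1-10).
train_acc = [0.75, 0.83, 0.86, 0.87, 0.75, 0.89, 0.90, 0.91, 0.90, 0.91]
val_acc   = [0.83, 0.81, 0.83, 0.81, 0.83, 0.81, 0.83, 0.85, 0.83, 0.83]

def best_epoch(history):
    """Return the 1-indexed epoch whose validation accuracy is highest."""
    return max(range(len(history)), key=history.__getitem__) + 1

best = best_epoch(val_acc)
gap = train_acc[-1] - val_acc[-1]  # train/validation gap after the last epoch
print(f"best validation epoch: {best} (val_acc={val_acc[best - 1]:.2f})")
print(f"final train/validation gap: {gap:.2f}")
```

The widening gap between final training accuracy (0.91) and validation accuracy (0.83), together with the rising validation loss in Figure 7, is the usual sign of mild overfitting.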

0174
Authorized licensed use limited to: Hochschule Heilbronn. Downloaded on April 23,2024 at 09:03:28 UTC from IEEE Xplore. Restrictions apply.
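The reported test-set accuracy (87.6%) is the fraction of test images whose thresholded prediction matches the ground-truth label. A minimal sketch of that computation, assuming sigmoid outputs and a 0.5 threshold (the toy probabilities below are illustrative, not the paper's data):

```python
def binary_accuracy(probs, labels, threshold=0.5):
    """Fraction of correct predictions after thresholding sigmoid outputs."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)

# Toy test set: 8 lesions (1 = malignant, 0 = benign), 7 predicted correctly.
probs = [0.91, 0.12, 0.67, 0.45, 0.88, 0.30, 0.74, 0.52]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
print(f"test accuracy: {binary_accuracy(probs, labels):.1%}")  # 87.5%
```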
IV. CONCLUSION AND FUTURE WORK

In conclusion, this study explored the capacity of deep convolutional neural networks in the classification of benign versus malignant skin cancer. It appears that state-of-the-art deep learning architectures trained on dermoscopy images can outperform dermatologists. We found that by using very deep convolutional neural networks with transfer learning and fine-tuning them on dermoscopy images, better diagnostic accuracy can be achieved compared to expert physicians and clinicians. Although no preprocessing step is applied in this paper, the experimental results are very promising. These models can readily be implemented in dermoscopy systems, or even on smartphones, to assist dermatologists. For further improvement, more diverse datasets (varied categories, different ages) with many more dermoscopy images and balanced samples per class are required. Using the metadata of each image could also help increase the accuracy of the model. The total number of training samples is 5,636, and the accuracy on the test set is 0.8758 (87.6%). The results indicate that a CNN developed with the strategy given in [6] would not perform worse for the binary classification of solar lentigo versus melanoma and keratosis versus melanoma than for mole versus malignant melanoma. For the new binary classification of keratosis versus basal and squamous cell carcinoma, the CNN would perform slightly worse. As a result, of the classifications tested, mole versus malignant melanoma appears to be the most difficult for the CNN. However, this is not certain, since the study was not able to replicate the strategy of [8] in every detail. A scarcity of statistical evidence in the results further weakens this conclusion, since no statistically significant differences between the AUCs could be established. There is an opportunity for continued study in attempting to build a CNN with accuracy greater than or equal to that of [6] for the classification of mole versus malignant melanoma, and thereafter to perform the same binary comparisons as given in this report. It would also be interesting to compare the performance of dermatologists to our results for the classification of solar lentigo and keratosis versus malignant melanoma. Moreover, we were able to identify two skin lesions that are confirmed to be visually similar to malignant melanoma. Dermatologists may also identify other binary comparisons that would need testing before using the CNN in a real clinical setting. A further analysis that could be done is to examine how the CNN performs for skin of various colors; this will be necessary to determine whether the CNN could be used for all humans.

V. REFERENCES

[1] National Cancer Institute. What Is Cancer?; 2015. Accessed: 2018-03-03. Available from: https://www.cancer.gov/about-cancer/understanding/what-is-cancer#tissuechanges-not-cancer.
[2] Pathan S, Prabhu KG, Siddalingaswamy P. Techniques and algorithms for computer aided diagnosis of pigmented skin lesions—A review. Biomedical Signal Processing and Control. 2018;39:237–262.
[3] The Skin Cancer Foundation. Skin Cancer Information; 2018. Accessed: 2018-04-25. Available from: https://www.skincancer.org/skin-cancer-information.
[4] Kanimozhi T, Murthi A. Computer Aided Melanoma Skin Cancer Detection Using Artificial Neural Network Classifier. Journal of Selected Areas in Microelectronics (JSAM). 2016;8(2):35–42.
[5] National Cancer Institute. Skin Cancer (Including Melanoma)—Patient Version; 2018. Accessed: 2018-03-22. Available from: https://www.cancer.gov/types/skin.
[6] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115.
[7] Masood A, Ali Al-Jumaily A. Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms. International Journal of Biomedical Imaging. 2013;2013.
[8] Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized Medical Imaging and Graphics. 2007;31(4-5):198–211.
[9] Abdel-Zaher AM, Eldeib AM. Breast cancer classification using deep belief networks. Expert Systems with Applications. 2016;46:139–144.
[10] Wulfkuhle JD, Liotta LA, Petricoin EF. Early detection: proteomic applications for the early detection of cancer. Nature Reviews Cancer. 2003;3(4):267.
[11] Seeja RD, Suresh A. Deep Learning Based Skin Lesion Segmentation and Classification of Melanoma Using Support Vector Machine (SVM). Asian Pac J Cancer Prev. 2019;20:1555–1561. doi: 10.31557/APJCP.2019.20.5.1555.
[12] Kawahara J, BenTaieb A, Hamarneh G. Deep features to classify skin lesions. In: Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI); April 13-16, 2016; Prague, Czech Republic. 2016.
[13] Thompson F, Jeyakumar MK. Vector based classification of dermoscopic images using SURF. IJAER. 2017;12:1758–64.
[14] Mahbod A, Ecker R, Ellinger I. Skin lesion classification using hybrid deep neural networks. Feb. 2017. [Online]. Available: https://arxiv.org/abs/1702.08434v1.
[15] Li KM, Li EC. Skin lesion analysis towards melanoma detection via end-to-end deep learning of convolutional neural networks. CoRR. abs/1807.08332, 2018.
[16] Nylund A. To be, or not to be Melanoma: Convolutional neural networks in skin lesion classification. Ph.D. dissertation, School Technol. Health, KTH Roy. Inst. Technol., Stockholm, Sweden, 2016. [Online]. Available: http://kth.diva-portal.org/smash/get/diva2:950147/FULLTEXT01.pdf.
[17] Esteva A, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–118.
[18] Albahar MA. Skin lesion classification using convolutional neural network with novel regularizer. IEEE Access. 2019;7:38306–38313.
[19] Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE. 2019;14(5):e0217293. https://doi.org/10.1371/journal.pone.0217293.
[20] Rezvantalab A, Safigholi H, Karimijeshni S. Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms. arXiv preprint arXiv:1810.10348. 2018.
[21] Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018 Jul;138(7):1529–1538.
[22] Fujisawa Y, Otomo Y, Ogata Y, Nakamura Y, Fujita R, Ishitsuka Y, Watanabe R, Okiyama N, Ohara K, Fujimoto M. Deep-learning-based, computer-aided classifier developed with a small dataset of clinical images surpasses board-certified dermatologists in skin tumour diagnosis. British Journal of Dermatology. 2019;180(2):373–381.
[23] Yu Z, Ni D, Chen S, Qin J, Li S, Wang T, Lei B. Hybrid dermoscopy image classification framework based on deep convolutional neural network and Fisher vector. In: Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE; 2017. p. 301–304.
[24] Du-Harpur X, Arthurs C, Ganier C, Woolf R, Laftah Z, Lakhan M, Salam A, et al. Clinically-relevant vulnerabilities of deep machine learning systems for skin cancer diagnosis. The Journal of Investigative Dermatology. 2020.
[25] Goodfellow IJ, Shlens J, Szegedy C. Explaining and Harnessing Adversarial Examples. arXiv [stat.ML]. 2014. Available from: http://arxiv.org/abs/1412.6572.
[26] Phillips M, Marsden H, Jaffe W, Matin RN, Wali GN, Greenhalgh J, et al. Assessment of Accuracy of an Artificial Intelligence Algorithm to Detect Melanoma in Images of Skin Lesions. JAMA Netw Open. 2019;2(10):e1913436.
[27] Choi YE, Kwak JW, Park JW. Nanotechnology for early cancer detection. Sensors. 2010;10(1):428–455.
[28] National Cancer Institute. Cancer Statistics; 2017. Accessed: 2018-04-22. Available from: https://www.cancer.gov/about-cancer/understanding/statistics.
[29] Waltz E. Computer Diagnoses Skin Cancers: Deep learning algorithm identifies skin cancers as accurately as dermatologists. IEEE Spectrum; 2017. Accessed: 2018-03-03. Available from: https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/computer-diagnoses-skin-cancers.
[30] Chan B. Solar lentigo; 2014. Accessed: 2018-04-22. Available from: https://www.dermnetnz.org/topics/solar-lentigo/.
[31] Nylund A. To be, or not to be Melanoma: Convolutional neural networks in skin lesion classification; 2016.
[32] Ridell P, Spett H, Herman P, Ekeberg Ö. Training Set Size for Skin Cancer Classification Using Google's Inception v3; 2017.
[33] Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2016 March.
[34] Bogley W, Robson R. Finding the Largest Inscribed Rectangle; 2018. Accessed: 2018-04-28. Available from: https://oregonstate.edu/instruct/mth251/cq/Stage8/Lesson/rectangle.html.
[35] Dermaamin; 2010. Accessed: 2018-04-17. Available from: http://www.dermaamin.com/site/.
[36] Dermatology Atlas; 2018. Accessed: 2018-04-17. Available from: http://www.atlasdermatologico.com.br/.
[37] Kim P. MATLAB Deep Learning With Machine Learning, Neural Networks and Artificial Intelligence; 2017.
[38] Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. Communications of the ACM. 2017 May;60(6):84–90.
[39] Hameed N, Ruskin A, Hassan KA, Hossain M. A comprehensive survey on image-based computer aided diagnosis systems for skin cancer. In: Software, Knowledge, Information Management & Applications (SKIMA), 2016 10th International Conference on. IEEE; 2016. p. 205–214.
[40] Brinker TJ, Schilling B. Comparing artificial intelligence algorithms to 157 German dermatologists: the melanoma classification benchmark. European Journal of Cancer. February 2019. Available: https://www.researchgate.net/publication/331287430.
[41] Manne R. Classification of skin cancer using deep learning, convolutional neural networks - opportunities and vulnerabilities - a systematic review. International Journal for Modern Trends in Science and Technology. November 2020. Available: https://www.researchgate.net/publication/346641510.
[42] Khan MM, Hossain J, Islam K, Ovi NS, Shovon MNA, et al. Design and Study of a mmWave Wearable Textile Based Compact Antenna for Healthcare Application. International Journal of Antennas and Propagation. 2021; Article ID 6506128:1–17.
[43] Ifraz GM, Rashid MH, Tazin T, Khan MM. Comparative Analysis for Prediction of Kidney Disease Using Intelligent Machine Learning Methods. Computational and Mathematical Methods in Medicine. 2021; Article ID 6141470. https://doi.org/10.1155/2021/6141470.
[44] Hasan MDK, Ahmed S, Abdullah ZME, Khan MM, Masud M, et al. Deep Learning Approaches for Detecting Pneumonia in COVID-19 Patients by Analyzing Chest X-Ray Images. Mathematical Problems in Engineering. 2021; Article ID 9929274:1–8.
[45] Faruk O, Ahmed E, Ahmed S, Tabassum A, Tazin T, Khan MM. A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning. Journal of Healthcare Engineering. Hindawi; 2021.
