Skin Cancer Detection Using Convolutional Neural Network

Dipu Chandra Malo, Md. Mustafizur Rahman, Jahin Mahbub, Mohammad Monirujjaman Khan
Department of Electrical and Computer Engineering, North South University, Bashundhara R/A, Dhaka-1229, Bangladesh
[email protected], [email protected], [email protected]
2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC) | 978-1-6654-8303-2/22/$31.00 ©2022 IEEE | DOI: 10.1109/CCWC54503.2022.9720751
0170
Authorized licensed use limited to: Hochschule Heilbronn. Downloaded on April 23,2024 at 09:03:28 UTC from IEEE Xplore. Restrictions apply.
trained state-of-the-art architectures such as InceptionResNet v2, ResNet 152, DenseNet 201, and Inception v3 were used. A total of 10,135 dermoscopy images were used: 10,015 from HAM10000 and 120 from PH2. This dataset includes eight classes of skin lesion: basal cell carcinoma, melanoma, actinic keratosis, vascular lesions, melanocytic nevi, benign keratosis, atypical nevi, and dermatofibroma. The results showed that the models outperform dermatologists by at least 11%. The best ROC AUC values for basal cell carcinoma and melanoma are 99.30% (DenseNet 201) and 94.40% (ResNet 152), compared to 88.82% and 82.26% for the dermatologists. DenseNet 201 also had the best micro- and macro-averaged AUC values for the overall classification, at 98.79% and 98.16%, respectively [21].

Another team created their own computer algorithm for their analysis and made it publicly accessible. They developed a ResNet model that was fine-tuned with 19,398 training images and used the resulting classifier to distinguish twelve different types of skin disease on a public dataset. For classification, the CNN achieved 0.96 for melanoma, 0.83 for squamous cell carcinoma, 0.96 for basal cell carcinoma, and 0.82 for intraepithelial carcinoma [22].

A deep CNN was trained using 4,867 clinical images obtained from the University of Tsukuba Hospital between 2003 and 2016, from 1,842 patients with carcinomas and tumors. These images cover 14 malignant and benign conditions. The same classification task was also given to 13 board-certified dermatologists and nine medical trainees. The trained DCNN achieved a classification accuracy of 76.5%, with 89.5% specificity and 93.3% sensitivity. The conclusion is that the DCNN classified skin lesions more accurately than the board-certified dermatologists [23].

The authors of [24] used a convolutional neural network with Fisher vector encoding and an SVM classifier. They mitigated the small-dataset problem by feeding samples, or sub-images, to the CNN rather than whole images. With 1,279 skin images from the ISBI 2016 dataset, the proposed technique achieved an accuracy of 83.09% [24].

CNNs can be misled into incorrect classifications by artificially perturbing real-world images. This kind of manipulation of an input image to deceive the network into an incorrect classification is called an "adversarial attack" [34]. This paper will discuss a number of these adversarial attacks that might arise accidentally in clinical settings, drawing on published analyses and findings [34].

Alterations in the rotation and translation of the input image can cause melanoma to be misclassified as a benign nevus. The authors implemented a CNN for melanoma versus benign melanocytic nevus classification by fine-tuning a pre-trained Inception v3 on skin lesion images from the ISIC 2018 dataset. They implemented an FGSM attack that changed the blue, green, and red values of every pixel in the input image according to the magnitude of the loss gradient, which can flip the final classification [25]. The second attack was a 3-pixel attack, modifying only three pixels in the image and leaving the rest unchanged; they found that this also led to a successful attack.

CNNs are usually trained with dermoscopy images, whose color balance is influenced by skin pigmentation, image capture, illumination, and processing. The authors therefore tested whether image color could influence the accuracy of skin lesion classification. They perturbed various melanoma images with alterations in their RGB color values, which led them to be misclassified as benign nevi. They also tried to mitigate this by training the CNN with varied image colors and found a 33% decrease in adversarial attack success rates. Next, they tested the images with an additional variation, subtracting ten units from the green channel, which led to a 235% increase in false negatives for melanoma diagnosis.

They also tested whether rotating the images affected classification correctness. Using an evolution-based optimization technique that permitted arbitrary combinations of rotation up to 360 degrees and translation up to 50 pixels in the horizontal and vertical directions of the 299x299-pixel input image, they found that 45.6% of the test images deceived the classifier into labeling melanoma as a benign nevus through simple rotation and translation. Testing the images with fixed 45-degree and 180-degree rotations likewise increased the false-negative rate in each case [26].

A real-world study has shown a considerable difference in skin classification results depending on how the photographs were taken: whether an iPhone, a Samsung phone, or a DSLR was used, each gave different classification results. This study will look into the classification accuracy of a CNN created using the method presented by [8].

The CNN will be designed to distinguish melanoma from keratosis and solar lentigo, which can be difficult [9, 10]. As a result, this paper can establish a new set of results for the Stanford study's strategy [8]. This matters because the algorithm must distinguish between multiple different benign and malignant lesions to produce an accurate diagnosis. Therefore, we study how the performance of a state-of-the-art CNN depends on the kinds of skin lesions involved.

In this study, the intent was that the CNN would classify the binary comparisons of nevus versus melanoma and keratosis versus basal and squamous cell carcinoma with the same accuracy as the CNN presented by [6]. That was not the case: the CNN accuracy for nevus versus melanoma was 23 percentage points worse, and for keratosis versus basal and squamous cell carcinoma it was five percentage points worse.

However, the nevus versus melanoma result is comparable to [32], who obtained an accuracy of 74.3 ± 1.3% when training on one thousand images, roughly the size of most of the training classes used here. The CNN in [32] was trained only on nevi and melanomas, which might explain their higher accuracy in this binary comparison.
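The FGSM attack discussed earlier perturbs every pixel by a small step in the direction of the sign of the loss gradient. Below is a minimal NumPy sketch of that perturbation step only; the toy image, the stand-in gradient, and the epsilon value are illustrative assumptions, not details from the cited work.

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon):
    """One FGSM step: move every pixel (all RGB channels) by epsilon in the
    direction of the sign of the loss gradient w.r.t. the input, then clip
    back to the valid pixel range."""
    return np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

# Toy demo: a random image and a stand-in gradient (in a real attack the
# gradient comes from backpropagating the classifier's loss to the input).
rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(4, 4, 3))  # stand-in lesion image
grad = rng.normal(size=(4, 4, 3))              # hypothetical loss gradient
adv = fgsm_perturb(image, grad, epsilon=0.05)
print(adv.shape)  # (4, 4, 3); each pixel shifted by at most 0.05
```

Because every pixel moves by at most epsilon, the adversarial image looks unchanged to a human while the classifier's decision can flip.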
The validation results show that the CNN has a slightly lower validation accuracy, 52.0%, in the 16-way classification than in the comparable 9-way classification [6] and the 23-way classification [31]. It is also slightly lower than the two dermatologists' 9-way accuracy in [6]. It is important to note that the 16-way classification gives the CNN more options to choose from during classification, so a lower accuracy than in the 9-way case is to be expected. These accuracy results are therefore not fully comparable, and the CNN would potentially perform better in a 9-way classification in this study. In the 3-way classification, the CNN obtained an accuracy of 68.3%, below the accuracy obtained by [6]. However, our network performed better than the two dermatologists in [6]. The validation results thereby show that the performance of the CNN in this study is comparable to that of the CNN and the dermatologists in [6] when classifying skin lesions. This paper built a method that successfully analyzes the dataset and successfully detects skin cancer. We ran this method on Kaggle.

In section one, an introduction has been presented. In section two, methods and materials are given. In section three, results and analysis will be presented. Conclusion and future work will be presented in section four.

II. METHOD AND MATERIALS

Figure 1: Skin lesion to segmented lesion, then learning model and prediction of benign or malignant [41].

A convolutional neural network (CNN) tries to imitate how the visual cortex of the brain recognizes images. To obtain better results in image classification, feature extraction should be used [37]. Before CNNs existed, these feature extractors were designed by specialists in each field of the images to be classified. With CNNs, however, the feature extractor is included in the training process. The feature extractors consist of several convolutional layers and pooling layers. A convolutional layer can be seen as a digital filter; a pooling layer reduces the dimension of the image by combining nearby pixels into one pixel [37].

CNNs are one of the reasons for the significant advances in image recognition in recent years. LeNet5 set up what has now become a standard structure for CNNs [38]: stacked convolutional layers, optionally followed by normalization and max-pooling layers, and then one or more fully-connected layers. Compared to a feed-forward network with equally sized layers, a CNN has fewer parameters and connections. This makes it easier to train, although in theory its best performance is somewhat below that of a feed-forward network [38]. CNNs are computationally demanding when applied at large scale to high-resolution images; however, with the GPUs available since 2012 and optimized implementations of 2D convolution, this has become possible with reasonable computational resources [38].

CNNs can classify skin lesions in two alternative ways. One is to apply the CNN as a feature extractor pretrained on large datasets such as ImageNet [8]; in this case, classification is performed by another classifier, such as a support vector machine, an artificial neural network, or k-nearest neighbors. The second is end-to-end learning, in which the CNN directly learns the link between the raw pixel data and the class labels. In contrast to the classical machine learning workflow, no human expertise is needed for feature extraction; it is no longer a separate step but an integral part of the classification step. Here, the CNN is trained by end-to-end learning, which is again divided into two different types: learning from scratch and transfer learning. The final layer of Inception v3 (see Figure 3) was retrained with the skin lesion dataset; transfer learning was used owing to the relatively small amount of data available. The CNN was trained with backpropagation. All layers were set to use an identical global learning rate of 0.001 with a decay factor of 16 every 30 epochs. RMSProp was used with a decay of 0.9, a momentum of 0.9, and an epsilon of 0.1. The batch size was set to 100. Google's TensorFlow [36] was used to train, validate, and test the CNN. The images were augmented by randomly rotating them between 0 and 359 degrees during training. Furthermore, for every image, the largest inscribed rectangle (see Figure 2) was cropped from the image and flipped vertically with a probability of 0.5. All images were resized to 224x224 pixels, since this is the input dimension Inception v3 was originally trained on.

Figure 5: Variations of the largest inscribed rectangle [35].
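The transfer-learning setup described above (a pre-trained Inception v3 with its final layer retrained, RMSProp with a learning rate of 0.001, decay 0.9, momentum 0.9, and epsilon 0.1) could be sketched in Keras roughly as follows. The two-class head, the frozen base, and `weights=None` (so the sketch runs without downloading ImageNet weights) are assumptions for illustration, not the authors' exact code.

```python
import tensorflow as tf

NUM_CLASSES = 2  # benign vs. malignant; assumption for this sketch

base = tf.keras.applications.InceptionV3(
    include_top=False, weights=None,
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # transfer learning: only the new final layer trains

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # retrained head
])

# RMSProp with the stated decay (rho) 0.9, momentum 0.9, epsilon 0.1, and a
# global learning rate of 0.001; the factor-16 decay every 30 epochs could be
# applied with a LearningRateScheduler callback, and batch_size=100 would be
# passed to model.fit.
opt = tf.keras.optimizers.RMSprop(
    learning_rate=0.001, rho=0.9, momentum=0.9, epsilon=0.1)
model.compile(optimizer=opt, loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 2)
```

Freezing the base and retraining only the head matches the motivation given in the text: with a relatively small dataset, reusing pretrained features avoids overfitting.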
The shape of the training set is (2818, 224, 224, 3), the shape of the new test set is (330, 224, 224, 3), and the shape of the validation set is (149, 224, 224, 3). After that, we re-generate the data, and the new number of training samples is 5636. Then we normalize the values: the training data shape is (5636, 224, 224, 3), with a minimum value of 0.0 and a maximum value of 1.0. After that, we build and train the model. We built a CNN model in Keras; Table 1 shows the details.
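The data-preparation step above can be sketched as follows: pixel values are scaled to [0.0, 1.0] and the number of training samples is doubled from 2818 to 5636. The flip-based doubling here is an assumption standing in for whatever augmentation the paper's data generator actually used, and the demo batch is shrunk to 10 images to keep memory small.

```python
import numpy as np

def normalize_and_double(x):
    """Scale uint8 pixels to [0.0, 1.0] and append a horizontally flipped
    copy of every image, doubling the sample count (2818 -> 5636 in the
    text)."""
    x = x.astype("float32") / 255.0
    flipped = x[:, :, ::-1, :]  # flip along the width axis
    return np.concatenate([x, flipped], axis=0)

# Demo on a small stand-in batch; the real training set is (2818, 224, 224, 3).
rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(10, 224, 224, 3), dtype=np.uint8)
out = normalize_and_double(demo)
print(out.shape)  # (20, 224, 224, 3)
```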
Fig. 3: Best/worst classification results. Upper row, melanoma: images 1a and 1b were associated with the highest sensitivity (all 157 dermatologists opted for biopsy); for image 2, biopsy was recommended by 30 dermatologists (127 opted to "reassure the patient"). Lower row, benign nevi (biopsy-verified): images 3a (156 opted to "reassure the patient"; one dermatologist recommended biopsy) and 3b (157 dermatologists opted to "reassure the patient") were associated with the best specificity; for image 4, biopsy was recommended by 156 of the 157 dermatologists [40].

Figure 5: Full system block diagram. The pipeline loads the images into memory, runs the data generator, normalizes the values, builds and trains a CNN model in Keras, plots the loss and accuracy history, and reports the accuracy on the test set.
III. TEST AND ANALYSIS

The results show that the CNN obtained in this study does not have a more difficult time classifying melanoma against other benign lesions than against nevi. As stated by [29, 30], solar lentigo and keratosis are visually similar to melanoma, so the results strengthen the case for the quality of the CNN in classifying skin lesions. However, none of the comparisons against melanoma achieved an accuracy higher than the test of keratosis versus basal and squamous cell carcinoma. This means that the CNN has a more difficult time distinguishing benign lesions from melanoma than it does distinguishing benign lesions from the carcinoma types.

Figure 6 shows the relationship between validation and training accuracy over the ten epochs:

Epoch  Training accuracy  Validation accuracy
1      0.75               0.83
2      0.83               0.81
3      0.86               0.83
4      0.87               0.81
5      0.75               0.83
6      0.89               0.81
7      0.90               0.83
8      0.91               0.85
9      0.90               0.83
10     0.91               0.83

Figure 7 shows the validation and training loss over the ten epochs; from the fifth epoch onward the values are:

Epoch  Training loss  Validation loss
5      0.2961         0.3619
6      0.2446         0.4101
7      0.2284         0.3817
8      0.2070         0.4076
9      0.2277         0.4288
10     0.1957         0.4034

In Figure 8, two example images of skin lesions are shown, both classified as benign.
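The epoch-by-epoch accuracies above can be collected into lists so that the best validation epoch is read off programmatically (values copied from the text):

```python
# Per-epoch history as reported in the text (epochs 1 through 10).
val_acc   = [0.83, 0.81, 0.83, 0.81, 0.83, 0.81, 0.83, 0.85, 0.83, 0.83]
train_acc = [0.75, 0.83, 0.86, 0.87, 0.75, 0.89, 0.90, 0.91, 0.90, 0.91]

# Pick the epoch with the highest validation accuracy (1-indexed).
best_epoch = max(range(len(val_acc)), key=val_acc.__getitem__) + 1
print(best_epoch, val_acc[best_epoch - 1])  # epoch 8, validation accuracy 0.85
```

Validation accuracy peaks at epoch 8 while training accuracy keeps climbing, consistent with the growing validation loss in Figure 7, a mild sign of overfitting in the later epochs.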
IV. CONCLUSION AND FUTURE WORK

In conclusion, this study explored the capacity of deep convolutional neural networks for the classification of benign versus malignant skin lesions. It appears that state-of-the-art deep learning architectures trained on dermoscopy images outperform dermatologists. We found that with very deep convolutional neural networks, using transfer learning and fine-tuning them on dermoscopy images, far better diagnostic accuracy can be achieved than that of expert physicians and clinicians. Although no preprocessing step is applied in this paper, the experimental results are very promising. These models can be readily implemented in dermoscopy systems, or even on smartphones, to assist dermatologists. To facilitate further improvement, more diverse datasets (varied categories, diverse ages) with many more dermoscopy images and balanced samples per class are required. Using the metadata of each image could help increase the accuracy of the model. The new number of training samples is 5636, and the accuracy on the test set is 0.8758.

The results indicate that a CNN developed by the strategy given in [6] would not perform worse for the binary classification of solar lentigo versus melanoma and keratosis versus melanoma, compared to nevus versus melanoma. For the new binary classification of keratosis versus basal and squamous cell carcinoma, in comparison, the CNN would perform slightly worse. As a result, of the classifications tested, nevus versus melanoma appears to be the most difficult for the CNN. However, this is not certain, since the study was not able to mimic the strategy of [8] in every detail. A scarcity of statistical evidence in the results further weakens the conclusion, since no statistically significant differences between the AUCs could be established. There is scope for continued study in attempting to build a CNN with greater or equal accuracy compared to [6] for the classification of nevus versus melanoma, and thereafter to perform the same binary comparisons as given in this report. It would also be interesting to compare the performance of dermatologists to our results for the classification of solar lentigo and keratosis versus melanoma. Moreover, we were able to find two skin lesion types that were confirmed to be visually similar to melanoma. Dermatologists may also be able to identify other binary comparisons that would need testing before using the CNN in a real clinical setting. Another analysis that could be done is to examine how the CNN performs for skin of various colors; this will be necessary to see whether the CNN could be employed for all humans.

V. REFERENCES

[1] National Cancer Institute. What Is Cancer?; 2015. Accessed: 2018-03-03. Available from: https://www.cancer.gov/aboutcancer/understanding/what-is-cancer\#tissuechanges-not-cancer.
[2] Pathan S, Prabhu KG, Siddalingaswamy P. Techniques and algorithms for computer aided diagnosis of pigmented skin lesions—A review. Biomedical Signal Processing and Control. 2018;39:237–262.
[3] The Skin Cancer Foundation. Skin Cancer Information; 2018. Accessed: 2018-04-25. Available from: https://www.skincancer.org/skin-cancer-information.
[4] Kanimozhi T, Murthi A. Computer Aided Melanoma Skin Cancer Detection Using Artificial Neural Network Classifier. Journal of Selected Areas in Microelectronics (JSAM). 2016;8(2):35–42.
[5] National Cancer Institute. Skin Cancer (Including Melanoma)—Patient Version; 2018. Accessed: 2018-03-22. Available from: https://www.cancer.gov/types/skin.
[6] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115.
[7] Masood A, Ali Al-Jumaily A. Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms. International Journal of Biomedical Imaging. 2013;2013.
[8] Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized Medical Imaging and Graphics. 2007;31(4-5):198–211.
[9] Abdel-Zaher AM, Eldeib AM. Breast cancer classification using deep belief networks. Expert Systems with Applications. 2016;46:139–144.
[10] Wulfkuhle JD, Liotta LA, Petricoin EF. Early detection: proteomic applications for the early detection of cancer. Nature Reviews Cancer. 2003;3(4):267.
[11] Seeja RD, Suresh A. Deep Learning Based Skin Lesion Segmentation and Classification of Melanoma Using Support Vector Machine (SVM). Asian Pac. J. Cancer Prev. 2019;20:1555–1561. doi:10.31557/APJCP.2019.20.5.1555.
[12] Kawahara J, BenTaieb A, Hamarneh G. Deep features to classify skin lesions. In: Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI); April 13-16, 2016; Prague, Czech Republic. 2016.
[13] Thompson F, Jeyakumar MK. Vector based classification of dermoscopic images using SURF. IJAER. 2017;12:1758–64.
[14] Mahbod A, Ecker R, Ellinger I. Skin lesion classification using hybrid deep neural networks. Feb. 2017. [Online]. Available: https://arxiv.org/abs/1702.08434v1
[15] Li KM, Li EC. Skin lesion analysis towards melanoma detection via end-to-end deep learning of convolutional neural networks. CoRR, abs/1807.08332, 2018.
[16] Nylund A. To be, or not to be Melanoma: Convolutional neural networks in skin lesion classification. Ph.D. dissertation, School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden, 2016. [Online]. Available: http://kth.diva-portal.org/smash/get/diva2:950147/FULLTEXT01.pdf
[17] Esteva A, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature, vol. 542, no. 7639, pp. 115–118, 2017.
[18] Albahar MA. Skin lesion classification using convolutional neural network with novel regularizer. IEEE Access. 2019;7:38306–38313.
[19] Hosny KM, Kassem MA, Foaud MM. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE. 2019;14(5):e0217293. https://doi.org/10.1371/journal.pone.0217293
[20] Rezvantalab A, Safigholi H, Karimijeshni S. Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms. arXiv preprint arXiv:1810.10348, 2018.
[21] Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018 Jul;138(7):1529-1538.
[22] Fujisawa Y, Otomo Y, Ogata Y, Nakamura Y, Fujita R, Ishitsuka Y, Watanabe R, Okiyama N, Ohara K, Fujimoto M. Deep-learning-based, computer-aided classifier developed with a small dataset of clinical images surpasses board-certified dermatologists in skin tumour diagnosis. British Journal of Dermatology. 2019;180(2):373-381.
[23] Yu Z, Ni D, Chen S, Qin J, Li S, Wang T, Lei B. Hybrid dermoscopy image classification framework based on deep convolutional neural network and Fisher vector. In: Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE; 2017. p. 301–304.
[24] Du-Harpur X, Arthurs C, Ganier C, Woolf R, Laftah Z, Lakhan M, Salam A, et al. Clinically-relevant vulnerabilities of deep machine learning systems for skin cancer diagnosis. The Journal of Investigative Dermatology. 2020.
[25] Goodfellow IJ, Shlens J, Szegedy C. Explaining and Harnessing Adversarial Examples. arXiv [stat.ML]. 2014. Available from: http://arxiv.org/abs/1412.6572
[26] Phillips M, Marsden H, Jaffe W, Matin RN, Wali GN, Greenhalgh J, et al. Assessment of Accuracy of an Artificial Intelligence Algorithm to Detect Melanoma in Images of Skin Lesions. JAMA Netw Open. 2019;2(10):e1913436.
[27] Choi YE, Kwak JW, Park JW. Nanotechnology for early cancer detection. Sensors. 2010;10(1):428–455.
[28] National Cancer Institute. Cancer Statistics; 2017. Accessed: 2018-04-22. Available from: https://www.cancer.gov/aboutcancer/understanding/statistics.
[29] Waltz E. Computer Diagnoses Skin Cancers: Deep learning algorithm identifies skin cancers as accurately as dermatologists. IEEE Spectrum; 2017. Accessed: 2018-03-03. Available from: https://spectrum.ieee.org/the-humanos/biomedical/diagnostics/computer-diagnosesskin-cancers.
[30] Chan B. Solar lentigo; 2014. Accessed: 2018-04-22. Available from: https://www.dermnetnz.org/topics/solar-lentigo/.
[31] Nylund A. To be, or not to be Melanoma: Convolutional neural networks in skin lesion classification; 2016.
[32] Ridell P, Spett H, Herman P, Ekeberg Ö. Training Set Size for Skin Cancer Classification Using Google's Inception v3; 2017.
[33] Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. March 2016.
[34] Bogley W, Robson R. Finding the Largest Inscribed Rectangle; 2018. Accessed: 2018-04-28. Available from: https://oregonstate.edu/instruct/mth251/cq/Stage8/Lesson/rectangle.html.
[35] Dermaamin; 2010. Accessed: 2018-04-17. Available from: http://www.dermaamin.com/site/.
[36] Dermatology Atlas; 2018. Accessed: 2018-04-17. Available from: http://www.atlasdermatologico.com.br/.
[37] Kim P. MATLAB Deep Learning: With Machine Learning, Neural Networks and Artificial Intelligence; 2017.
[38] Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. Communications of the ACM. 2017 May;60(6):84–90.
[39] Hameed N, Ruskin A, Hassan KA, Hossain M. A comprehensive survey on image-based computer aided diagnosis systems for skin cancer. In: Software, Knowledge, Information Management & Applications (SKIMA), 2016 10th International Conference on. IEEE; 2016. p. 205–214.
[40] Brinker TJ, Schilling B. Comparing artificial intelligence algorithms to 157 German dermatologists: the melanoma classification benchmark. European Journal of Cancer. February 2019. Available from: https://www.researchgate.net/publication/331287430
[41] Manne R. Classification of skin cancer using deep learning, convolutional neural networks: opportunities and vulnerabilities, a systematic review. International Journal for Modern Trends in Science and Technology. November 2020. Available from: https://www.researchgate.net/publication/346641510
[42] Khan MM, Hossain J, Islam K, Ovi NS, Shovon MNA, et al. Design and Study of a mmWave Wearable Textile Based Compact Antenna for Healthcare Application. International Journal of Antennas and Propagation. 2021; Article ID 6506128, pp. 1-17.
[43] Ifraz GM, Rashid MH, Tazin T, Khan MM. Comparative Analysis for Prediction of Kidney Disease Using Intelligent Machine Learning Methods. Computational and Mathematical Methods in Medicine. 2021; Article ID 6141470. https://doi.org/10.1155/2021/6141470
[44] Hasan MDK, Ahmed S, Abdullah ZME, Khan MM, Masud M, et al. Deep Learning Approaches for Detecting Pneumonia in COVID-19 Patients by Analyzing Chest X-Ray Images. Mathematical Problems in Engineering. 2021; Article ID 9929274, pp. 1-8.
[45] Faruk O, Ahmed E, Ahmed S, Tabassum A, Tazin T, Khan MM. A Novel and Robust Approach to Detect Tuberculosis Using Transfer Learning. Journal of Healthcare Engineering. Hindawi; 2021.