New Approach Generative AI Melanoma Data Fusion For Classification in Dermoscopic Images With Large Language Model
† Universidade Federal do Cariri, Juazeiro do Norte, CE - Brazil
∗ Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Fortaleza, CE - Brazil
‡ Universidade Federal do Ceará, Fortaleza, CE - Brazil
§ Laboratório de Inovação em Sistemas de Inteligência Artificial (LISIA), Juazeiro do Norte / Fortaleza, CE
Email: [email protected]
these approaches, achieving high accuracy on specific datasets [7].

Bi et al. [8] developed an automatic skin lesion segmentation method based on class-specific deep learning with gradual integration grounded in probability (DCL-PSI). The model aims to enhance lesion segmentation by integrating probabilistic steps, achieving an accuracy rate of 95.30%.

Vasconcelos et al. [9] proposed an automatic segmentation method using geodesic active contour (MGAC) based on mathematical morphology, highlighting its low computational cost. On the PH2 dataset, the method achieved an accuracy of 94.59%, demonstrating its reliability in skin lesion segmentation. It is important to note that, due to the approximate approach of the differential equation, the model is sensitive to variations in lighter lesion areas. Similarly, De et al. [10] proposed a method for skin lesion segmentation using digital image processing, achieving an accuracy of 94.25%.
Al-Masni et al. [11] developed a diagnostic aid framework. The authors used the ISIC dataset from different years, specifically the 2016, 2017, and 2018 versions, for training and evaluation, achieving accuracies above 80%. However, refining the segmentation could potentially improve the delineation of the segmented lesion, thereby enhancing system performance. Similarly, Yilmaz et al. [12] developed an approach for melanoma classification in dermoscopic images, achieving an accuracy of 82.40%.
With advancements in models and the potential for developing pre-diagnostic approaches based on LLMs and clinical workflow knowledge, Wang et al. [13] developed a consultation method known as multi-specialist agent consultation, which performs adaptive fusion of the probabilistic distributions that the agents assign to potential diseases. The approach requires fewer parameter updates and less training time, suggesting possible pathologies based on the reported symptoms.
III. MATERIALS AND METHODS

A. Datasets

Two datasets were used for this study: PH2 and ISIC 2017. The PH2 dataset consists of 200 dermoscopic images provided by the dermatology service of Hospital Pedro Hispano in Portugal [14].

The ISIC 2017 dataset is part of the International Skin Imaging Collaboration Archive. It was used in the 2017 melanoma detection challenge and includes clinical information about the patient and the lesion [15].
B. YOLO

YOLO (You Only Look Once) is a high-precision and computationally efficient object detection framework, transforming this task into a single regression problem. YOLO is capable of producing bounding boxes, detecting objects of interest in the scene [16].

The derivation of the loss function is crucial for the operation of YOLO, as it measures the error between YOLO's predictions and the true values for each prediction performed [17]. This function is expressed as:

L = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]
  + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right]
  + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} (C_i - \hat{C}_i)^2
  + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} (C_i - \hat{C}_i)^2
  + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes} (p_i(c) - \hat{p}_i(c))^2    (1)

where λ_coord and λ_noobj are hyperparameters that control the relative importance of the different error terms; 1_ij^obj and 1_ij^noobj are binary indicators that represent whether anchor box j in cell i contains an object or not; x_i, y_i, w_i, h_i are the predicted coordinates and dimensions of the bounding boxes; x̂_i, ŷ_i, ŵ_i, ĥ_i are the true coordinates and dimensions of the bounding boxes; C_i is the predicted confidence score and Ĉ_i the true confidence score; and p_i(c) is the predicted class probability, with p̂_i(c) the true class probability [17].
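For illustration, a minimal NumPy sketch of Eq. (1) is given below. The grid size S, number of boxes B, array layout, and λ values are assumptions made for the example; recent YOLOv8/v9 releases replace this classic formulation with IoU-based and distribution-focal losses, so this is a pedagogical reading of the equation, not the framework's actual implementation.

import numpy as np

def yolo_loss(pred, true, lambda_coord=5.0, lambda_noobj=0.5):
    """Classic YOLO loss of Eq. (1) for a single image.

    pred/true are dicts of arrays over an S*S grid with B boxes per cell:
      xy   -- (S*S, B, 2) box centres
      wh   -- (S*S, B, 2) box widths and heights
      conf -- (S*S, B)    confidence scores
      prob -- (S*S, C)    class probabilities
    true additionally holds obj -- (S*S, B), the indicator 1_ij^obj.
    """
    obj = true["obj"]                # 1_ij^obj
    noobj = 1.0 - obj                # 1_ij^noobj

    # Localisation: squared error on centres, sqrt-scaled error on sizes.
    loc = lambda_coord * (obj[..., None] * (pred["xy"] - true["xy"]) ** 2).sum()
    loc += lambda_coord * (obj[..., None] *
                           (np.sqrt(pred["wh"]) - np.sqrt(true["wh"])) ** 2).sum()

    # Confidence: object boxes weighted 1, empty boxes weighted lambda_noobj.
    conf_err = (pred["conf"] - true["conf"]) ** 2
    conf = (obj * conf_err).sum() + lambda_noobj * (noobj * conf_err).sum()

    # Classification: squared error on class probabilities, object cells only.
    cell_has_obj = obj.max(axis=1)   # 1_i^obj
    cls = (cell_has_obj[:, None] * (pred["prob"] - true["prob"]) ** 2).sum()

    return loc + conf + cls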
C. SAM

SAM (Segment Anything Model) is a segmentation framework developed by Meta AI in 2023, capable of identifying objects in an image with high precision and efficiency, using a pipeline of networks combined with a set of processing techniques [18].

The segmentation loss function of SAM, which measures the error between the predicted segmentation mask and the true binary mask, is expressed as:

L_{seg} = \lambda_{IoU} \left( 1 - \frac{\sum_{i=1}^{N} y_i \hat{y}_i}{\sum_{i=1}^{N} \left( y_i + \hat{y}_i - y_i \hat{y}_i \right)} \right) + \lambda_{BCE} \left( -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right] \right)    (2)

where λ_IoU and λ_BCE are hyperparameters that control the relative importance of the different error terms; y_i is the value of the true mask at pixel i, provided by the specialist doctor; ŷ_i is the value of the predicted mask at pixel i; and N is the total number of pixels in the image [18].
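A minimal sketch of Eq. (2) follows, assuming the binary cross-entropy term takes its standard pixel-averaged form and that both λ weights default to 1; SAM's published training objective differs in detail, so this illustrates the stated loss only.

import numpy as np

def sam_segmentation_loss(y_true, y_pred, lambda_iou=1.0, lambda_bce=1.0, eps=1e-7):
    """Soft-IoU plus binary cross-entropy loss of Eq. (2).

    y_true: binary ground-truth mask (H, W) from the specialist doctor.
    y_pred: predicted mask probabilities in [0, 1], same shape.
    """
    y_true = y_true.astype(np.float64).ravel()
    y_pred = np.clip(y_pred.astype(np.float64).ravel(), eps, 1.0 - eps)

    # Soft IoU term: 1 - intersection / union over all N pixels.
    inter = (y_true * y_pred).sum()
    union = (y_true + y_pred - y_true * y_pred).sum()
    iou_loss = 1.0 - inter / (union + eps)

    # Pixel-averaged binary cross-entropy term.
    bce = -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

    return lambda_iou * iou_loss + lambda_bce * bce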
D. Extractors and classifiers

The study addressed different feature extractors and classifiers with the aim of identifying the best extractor/classifier combination for the proposed model. The extractors considered were LBP (Local Binary Patterns), VGG (Visual Geometry Group), and DenseNet [19]. The classifiers considered were KNN (K-Nearest Neighbors), MLP (Multilayer Perceptron), Naive Bayes, Random Forest, and SVM (Support Vector Machine) [20].
[Figure content: pipeline overview. Step 1 - Melanoma Detection (YOLO training on the dataset). Step 2 - Melanoma Segmentation (SAM backbone network, prompt encoder, mask decoder; data fusion of the YOLO and SAM masks). Step 3 - Classification (sex, approximate age, anatomical site, clinical size in mm, diagnosis confirmation type, benign/malignant). Step 4 - Pre-Diagnosis with LLM; example output: "The patient, a female approximately 70 years old, has a lesion located on the posterior torso with a clinical diameter of 4.00 mm. The lesion was diagnosed as malignant melanoma, confirmed by histopathology."]
Fig. 1. Proposed approach. Step 1 - melanoma detection, involving training for bounding-box detection and a detection mask. Step 2 - melanoma segmentation using the SAM framework, generating a new detection mask from the YOLO mask. Step 3 - melanoma classification through the developed pipeline, including grid search, transfer learning, and cross-validation. Step 4 - pre-diagnosis with the LLM through interpretation of the attributes extracted by the pipeline.

Fig. 2. Results from the different stages of the proposed methodology for two input images. In the detection stage, the melanoma region is marked with a bounding box and a binary mask. During segmentation, the edges of the binary mask are refined through combination with SAM. Classification covers malignancy and other medical attributes. Pre-diagnosis with the LLM gathers the information obtained from classification and presents it as a textual pre-diagnosis.
E. LLM Model

Large Language Models (LLMs) are deep neural networks trained on large volumes of textual data to perform a wide range of natural language processing (NLP) tasks. LLMs employ advanced architectures and are designed for understanding, generating, assimilating, and abstracting text. Words are processed as tokens, which are interconnected to form various contexts [21].

Training LLMs involves minimizing the cross-entropy loss function, which measures the difference between the probability distribution predicted by the model (ŷ) and the actual probability distribution (y):

L_{cross-entropy} = -\sum_{i=1}^{N} y_i \log(\hat{y}_i)    (3)

where N is the total number of classes, y_i is the true probability of class i, and ŷ_i is the probability predicted by the model for class i [21].
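As a concrete illustration, Eq. (3) reduces to a few lines of NumPy; the example distributions below are arbitrary.

import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Eq. (3): -sum_i y_i * log(y_hat_i) over N classes."""
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))

# A confident, correct prediction yields a small loss:
print(cross_entropy(np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.05, 0.05])))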
IV. METHODOLOGY

Figure 1 presents the proposed methodology for this study. The method is divided into four main Steps, with Steps 2 and 3 containing their respective Stages. In Step 1, initial melanoma detection is performed using the YOLO framework. Step 2 involves melanoma segmentation, combining the YOLO framework with SAM. In Step 3, the region of interest is classified, with medical attributes extracted from the image, which ultimately leads to a descriptive pre-diagnosis in Step 4, assisted by the LLM model.

The PH2 database has highly defined GTs, which are used in training the detection and segmentation models. The ISIC 2017 database has clinical information related to the patient, which is used in the attribute estimation process in the classification stage.

In Step 1 of Figure 1, the dermoscopic image is input into the model and processed using the YOLO framework, previously trained to detect melanoma in two stages: in the first stage, YOLO detects the melanoma region and outlines it with a bounding box; in the second stage, the model classifies the pixels within the bounding-box region to form the corresponding binary mask of the melanoma. The YOLO versions used in this approach were YOLOv8l, YOLOv8m, YOLOv8n, YOLOv8s, YOLOv8x, YOLOv9c, and YOLOv9e.

In Step 2 of Figure 1, the binary detection mask generated by YOLO is used to initialize the model based on the SAM framework, which generates its own melanoma-specific mask. The two binary masks from YOLO and SAM are then combined through an intersection process, resulting in a final segmentation mask with improved edges. This fine-tuning of SAM on YOLO's initialization enhances the performance of the developed segmentation models, leading to higher segmentation accuracy.
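A minimal sketch of this Step 2 fusion is shown below, assuming the YOLO and SAM masks are already available as boolean arrays of the same size; the connected-component and hole-filling cleanup is an illustrative post-processing choice, not a step specified here.

import numpy as np
from scipy import ndimage

def fuse_masks(yolo_mask, sam_mask):
    """Intersect the YOLO and SAM binary masks (Step 2 data fusion).

    yolo_mask, sam_mask: boolean arrays (H, W) from the detection and
    segmentation stages, respectively.
    """
    fused = np.logical_and(yolo_mask, sam_mask)

    # Keep only the largest connected component as the lesion region.
    labels, n = ndimage.label(fused)
    if n > 1:
        sizes = ndimage.sum(fused, labels, range(1, n + 1))
        fused = labels == (np.argmax(sizes) + 1)

    # Fill any interior holes left by the intersection.
    return ndimage.binary_fill_holes(fused)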
In Step 3 of Figure 1, the process involves classifying the segmented melanoma from Step 2, from which data related to malignancy, melanoma size, patient age, and various other attributes can be obtained. To achieve this, the proposed model combines classical and deep feature extractors with machine learning classifiers, aiming to find the best combination for the proposed approach. Additionally, the grid search method is used to optimize the hyperparameters of some classifiers. The classifiers used include KNN, MLP, Naive Bayes, Random Forest, linear SVM, polynomial SVM, and RBF SVM. Thus, Step 3 of the proposed model is a pipeline for image feature extraction and classification using transfer learning.

V. RESULTS

A. Melanoma segmentation

Table I presents the segmentation results for the models generated from the combination of the YOLO variants with the SAM framework.

TABLE I
SEGMENTATION RESULTS FOR THE DIFFERENT MODELS GENERATED FROM THE COMBINATION OF VARIOUS YOLO MODELS WITH THE SAM FRAMEWORK.

Detection Model | ACC(%) | JAC(%) | DICE(%) | SEN(%) | SPE(%)
SAM + YOLOv8n | 99.13 ± 0.40 | 93.81 ± 1.48 | 96.80 ± 0.79 | 95.89 ± 2.21 | 99.57 ± 0.63
SAM + YOLOv8m | 98.98 ± 0.50 | 92.91 ± 2.16 | 96.31 ± 1.19 | 96.14 ± 2.30 | 99.35 ± 0.83
SAM + YOLOv8l | 98.91 ± 0.52 | 92.36 ± 2.19 | 96.02 ± 1.21 | 95.95 ± 2.36 | 99.29 ± 0.84
SAM + YOLOv8s | 99.03 ± 0.43 | 93.17 ± 2.02 | 96.45 ± 1.11 | 96.15 ± 2.37 | 99.42 ± 0.76
SAM + YOLOv8x | 98.91 ± 0.54 | 92.35 ± 2.52 | 96.00 ± 1.40 | 95.25 ± 2.82 | 99.39 ± 0.89
SAM + YOLOv9c | 98.78 ± 0.64 | 91.59 ± 3.31 | 95.58 ± 1.88 | 95.33 ± 3.01 | 99.27 ± 0.95
SAM + YOLOv9e | 98.65 ± 0.81 | 90.55 ± 4.18 | 94.99 ± 2.43 | 94.55 ± 3.40 | 99.17 ± 1.23
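The metrics in Table I follow the standard pixel-level confusion-matrix definitions used for medical image segmentation [22]; a sketch of their computation from a ground-truth/predicted mask pair is given below (boolean masks assumed, degenerate cases ignored for brevity).

import numpy as np

def segmentation_metrics(gt, pred):
    """ACC, JAC, DICE, SEN, and SPE (in %) from two boolean masks (H, W)."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()

    return {
        "ACC":  100.0 * (tp + tn) / (tp + tn + fp + fn),
        "JAC":  100.0 * tp / (tp + fp + fn),        # Jaccard index (IoU)
        "DICE": 100.0 * 2 * tp / (2 * tp + fp + fn),
        "SEN":  100.0 * tp / (tp + fn),             # sensitivity (recall)
        "SPE":  100.0 * tn / (tn + fp),             # specificity
    }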
[Fig. 3: bar chart of ACC, JAC, DICE, SEN, and SPE for each SAM + YOLO combination in Table I.]

The pipeline is also used to obtain the remaining labels, which are: diagnosis confirm type, clin size long diam mm, anatom site general, and age approx. The class "diagnosis confirm type" refers to the method by which the lesion diagnosis was confirmed.
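To make Step 4 concrete, the sketch below assembles the classified attributes into a textual prompt for the LLM. The underscore attribute names mirror the ISIC 2017 labels above; the prompt wording and the commented generate() call are assumptions, since the exact LLM interface is not specified here.

# Step 4 sketch: building the textual pre-diagnosis prompt from the
# attributes estimated in Step 3 (values here reproduce the Fig. 2 example).
attributes = {
    "sex": "female",
    "age_approx": 70,
    "anatom_site_general": "posterior torso",
    "clin_size_long_diam_mm": 4.00,
    "benign_malignant": "malignant melanoma",
    "diagnosis_confirm_type": "histopathology",
}

prompt = (
    "Write a short clinical pre-diagnosis for a dermoscopic lesion with "
    "the following attributes: "
    + "; ".join(f"{key} = {value}" for key, value in attributes.items())
)

# pre_diagnosis = llm.generate(prompt)  # hypothetical LLM call; the model
#                                       # would return text like the Fig. 2 output
print(prompt)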
TABLE III
CLASSIFICATION RESULTS FOR THE COMBINATION OF FEATURE EXTRACTORS AND CLASSIFIERS ADDRESSED IN THIS STUDY.

Feature Extractor | Classifier | Accuracy | Precision | Recall | F1-Score | Matthews | Train Time | Predict Time
LBP | KNN | 85.0 ± 1.0 | 62.0 ± 4.0 | 52.0 ± 1.0 | 50.0 ± 1.0 | 9.0 ± 3.0 | 0.05 ± 0.01 | 0.01 ± 0.01
LBP | MLP | 86.0 ± 1.0 | 74.0 ± 23.0 | 50.0 ± 1.0 | 47.0 ± 1.0 | 7.0 ± 6.0 | 3.64 ± 0.83 | 0.01 ± 0.01
LBP | NaiveBayes | 77.0 ± 2.0 | 59.0 ± 2.0 | 63.0 ± 2.0 | 60.0 ± 2.0 | 22.0 ± 4.0 | 0.01 ± 0.01 | 0.01 ± 0.01
LBP | RandomForest | 86.0 ± 1.0 | 68.0 ± 6.0 | 51.0 ± 1.0 | 49.0 ± 2.0 | 9.0 ± 4.0 | 45.18 ± 0.71 | 0.01 ± 0.01
LBP | SVM Linear | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | 0.0 ± 0.0 | 0.64 ± 0.12 | 0.01 ± 0.01
LBP | SVM Polynomial | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | 0.0 ± 0.0 | 3.82 ± 0.18 | 0.01 ± 0.01
LBP | SVM RBF | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | 0.0 ± 0.0 | 1.94 ± 0.15 | 0.02 ± 0.01
DenseNet121 | KNN | 86.0 ± 1.0 | 72.0 ± 9.0 | 53.0 ± 1.0 | 52.0 ± 2.0 | 15.0 ± 5.0 | 0.35 ± 0.02 | 0.02 ± 0.01
DenseNet121 | MLP | 83.0 ± 1.0 | 62.0 ± 3.0 | 57.0 ± 1.0 | 58.0 ± 1.0 | 19.0 ± 4.0 | 11.86 ± 2.16 | 0.01 ± 0.01
DenseNet121 | NaiveBayes | 71.0 ± 2.0 | 58.0 ± 1.0 | 63.0 ± 2.0 | 57.0 ± 2.0 | 20.0 ± 3.0 | 0.04 ± 0.01 | 0.01 ± 0.01
DenseNet121 | RandomForest | 85.0 ± 1.0 | 62.0 ± 19.0 | 50.0 ± 0.0 | 47.0 ± 1.0 | 4.0 ± 4.0 | 483.26 ± 13.20 | 0.01 ± 0.01
DenseNet121 | SVM Linear | 84.0 ± 1.0 | 64.0 ± 3.0 | 57.0 ± 2.0 | 58.0 ± 2.0 | 19.0 ± 5.0 | 399.82 ± 85.72 | 0.05 ± 0.01
DenseNet121 | SVM Polynomial | 85.0 ± 1.0 | 48.0 ± 11.0 | 50.0 ± 0.0 | 47.0 ± 1.0 | 2.0 ± 4.0 | 15.15 ± 2.02 | 0.08 ± 0.01
DenseNet121 | SVM RBF | 85.0 ± 1.0 | 51.0 ± 11.0 | 51.0 ± 1.0 | 47.0 ± 2.0 | 3.0 ± 6.0 | 29.26 ± 2.99 | 0.30 ± 0.13
VGG16 | KNN | 85.0 ± 1.0 | 56.0 ± 7.0 | 51.0 ± 1.0 | 49.0 ± 2.0 | 4.0 ± 5.0 | 0.18 ± 0.01 | 0.01 ± 0.01
VGG16 | MLP | 81.0 ± 2.0 | 60.0 ± 3.0 | 57.0 ± 3.0 | 58.0 ± 3.0 | 17.0 ± 6.0 | 26.86 ± 2.59 | 0.01 ± 0.01
VGG16 | NaiveBayes | 58.0 ± 5.0 | 56.0 ± 2.0 | 62.0 ± 3.0 | 51.0 ± 4.0 | 17.0 ± 4.0 | 0.02 ± 0.01 | 0.01 ± 0.01
VGG16 | RandomForest | 86.0 ± 1.0 | 65.0 ± 22.0 | 50.0 ± 0.0 | 47.0 ± 1.0 | 5.0 ± 6.0 | 200.23 ± 5.17 | 0.01 ± 0.01
VGG16 | SVM Linear | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | -0.0 ± 1.0 | 17.18 ± 1.62 | 0.03 ± 0.01
VGG16 | SVM Polynomial | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | 0.0 ± 0.0 | 7.34 ± 0.75 | 0.04 ± 0.01
VGG16 | SVM RBF | 85.0 ± 1.0 | 43.0 ± 1.0 | 50.0 ± 0.0 | 46.0 ± 0.0 | 0.0 ± 0.0 | 8.90 ± 0.30 | 0.07 ± 0.01
B. Melanoma classification

Table III presents the classification results for the pipeline developed to classify melanoma as benign or malignant. The primary metric used to select the best combination was accuracy. The database used for classification was ISIC 2017.

As observed in Table III, the best combination for classifying the ISIC 2017 dataset used LBP as the feature extractor and MLP as the classifier. A maximum of 1000 iterations was used for the automatic adjustment of the MLP's internal hyperparameters, which was excluded from the grid search applied to the other classifiers.

The accuracy obtained by the best combination, 86.0% ± 1.0%, indicates a satisfactory ability to correctly classify labels. The precision of 74.0% indicates a satisfactory result in the model's ability to identify true positives within the melanoma region, compared to the other models.
LBP as a feature extractor may have outperformed the CNN extractors due to the texture variability among melanomas, as homogeneous or heterogeneous coloration in a melanoma is indicative of its malignancy. MLP, overall, excels in its ability to handle non-linear problems. Coupled with the lower complexity of the features extracted by LBP, which are significant for malignancy labeling, this combination resulted in good performance.
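A minimal sketch of this winning combination is given below, using scikit-image's local_binary_pattern and scikit-learn's MLPClassifier. The LBP settings (P = 8, R = 1, uniform patterns) and the feature layout are illustrative assumptions; only the MLP's 1000-iteration cap is fixed by the text above.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray_image, P=8, R=1):
    """Summarise uniform LBP codes as a normalised histogram feature."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# X: stacked LBP histograms of the segmented lesions; y: benign/malignant.
# X = np.stack([lbp_histogram(img) for img in lesion_images]); y = labels
clf = MLPClassifier(max_iter=1000)
# scores = cross_val_score(clf, X, y, cv=5)  # cross-validation, as in Step 3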
TABLE IV
CLASSIFICATION RESULTS OF THE OTHER LABELED DATA FROM THE ISIC 2017 DATASET FOR THE BEST PROPOSED EXTRACTION/CLASSIFICATION COMBINATION.

Feature Extractor | Classifier | Label | Accuracy
LBP | MLP | diagnosis confirm type | 0.90 ± 0.01
LBP | MLP | benign malignant | 86.0 ± 1.0
LBP | MLP | clin size long diam mm | 0.79 ± 0.03
LBP | MLP | anatom site general | 0.41 ± 0.02
LBP | MLP | age approx | 0.12 ± 0.01

Table IV presents the average classification accuracies for the other classes in the ISIC 2017 dataset for melanoma images, using the best extraction/classification combination, LBP + MLP. Overall, the pipeline was effective in classifying diagnosis confirmation type, malignancy, and melanoma diameter in millimeters.

For the lesion location on the patient's body and the age classes, the system showed low accuracy for a simple reason: to improve data extraction about the melanoma, extraction and classification were performed on the segmentation of the melanoma in the dermoscopic image. These two classes, which had lower accuracies, require context from the image beyond the melanoma to be classified more accurately, such as the texture of the patient's skin.

C. Comparison with the state of the art

Table V presents the comparison between the proposed model and methods found in the literature that also used the PH2 dataset to segment melanoma. The segmentation metrics compared were Accuracy, Dice coefficient, and Specificity, aiming to evaluate the models' performance in terms of pixel accuracy in segmentation, similarity to the ground truth (GT), and the rate of correctly classified background (true negative) pixels.

TABLE V
COMPARISON WITH THE STATE OF THE ART FOR MELANOMA SEGMENTATION ON THE PH2 DATASET.

Methods | ACC(%) | DICE(%) | SPE(%)
Proposed Method (SAM + YOLOv8n) | 99.13 ± 0.40 | 96.80 ± 0.79 | 99.57 ± 0.63
Deep Class (2019) [8] | 95.30 | 92.10 | 94.52
Geodesic (2019) [9] | 94.59 | 92.17 | 97.99
FLog Parzen (2020) [10] | 94.25 | 92.49 | 93.21

The proposed method achieved state-of-the-art performance in all three adopted metrics, with accuracy 3.83% higher (99.13% - 95.30%) than the second-best model, Deep Class [8]. The DICE coefficient of the proposed model was 4.70% higher (96.80% - 92.10%). Finally, specificity was 5.05% higher (99.57% - 94.52%).

Table VI presents the comparison with the state of the art for melanoma classification using the ISIC 2017 dataset. The proposed model achieved state-of-the-art performance in melanoma classification, with accuracy 4.71% higher (86.0% - 81.29%) than the method proposed by Al-Masni et al. [11], which used Inception-v3 for feature extraction and melanoma classification.

Figure 4 presents a graphical illustration of the results compared in Table VI.
TABLE VI
COMPARISON WITH THE STATE OF THE ART FOR MELANOMA CLASSIFICATION ON THE ISIC 2017 DATASET.

Author | Feature Extractor | Classifier | Accuracy(%)
Proposed | LBP | MLP | 86.0 ± 1.0
Al-Masni et al. [11] | Inception-v3 | Inception-v3 | 81.29
Al-Masni et al. [11] | ResNet-50 | ResNet-50 | 81.57
Al-Masni et al. [11] | Inception-ResNet-v2 | Inception-ResNet-v2 | 81.34
Al-Masni et al. [11] | DenseNet-201 | DenseNet-201 | 73.44
Yilmaz et al. [12] | NASNetMobile - 16 | NASNetMobile - 16 | 82.00
Yilmaz et al. [12] | MobileNetV2 - 16 | MobileNetV2 - 16 | 81.45
Yilmaz et al. [12] | MobileNet - 32 | MobileNet - 32 | 80.73
Budhiman et al. [23] | ResNet 50 | ResNet 50 | 78.50
Budhiman et al. [23] | ResNet 25 | ResNet 25 | 80.70
Budhiman et al. [23] | ResNet 7 | ResNet 7 | 82.40
Fig. 4. Graphical comparison of the accuracy of the proposed method and the state-of-the-art methods (bar chart of the accuracies listed in Table VI).
VI. CONCLUSION

This study developed an innovative approach that combines various aspects of artificial intelligence for the detection, segmentation, and classification of melanoma in dermoscopic images. Detection and segmentation, built on the YOLO and SAM architecture models, achieved detection and segmentation accuracy above 99%, surpassing the state of the art. For classification, a pipeline of feature extraction and classification was created using grid search, transfer learning, and cross-validation to find the best extractor-classifier combination for the proposed approach, achieving a classification accuracy of 86% as its best result with LBP-MLP.

The proposed model emerged from the experimental selection of a set of methods, varying the models and methods for detection, segmentation, and classification. The best combination was compared with various studies found in the literature to validate the results obtained on the adopted datasets, surpassing several of those studies.

As future work, we aim to apply the proposed methodology to other datasets, including different problems such as brain tumors and hemorrhagic stroke. We plan to propose new CNN models via transfer learning, with different classifiers, custom CNN architectures, and other LLM models for the textual pre-diagnostic prompt.
REFERENCES

[1] K. Saginala, A. Barsouk, J. S. Aluru, P. Rawla, and A. Barsouk, "Epidemiology of melanoma," Medical Sciences, vol. 9, no. 4, p. 63, 2021.
[2] M. Arnold, D. Singh, M. Laversanne, J. Vignat, S. Vaccarella, F. Meheus, A. E. Cust, E. De Vries, D. C. Whiteman, and F. Bray, "Global burden of cutaneous melanoma in 2020 and projections to 2040," JAMA Dermatology, vol. 158, no. 5, pp. 495–503, 2022.
[3] A. F. Duarte, B. Sousa-Pinto, L. F. Azevedo, A. M. Barros, S. Puig, J. Malvehy, E. Haneke, and O. Correia, "Clinical ABCDE rule for early melanoma detection," European Journal of Dermatology, vol. 31, no. 6, pp. 771–778, 2021.
[4] C. Ring, N. Cox, and J. B. Lee, "Dermatoscopy," Clinics in Dermatology, vol. 39, no. 4, pp. 635–642, 2021.
[5] H. Xiao, L. Li, Q. Liu, X. Zhu, and Q. Zhang, "Transformers in medical image segmentation: A review," Biomedical Signal Processing and Control, vol. 84, p. 104791, 2023.
[6] T. Mazhar, I. Haq, A. Ditta, S. A. H. Mohsan, F. Rehman, I. Zafar, J. A. Gansau, and L. P. W. Goh, "The role of machine learning and deep learning approaches for the detection of skin cancer," in Healthcare, vol. 11, no. 3. MDPI, 2023, p. 415.
[7] A. K. Tiwari, M. K. Mishra, A. R. Panda, and B. Panda, "Survey on computer-aided automated melanoma detection," Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 11, no. 7, p. 2300257, 2024.
[8] L. Bi, J. Kim, E. Ahn, A. Kumar, D. Feng, and M. Fulham, "Step-wise integration of deep class-specific learning for dermoscopic image segmentation," Pattern Recognition, vol. 85, pp. 78–89, 2019.
[9] F. F. X. Vasconcelos, A. G. Medeiros, S. A. Peixoto, and P. P. Rebouças Filho, "Automatic skin lesions segmentation based on a new morphological approach via geodesic active contour," Cognitive Systems Research, vol. 55, pp. 44–59, 2019.
[10] …, Peixoto, "Level set approach based on Parzen window and floor of log for edge computing object segmentation in digital images," Applied Soft Computing, vol. 105, p. 107273, 2021.
[11] M. A. Al-Masni, D.-H. Kim, and T.-S. Kim, "Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification," Computer Methods and Programs in Biomedicine, vol. 190, p. 105351, 2020.
[12] A. Yilmaz, M. Kalebasi, Y. Samoylenko, M. E. Guvenilir, and H. Uvet, "Benchmarking of lightweight deep learning architectures for skin cancer classification using ISIC 2017 dataset," arXiv preprint arXiv:2110.12270, 2021.
[13] H. Wang, S. Zhao, Z. Qiang, N. Xi, B. Qin, and T. Liu, "Beyond direct diagnosis: LLM-based multi-specialist agent consultation for automatic diagnosis," arXiv preprint arXiv:2401.16107, 2024.
[14] T. Mendonça, M. Celebi, T. Mendonca, and J. Marques, "PH2: A public database for the analysis of dermoscopic images," Dermoscopy Image Analysis, vol. 2, 2015.
[15] M. Berseth, "ISIC 2017 - skin lesion analysis towards melanoma detection," arXiv preprint arXiv:1703.00523, 2017.
[16] P. Jiang, D. Ergu, F. Liu, Y. Cai, and B. Ma, "A review of YOLO algorithm developments," Procedia Computer Science, vol. 199, pp. 1066–1073, 2022.
[17] M. Hussain, "YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection," Machines, vol. 11, no. 7, p. 677, 2023.
[18] R. Deng, C. Cui, Q. Liu, T. Yao, L. W. Remedios, S. Bao, B. A. Landman, L. E. Wheless, L. A. Coburn, K. T. Wilson et al., "Segment anything model (SAM) for digital pathology: Assess zero-shot segmentation on whole slide imaging," arXiv preprint arXiv:2304.04155, 2023.
[19] Y. Xu, M. A. Dos Santos, L. F. F. Souza, A. G. Marques, L. Zhang, J. J. da Costa Nascimento, V. H. C. de Albuquerque, and P. P. Rebouças Filho, "New fully automatic approach for tissue identification in histopathological examinations using transfer learning," IET Image Processing, vol. 16, no. 11, pp. 2875–2889, 2022.
[20] M. J. Nayeem, S. Rana, F. Alam, and M. A. Rahman, "Prediction of hepatitis disease using K-nearest neighbors, naive Bayes, support vector machine, multi-layer perceptron and random forest," in 2021 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD). IEEE, 2021, pp. 280–284.
[21] J. Wang, Z. Yang, Z. Yao, and H. Yu, "JMLR: Joint medical LLM and retrieval training for enhancing reasoning and professional question answering capability," arXiv preprint arXiv:2402.17887, 2024.
[22] D. Müller, I. Soto-Rey, and F. Kramer, "Towards a guideline for evaluation metrics in medical image segmentation," BMC Research Notes, vol. 15, no. 1, p. 210, 2022.
[23] A. Budhiman, S. Suyanto, and A. Arifianto, "Melanoma cancer classification using ResNet with data augmentation," in 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI). IEEE, 2019, pp. 17–20.