
their goals through collaborative learning, professional grooming and a healthy environment based on co-curricular and extracurricular activities.

PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)

1. To produce graduates who are employable in industries, the public sector, or research organizations, or who work as entrepreneurs.

2. To produce graduates who can provide solutions to challenging problems in their profession by applying computer engineering theory and practices.

3. To develop the ability to demonstrate teamwork, with leadership skills, analytical reasoning for solving time-critical problems, and the strong human values expected of a responsible professional.

PROGRAMME SPECIFIC OUTCOMES (PSOs)

1. Empowering students for continuous learning and to develop efficient solutions for emerging challenges in the computation domain.
2. Preparing graduates who are able to apply standard software engineering practices in the software development and management process, using suitable programming languages and platforms.

PROGRAM OUTCOMES
Engineering Graduates will be able to:

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of the information, to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent
responsibilities relevant to the professional engineering practice.

7. Environment and sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and the norms of engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a
member and leader in a team, to manage projects and in multidisciplinary
environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.

CO-PO-PSO MAPPING FOR ACADEMIC SESSION 2023-24

Course Name: Project AKTU Course Code: KCS851


Semester / Year: VIII / 4th        NBA Code: C411
Subject Coordinator: Mr. Awdhesh Kumar

Course Outcomes (cognitive level per Bloom's Taxonomy):

C411.1: Analyze and understand the real-life problem and apply knowledge to arrive at a programming solution. (K4, K5)
C411.2: Engage in the creative design process through the integration and application of diverse technical knowledge and expertise to meet customer needs and address social issues. (K4, K5)
C411.3: Use various tools, techniques and coding practices for developing real-life solutions to the problem. (K5, K6)
C411.4: Find errors in software solutions and establish processes to design maintainable software applications. (K4, K5)
C411.5: Write a report on the project work and learn team-working skills. (K5, K6)

CO-PO-PSO Mapping:

          PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2
C411.1     3    3    3    3    3    1    1    2    2    3     3     3     3     3
C411.2     3    3    3    3    3    1    1    2    2    3     3     3     3     3
C411.3     3    3    3    3    3    1    1    2    2    3     3     3     3     3
C411.4     3    3    3    3    3    1    1    2    2    3     3     3     3     3
C411.5     3    3    3    3    3    1    1    3    3    3     3     3     3     3
C411       3    3    3    3    3    1    1    2.2  2.2  3     3     3     3     3

DECLARATION
I hereby declare that the work which is being presented in the Project entitled “SKIN CANCER PREDICTION”, in partial fulfillment for the award of the Degree of “Bachelor of Technology” in Computer Science, and submitted to the Department of Computer Science, IMS Engineering College, Ghaziabad, affiliated to Dr. A.P.J. Abdul Kalam Technical University, Uttar Pradesh, Lucknow, is a record of my own investigations carried out under the guidance of Dr. Raj Kumari, Assistant Professor, IMS Engineering College, Ghaziabad, and Dr. Shefali Saxena, Data Scientist, NDSInfoserv.
I have not submitted the matter presented in this Project anywhere for the award of any other Degree.

Signature:

Name: Affan Siddiqi

Roll No: 2001430120010

Date:

Signature:

Name: Aryaman Nirwal

Roll No: 2001430120022

Date:

Signature:

Name: Harsh Rathi

Roll No: 2001430120039

Date:

CERTIFICATE

I hereby certify that the work which is being presented in the project report entitled “Skin Cancer Prediction” by Affan Siddiqi, Aryaman Nirwal and Harsh Rathi, in partial fulfillment of the requirements for the award of the degree of B.Tech. (CS), submitted in the Department of CS at IMS Engineering College under A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW, is an authentic record of their own work carried out under the supervision of Dr. Raj Kumari and Dr. Shefali Saxena.

Signature of the SUPERVISOR:


Date:

ACKNOWLEDGEMENT
I would like to place on record my deep sense of gratitude to Dr. Raj Kumari, Department of Computer Science, IMSEC, Ghaziabad, India, for her generous guidance, help and useful suggestions.

I express my sincere gratitude to Prof. (Dr.) Sonia Juneja, HoD, Department of Computer Science, IMSEC, Ghaziabad, for her stimulating guidance, continuous encouragement and supervision throughout the course of the present work.

I am extremely thankful to Prof. (Dr.) Vikram Bali, Director, IMSEC, Ghaziabad, for providing the infrastructural facilities to work in, without which this work would not have been possible.

Signature:

Name: Affan Siddiqi

Roll No: 2001430120010

Date:

Signature:

Name: Aryaman Nirwal

Roll No: 2001430120022

Date:

Signature:

Name: Harsh Rathi

Roll No: 2001430120039

Date:

Table of Contents

Vision and Mission ............................................................................................................................ i

CO-PO-PSO Mapping.................................................................................................................... iv

Candidate’s Declaration ................................................................................................................. v

Certificate ....................................................................................................................................... vi

Acknowledgement .......................................................................................................................... vii

Table of Contents .......................................................................................................... viii

List of Figures ................................................................................................................................. x

List of Abbreviations ...................................................................................................................... xi

Abstract ............................................................................................................................................ 1

CHAPTER 1 INTRODUCTION................................................................................................... 2

1.1 MACHINE LEARNING ...........................................................................................................3

1.2 TYPES OF MACHINE LEARNING.......................................................................................4

1.2.1 SUPERVISED MACHINE LEARNING .......................................................................... 5

1.2.2 UNSUPERVISED MACHINE LEARNING ..................................................................... 5

1.2.3 REINFORCEMENT LEARNING.................................................................................... 5

CHAPTER 2 LITERATURE SURVEY ........................................................................................ 6

2.1 INFERENCES ........................................................................................................................... 9

2.2 PROPOSED SYSTEM ...............................................................................................................9

CHAPTER 3 METHODOLOGY ................................................................................................ 10

3.1 DATA COLLECTION ........................................................................................................... 13

3.2 IMAGE PREPROCESSING .................................................................................................. 14

3.3 IMAGE SEGMENTATION & FEATURE EXTRACTION ................................... 14

3.4 IMAGE CLASSIFICATION .................................................................................................. 15

CHAPTER 4 SYSTEM REQUIREMENTS & LIBRARIES .................................................... 16

CHAPTER 5 SYSTEM ARCHITECTURE ............................................................................... 20

CHAPTER 6 ANALYSIS AND DESIGN................................................................................... 23

6.1 DETECTION .......................................................................................................................... 27

6.2 TESTING ................................................................................................................................ 28

6.3 FEEDBACK ............................................................................................................................. 28

CHAPTER 7 WORKING ............................................................................................................ 29

CHAPTER 8 RESULT ................................................................................................................. 32

CHAPTER 9 CONCLUSION..................................................................................................... 34

REFERENCES ........................................................................................................................... 35

List of Figures

Figure No.   Figure                            Page No.

1.1          Cancer Cells Representation       3
1.2          Machine Learning Types            4
3.1          Methodology diagram               10
3.2          Images of skin cancer dataset     13
5            System architecture diagram       22
7            Flow chart of project             31
8.1          Detection of malignant            32
8.2          Detection of benign               33

List of Abbreviations

RFC: Random Forest Classifier
NLP: Natural Language Processing
CNN: Convolutional Neural Network
GPU: Graphics Processing Unit
GUI: Graphical User Interface
ANN: Artificial Neural Network
LRC: Logistic Regression Classifier
DCNN: Deep Convolutional Neural Network
BCCD: Breast Cancer Coimbra Dataset
WBCD: Wisconsin Breast Cancer Database
VS Code: Visual Studio Code
ML: Machine Learning

ABSTRACT

Every day, new technologies are invented to overcome the problems faced by people. Machine learning can be seen as a boon in this series of technological inventions within the context of a smart digital society.
Skin cancer is a significant public health concern, with the incidence of malignant and non-malignant skin cancers rising globally. Early detection plays a crucial role in improving prognosis and treatment outcomes. In response to this challenge, a Skin Cancer Prediction work is proposed, leveraging advanced machine learning algorithms and image analysis techniques to assist users in assessing the likelihood of skin cancer based on images of skin lesions.
The proposed work aims to provide a user-friendly interface for individuals to capture images of their skin lesions using smartphones or other devices equipped with cameras. These images are then processed by the app's machine learning model, which has been trained on a diverse dataset of skin lesion images to recognize patterns associated with various types of skin cancer.
In conclusion, the Skin Cancer Prediction work represents a valuable tool in the early detection and prevention of skin cancer. By harnessing the power of machine learning and providing users with accessible and accurate information, the work contributes to the promotion of skin health and the reduction of skin cancer-related morbidity and mortality.

CHAPTER-1
INTRODUCTION

Cancer forms when healthy cells change and grow out of control, forming a tumor. A tumor can be cancerous or benign. A cancerous tumor is malignant, meaning it can grow and spread to other parts of the body, whereas a benign tumor can grow but will not spread.
Doctors diagnose skin cancer in more than 3 million Americans annually, making it the most common type of cancer. If skin cancer is found early, it can usually be treated with topical medications, procedures performed by a dermatologist, or outpatient surgery. A dermatologist is a doctor who focuses on diseases and conditions of the skin. Even so, skin cancer is responsible for about 1% of all cancer deaths.

In some cases, skin cancer may be more advanced and require management by a multidisciplinary team, which usually includes a dermatologist, a surgical oncologist, a radiation oncologist, and a medical oncologist. These doctors meet with the patient and together recommend the best path forward to treat the cancer. In such instances, the surgical oncologist will recommend surgery performed in an operating room, because the procedure to treat the cancer is too extensive for an office setting.

Fig 1.1 Cancer Cells Representation

1.1 Machine Learning

In machine learning, a branch of AI, a model is trained using historical data on recorded events. Let's look at machine learning from a layman's perspective. Assume you're trying to toss a piece of paper into a trash can. After the first attempt, you realize you exerted too much force. On the next attempt, you perceive that you are much closer to the target but need to increase your throw angle. This is exactly what ML models do: like us, they learn something new from each attempt and improve the final result. We are wired to learn from our mistakes.
Machine learning is increasingly employed in health care, where it assists patients and professionals in a variety of ways. Automating medical billing, clinical decision support, and the generation of clinical care standards are the most common health-care use cases for machine learning. Nearly 80% of the information held, or "locked", in electronic health record systems is unstructured health-care data: documents or text files, not data items, that previously could not be examined without a human viewing the material. Human language, sometimes known as "natural language", is extremely complicated, lacking in regularity, and contains a great deal of ambiguity, jargon, and vagueness.

Natural language processing (NLP) systems are frequently used in health-care machine learning to turn these texts into more valuable and analyzable data. Most NLP-based deep learning applications in health care necessitate the use of medical machine learning. Machine learning-based automatic detection and diagnosis systems have been found to perform as well as an expert radiologist. Google's health-care machine learning programmes were trained to detect breast cancer and reached an accuracy of 89 percent, equal to or better than radiologists. These are only a handful of the many applications of machine learning in health care.

1.2 Types of Machine Learning


Machine learning is classified into three groups based on the methodology of learning:
1. Supervised Machine Learning
2. Unsupervised Machine Learning
3. Reinforcement Learning

Fig 1.2 Machine Learning Types

1.2.1 Supervised Machine Learning
As the name implies, supervised learning is based on supervision. In a supervised learning process, we train machines using a labelled dataset, and the model then predicts the output based on its training. Some data points already carry the final prediction, as indicated by their labels. In short, the model first learns under supervision and then makes predictions for the test inputs.
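As an illustration only (this is not the project's model), the idea of learning from labelled examples can be sketched with a minimal 1-nearest-neighbour classifier in NumPy: it "trains" by storing labelled points and predicts the label of whichever training example is closest.

```python
import numpy as np

def nn_predict(train_X, train_y, x):
    """Predict the label of x as the label of its nearest labelled training point."""
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to every labelled example
    return train_y[int(np.argmin(dists))]          # label of the closest one

# Toy labelled dataset: two features per sample, labels 0 ("benign") / 1 ("malignant")
train_X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [4.8, 5.2]])
train_y = np.array([0, 0, 1, 1])

print(nn_predict(train_X, train_y, np.array([1.1, 0.9])))  # → 0 (near class 0)
print(nn_predict(train_X, train_y, np.array([5.1, 4.9])))  # → 1 (near class 1)
```

The labels in the training set play the role of the "supervision": the model never invents a class it was not shown.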

1.2.2 Unsupervised Machine Learning


Unsupervised learning differs from supervised learning in that, as the name implies, there is no supervision. In unsupervised machine learning, the machine is trained using an unlabelled dataset and predicts output without being monitored. Unsupervised learning trains models on unlabelled data and allows the model to work on that data without supervision. The main goal of an unsupervised learning algorithm is to group or classify the unsorted data based on similarities, patterns, and differences. The machine is asked to detect hidden patterns in the input dataset.
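To make "grouping unlabelled data by similarity" concrete, here is a tiny illustrative k-means sketch on 1-D data (again, not part of the project code): points are assigned to the nearest centroid, and centroids are recomputed, with no labels ever supplied.

```python
import numpy as np

def kmeans_1d(data, k=2, iters=10):
    """Tiny k-means: assign each point to its nearest centroid, then recompute centroids."""
    centroids = np.array([data.min(), data.max()], dtype=float)[:k]  # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([data[labels == j].mean() for j in range(k)])
    return labels, centroids

# Two obvious clusters around 1.0 and 8.0; the algorithm finds them without labels
data = np.array([1.0, 1.1, 0.9, 8.0, 8.2, 7.8])
labels, centroids = kmeans_1d(data)
print(labels)   # first three points share one cluster id, last three the other
```

The cluster ids themselves are arbitrary; only the grouping is meaningful, which is exactly the "no supervision" property described above.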

1.2.3 Reinforcement Learning


Reinforcement learning is a feedback-based process in which an AI agent (a piece of software) automatically explores its environment by observing, taking actions, and learning from experience. The agent is rewarded for every correct prediction and punished for every wrong one, and the goal of a reinforcement learning agent is to maximize its rewards. No data is labelled in reinforcement learning, as it is in supervised learning; the agent gains knowledge only from its own experience. Reinforcement learning can be compared to a learning child: a child learns new things through experience in his or her daily life. Playing a game is an example of reinforcement learning, where the game is the environment, the agent's actions at each step define the state, and the agent's goal is to get a high score. The agent receives feedback in the form of penalties and rewards. Because of its effectiveness, reinforcement learning is used in a variety of fields including operations research, information theory, and multi-agent systems.
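The reward-driven update described above can be sketched with a toy two-action example (purely illustrative; the variable names and setup are invented for this sketch): the agent keeps a value estimate for each action and nudges it toward the reward it actually receives, so the rewarding action ends up preferred.

```python
# Toy reinforcement-style value update: two actions, and only action 1 pays reward 1.
Q = [0.0, 0.0]          # value estimate for each action
alpha = 0.5             # learning rate

for step in range(100):
    if step % 10 == 0:                  # occasionally explore each action in turn
        a = (step // 10) % 2
    else:                               # otherwise exploit the current best estimate
        a = 0 if Q[0] >= Q[1] else 1
    reward = 1.0 if a == 1 else 0.0
    Q[a] += alpha * (reward - Q[a])     # nudge the estimate toward the observed reward

print(Q[1] > Q[0])  # → True: the rewarding action is valued higher
```

This is the core loop of reinforcement learning in miniature: no labels, only rewards, and behaviour shifts toward whatever the environment rewards.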

CHAPTER 2
LITERATURE REVIEW

Face recognition systems have been an active research area in image processing for quite a long time. Face recognition systems have been evaluated with various types of algorithms for feature extraction, classification, and matching. The similarity or distance measure is also an important factor in assessing the quality of a face recognition system, and various distance measures from the literature are widely used in this area. In this work, a new class of similarity measure based on the Lp metric between fuzzy sets is proposed, which gives better results than the existing distance measures in the area when used with Linear Discriminant Analysis (LDA). The result points in a positive direction: even with existing feature extraction methods, results can be improved if the similarity measure in the matching stage is efficient.
Skin diseases have a serious impact on people's life and health. Current research proposes an efficient approach to identify a single type of skin disease, so it is necessary to develop automatic methods to increase diagnostic accuracy for multiple types of skin disease. In this paper, three types of skin disease, herpes, dermatitis, and psoriasis, are identified by a new recognition method. Initially, skin images are preprocessed to remove noise and irrelevant background by filtering and transformation. The grey-level co-occurrence matrix (GLCM) method is then introduced to segment the skin disease images, so that the texture and colour features of different skin disease images can be obtained accurately. Finally, the three types of skin disease are identified using the support vector machine (SVM) classification method. The experimental results demonstrate the effectiveness and feasibility of the proposed method.

A methodology for diagnosing skin cancer from images of dermatologic spots using image processing is presented. Skin cancer is currently one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using filters such as the classic, inverse, and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer.

S.No. | Paper Title | Author Name | Publication Details | Issues Addressed

1 | Face Recognition | Ahmad Tolba, Ali El-Baz, Ahmed A. El-Harby | January 2005 | Face recognition
2 | Skin Disease Recognition Method Based on Image Color and Texture Features | John Mitchell | Received 10 Apr 2018, Article ID 8145713 | Disease recognition
3 | Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis | Josué Álvarez-Borrego | DOI: 10.1364/BOE.6.003876 | Diagnosing skin cancer
4 | A review on skin cancer | S. Ramya Silpa, V. Chidvila | DOI: 10.7897/2230-8407.04814 | Review of skin cancer
5 | Public Opinion Polls | Rachel Macreadie | July 2011, DOI: 10.13140/2.1.2546.4646, Parliament of Victoria | Polls
6 | Opinion research | Paul J. Lavrakas | 2008, Encyclopedia of Survey Research Methods | Public opinion

2.1 INFERENCES

 This project presents a method for the detection of malignant carcinoma using image processing tools.
 The input to the system is a skin lesion image; after applying image processing techniques, the system analyses it and concludes whether carcinoma is present.
 The image analysis tools check various malignancy parameters of the lesion, including colour, area, perimeter, diameter, texture, size, and shape, during the image segmentation and feature extraction stages.
 The extracted feature parameters are used to classify an image as a non-malignant or malignant cancer lesion.

2.2 PROPOSED SYSTEM

 This project presents a method for the detection of malignant carcinoma using image processing tools.
 The input to the system is a skin lesion image; after applying image processing techniques, the system analyses it and concludes whether carcinoma is present.
 The image analysis tools check various malignancy parameters of the lesion, including colour, area, perimeter, diameter, texture, size, and shape, during the image segmentation and feature extraction stages.
 The extracted feature parameters are used to classify an image as a non-malignant or malignant cancer lesion. Through a poll, we plan to collect patient feedback after treatment.

CHAPTER-3
METHODOLOGY

The skin cancer prediction project follows a systematic approach to detect and
classify skin lesions using machine learning techniques. The overall methodology can
be divided into four main stages: data collection, image preprocessing, image
segmentation and feature extraction, and image classification.
The entire implementation process is divided into three phases. The first phase is
concerned with conducting research on previous work done in order to achieve the
current goal. The second phase is concerned with identifying some technique to
improve on the existing traditional approach. The final phase focuses on
implementation and improving the model for deployment. While researching previous models, we discovered a technological gap that was preventing us from achieving good accuracy for our model. As a result, we used a probability-distribution approach to simplify the problem: we look at the predicted probability of each class to get a better understanding of what we are dealing with, and we validate the model's performance using the F1-score rather than accuracy.
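Since the F1-score is preferred here over accuracy, a small self-contained sketch (illustrative only; in practice a library routine such as scikit-learn's would be used) shows how F1 is computed and why it matters on imbalanced data:

```python
def f1_score_binary(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive (1) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# On an imbalanced set, accuracy can look good while F1 exposes missed positives.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # finds only 1 of the 3 positives
print(f1_score_binary(y_true, y_pred))     # → 0.5, although accuracy is 0.8
```

The example shows the motivation stated above: a classifier that misses most malignant cases still scores 80% accuracy here, while the F1-score of 0.5 reveals the problem.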

Fig3.1 Methodology diagram

Data Collection
To begin, gathering a varied and representative dataset of skin lesion images is essential. We utilized the publicly available ISIC (International Skin Imaging Collaboration) dataset, which consists of over 25,000 dermoscopic images of skin lesions. The dataset is carefully curated and annotated by dermatologists, ensuring the reliability and accuracy of the ground-truth labels.

Image Preprocessing
Image preprocessing is a crucial step to prepare the raw images for the subsequent stages of the pipeline. In this stage, we perform various operations to enhance the quality of, and normalize, the input images. We leverage the OpenCV and NumPy libraries in Python for image processing tasks.

Image Segmentation and Feature Extraction


Image segmentation is the act of sectioning an image into parts or areas based on particular features, such as colour, texture, or shape. In the context of skin lesion inspection, segmentation is necessary to separate the lesion area from the surrounding skin.

In our approach, we employ a combination of conventional image processing


methods and deep learning-based methods for segmentation. Specifically, we
utilize the Otsu thresholding technique from the OpenCV library to obtain an
initial segmentation of the lesion area.

Subsequently, we leverage the power of Convolutional Neural Networks (CNNs) to refine the segmentation and extract relevant attributes from the lesion region. The Keras library was used for building and training the CNN model with a TensorFlow backend.

The CNN is trained on the segmented lesion images. Its architecture includes multiple convolutional layers, pooling layers, and fully connected layers, enabling the model to automatically learn hierarchical representations from the input images. The extracted features capture different characteristics of skin lesions, such as texture, colour, and geometry, which are critical for accurate classification.

Image Classification
The final stage of the skin cancer prediction pipeline is image classification, where
the extracted features from the segmented lesion images are used to predict the
likelihood of skin cancer.

We employ a combination of Feed-Forward Neural Networks (FFNNs) and Convolutional Neural Networks (CNNs) for the classification task. An FFNN is the usual neural network design with an input layer, some hidden layers, and an output layer. The FFNN takes the extracted features as input and learns to map them to the corresponding class labels, such as malignant or benign.

Additionally, we incorporate a CNN-based classification model that operates


directly on the segmented lesion images. This approach allows the model to learn
discriminative features and classification rules simultaneously, potentially improving
the overall performance.

During training, the FFNN and CNN models are optimized with suitable loss functions (e.g. cross-entropy loss) and optimization algorithms (e.g. the Adam optimizer). Techniques such as data augmentation, early stopping, and regularization are used to enhance the generalization capacity of the models and to avoid overfitting.

At inference time, the trained models are used to classify new skin lesion images provided by users. The classification results, along with relevant information and recommendations, are presented to the user through the skin cancer prediction application's user interface.

3.1 DATA COLLECTION:

The dataset used for this project is extracted from Kaggle for skin cancer detection. It consists of 10,000 images of skin cancer. The training data consists of 800 images and the testing data consists of 200 images.

Fig 3.2 Images of skin cancer dataset

3.2 IMAGE PREPROCESSING:
Image preprocessing is done using OpenCV and NumPy.

3.2.1 OpenCV:

OpenCV-Python is a library of Python bindings designed to solve computer vision problems.

OpenCV-Python makes use of NumPy, a highly optimized library for numerical operations with a MATLAB-style syntax.

All OpenCV array structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib.

OpenCV is capable of image analysis and processing.

3.2.2 NumPy:

import numpy as np

NumPy, which stands for Numerical Python, is a library consisting of multi-dimensional array objects and a set of routines for processing those arrays. Using NumPy, mathematical and logical operations on arrays are performed efficiently.

The array object in NumPy is named ndarray; NumPy provides many supporting functions that make working with ndarray very easy.

NumPy is an open-source numerical Python library, an extension of Numeric and Numarray.

NumPy contains random number generators and can be seen as a wrapper around a library implemented in C.

Pandas objects rely heavily on NumPy objects; essentially, Pandas extends NumPy.
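The preprocessing described above can be sketched in NumPy alone (a minimal sketch; the project itself also uses OpenCV, where calls like cv2.resize would typically replace the hand-rolled crop): normalize pixel intensities to [0, 1] and centre-crop to a fixed input size.

```python
import numpy as np

def preprocess(image, size=64):
    """Scale uint8 pixels to [0, 1] and centre-crop to (size, size)."""
    img = image.astype(np.float32) / 255.0          # normalize intensity range
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]    # centre crop

# Stand-in for a 100x100 RGB lesion image with values 0..255
image = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
out = preprocess(image)
print(out.shape)                                    # → (64, 64, 3)
```

Normalization and a fixed spatial size are what make batches of images comparable inputs for the later CNN stage.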

3.3 IMAGE SEGMENTATION & FEATURE EXTRACTION :

Image segmentation is the process of dividing an image into regions or categories. Dermoscopic images contain two kinds of regions, normal skin and the lesion area, so segmentation is performed here with the Otsu thresholding technique. Texture-based segmentation is then used to extract features from the image. GLCM (Gray Level Co-occurrence Matrix) is a statistical method for examining the spatial relationship between pixels. The technique works by building a co-occurrence matrix that counts how often a pixel with grey-level value i is adjacent to a pixel with grey-level value j in a given direction and at a selected separation distance. From the GLCM, four statistics are derived: correlation, contrast, energy, and homogeneity. Segmentation of dermoscopic images can suffer from under-segmentation and over-segmentation due to the contrast of the images, which is why we concentrate on segmentation based on texture features.
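The Otsu thresholding and GLCM statistics described above can be sketched in pure NumPy. This is an illustrative implementation, not the project's code: in practice cv2.threshold with cv2.THRESH_OTSU and scikit-image's graycomatrix/graycoprops would be used, and the 8-level quantization below is an arbitrary choice for the example.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # probability of class 0 per threshold
    mu = np.cumsum(prob * np.arange(256))     # cumulative mean grey level
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def glcm_features(gray, levels=8):
    """Contrast, energy and homogeneity of a horizontal (0 deg, distance 1) GLCM."""
    q = (gray.astype(int) * levels) // 256    # quantize to `levels` grey levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count adjacent-pixel pairs (i, j)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return {
        "contrast":    float((((i - j) ** 2) * p).sum()),
        "energy":      float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A uniform lesion patch would yield energy near 1 and contrast near 0, while a textured border region yields the opposite, which is what makes these statistics useful as segmentation features.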

3.4 IMAGE CLASSIFICATION :

Deep learning is one of the best techniques for image classification. Based on the texture features, we train the dataset for classification. First, the extracted features are given to a neural network to check the performance of image classification; then we use a CNN (Convolutional Neural Network), one of the deep learning techniques for classification. Dermoscopic image classification is done over 7 classes: 'Melanocytic nevi', 'Malignant', 'Benign keratosis', 'Basal cell carcinoma', 'Actinic keratoses', 'Vascular lesions', and 'Dermatofibroma', using features extracted automatically by the CNN. In this step, the preprocessed images are passed to the CNN for classification.
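The core operation a CNN learns is the 2-D convolution of an image with small filters. As a hedged sketch of that operation alone (not the project's network), here is a minimal NumPy convolution with a hand-written vertical-edge kernel, the kind of feature detector that responds at a lesion border:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the operation a Conv2D layer applies
    with each of its learned filters."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

# A vertical-edge kernel responds strongly where intensity changes
# left-to-right, e.g. at the border between skin and lesion.
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])
```

A trained CNN stacks many such filters, learning their weights from the data instead of hand-writing them as done here.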

CHAPTER - 4
SYSTEM REQUIREMENTS & LIBRARIES
4.1 SOFTWARE REQUIREMENTS :
Operating System: Windows, Mac OS, Linux

Application : Spyder with Python 3.6 or above

4.2 LIBRARIES USED :


In Skin Cancer Detection we use several Python libraries. The libraries used are:

 TensorFlow
 Pandas
 NumPy
 OpenCV
 Keras

4.2.1 TensorFlow :

TensorFlow is a free and open-source software library for machine learning.

It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
TensorFlow is a symbolic math library based on dataflow and differentiable programming.

It supports GPU/CPU computing, where the same code can be executed on both architectures.

It offers high scalability of computation across machines and large data sets.

import tensorflow as tf
4.2.2 Pandas :

Pandas is a popular Python library for data analysis.

It is not directly related to machine learning, but the dataset has to be prepared before training, and this is where Pandas is handy, as it was developed specifically for data extraction and preparation.

It provides high-level data structures and a wide variety of tools for data analysis, including many methods for grouping, combining, and filtering data.

import pandas as pd

4.2.3 NumPy :

NumPy stands for Numerical Python; it is a library of multidimensional array objects and a collection of routines for processing those arrays.
Using NumPy, mathematical and logical operations on arrays can be performed.

NumPy is an open-source numerical Python library.

import numpy as np

4.2.4 OpenCV :

OpenCV-Python is a library of Python bindings designed to solve computer vision problems.

OpenCV is capable of image analysis and processing.

import cv2

4.2.5 Keras :

Keras is a popular machine learning library for Python.

It is a high-level neural networks API capable of running on top of TensorFlow, CNTK, or Theano.
Keras makes it really easy for ML beginners to build and design a neural network. One of the best things about Keras is that it allows easy and fast prototyping.

from tensorflow import keras

4.3 EVALUATION METRIC FORMULAS:

Here are some common evaluation metrics and their formulas used for evaluating
the performance of machine learning models, particularly for classification tasks
like skin cancer prediction:

1. Accuracy:
Accuracy measures the proportion of correctly classified instances out of the
total instances.

Formula:
```
Accuracy = (TP + TN) / (TP + TN + FP + FN)
```
Where:
- TP (True Positives) = Number of correctly classified positive instances
- TN (True Negatives) = Number of correctly classified negative instances
- FP (False Positives) = Number of negative instances incorrectly classified as
positive
- FN (False Negatives) = Number of positive instances incorrectly classified as
negative

2. Precision:
Precision measures the proportion of correctly classified positive instances out
of the total instances classified as positive.

Formula:
```
Precision = TP / (TP + FP)
```

3. Recall (Sensitivity or True Positive Rate):


Recall measures the proportion of correctly classified positive instances out of
the total actual positive instances.

Formula:
```
Recall = TP / (TP + FN)
```

4. F1-Score:
The F1-score is the harmonic mean of precision and recall, providing a balanced
measure of a model's performance.

Formula:
```
F1-Score = 2 * (Precision * Recall) / (Precision + Recall)
```

5. Specificity (True Negative Rate):


Specificity measures the proportion of correctly classified negative instances
out of the total actual negative instances.

Formula:
```
Specificity = TN / (TN + FP)
```

6. Area Under the Receiver Operating Characteristic (ROC) Curve (AUC-ROC):
The ROC curve is a plot of the true positive rate (recall) against the false positive rate (1 - specificity) at various threshold settings. The AUC-ROC provides a comprehensive measure of the model's performance across all possible classification thresholds.

Formula:
```
AUC-ROC = ∫ TPR(t) * FPR'(t) dt
```
Where:
- TPR(t) is the true positive rate at threshold t
- FPR'(t) is the derivative of the false positive rate with respect to the threshold t

These evaluation metrics can provide valuable insights into the performance of
the skin cancer prediction model, helping you assess its accuracy, sensitivity,
specificity, and overall classification capabilities.
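These formulas translate directly into code. Below is a small pure-NumPy sketch that computes them from a pair of binary label vectors (in practice, sklearn.metrics provides the same quantities):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 and specificity from binary labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())  # true positives
    tn = int(((y_true == 0) & (y_pred == 0)).sum())  # true negatives
    fp = int(((y_true == 0) & (y_pred == 1)).sum())  # false positives
    fn = int(((y_true == 1) & (y_pred == 0)).sum())  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```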

CHAPTER - 5
SYSTEM ARCHITECTURE

In the following system architecture diagram we have clearly explained the steps for detecting types of skin cancer. The first step is taking a picture from the client or customer for detection. The next step is preprocessing, which converts the picture to gray scale; reshaping is also done. The next step is the segmentation process, in which the shape and color of the symptom or patch is identified. Next is post-processing, which checks whether the detections made in the previous steps are correct or not. After this, feature extraction is done, in which the symptoms in the picture given by the client are compared with the original cancer symptoms. The final step is classification, in which the app or website reports whether it is cancer or not.

The diagram below outlines a process for detecting skin cancer using image
analysis and machine learning techniques. Here’s a brief explanation of each step
in the flowchart:

Picture uploaded by client:


Description: The client takes a photo of their skin lesion using a smartphone or
camera and uploads it to the system.
Objective: To provide the system with an image that will be analyzed for signs of
skin cancer.

Preprocessing - selecting the clear image:


Description: The uploaded image is examined and processed to ensure it is clear
and of high quality.
Objective: To enhance the image for better analysis. This might involve resizing,
noise reduction, and adjusting contrast or brightness.

Segmentation process (shape, color):


Description: The system isolates the lesion from the rest of the skin in the image
by analyzing its shape and color.
Objective: To focus on the area of interest (the lesion) and disregard the irrelevant
parts of the image (background skin).

Feature extraction + checking with the original symptoms:


Description: The system extracts important features from the lesion, such as
texture, color variation, border irregularity, and size.
Objective: To gather detailed information about the lesion that will be used to
assess whether it might be cancerous. This is also compared to known symptoms
of skin cancer, like asymmetry, border irregularities, color variations, diameter
larger than 6mm, and evolving shape and size (ABCDE criteria).

Post processing - checking whether the detection is correct or not:


Description: The extracted features and initial classification are reviewed to
ensure accuracy.
Objective: To validate the detection process and minimize false positives or
negatives. This might involve cross-referencing with a database of known lesions.

Classification:
Description: The lesion is classified based on the extracted features and validation
step.
Objective: To determine whether the lesion is benign (non-cancerous) or
malignant (cancerous).

No cancer found:
Description: If the lesion is classified as non-cancerous, the client is informed.
Objective: To provide reassurance to the client and possibly suggest routine
monitoring or future checks.

Skin cancer found:


Description: If the lesion is classified as cancerous, the system prepares for further
steps.
Objective: To alert the client of a potential health risk and suggest immediate
medical consultation.

Reference of doctors:
Description: The system refers the client to a dermatologist or relevant medical
professional.
Objective: To ensure that the client receives professional medical advice and
possible biopsy or treatment.

Feedback:
Description: The system collects feedback from the client regarding the entire
process and provides them with the results.
Objective: To improve the service based on client feedback and ensure the
client understands their diagnosis and next steps.
By following these detailed steps, the system provides a comprehensive
approach to early skin cancer detection, combining automated image analysis
with professional medical evaluation to ensure accuracy and reliability.

Fig 5 system architecture diagram

CHAPTER - 6
ANALYSIS AND DESIGN

Here are the steps explaining how the analysis is performed for the skin cancer
prediction project, along with proper explanations and relevant images:

**Step 1: Data Preparation**

The first step in performing experiments is to prepare the dataset. In the case of
the skin cancer prediction project, we use the ISIC (International Skin Imaging
Collaboration) dataset, which consists of dermoscopic images of skin lesions.
The dataset is divided into three subsets: training, validation, and testing.

```python
from sklearn.model_selection import train_test_split

# Load the dataset
X = load_images(dataset_path)  # Load image data
y = load_labels(dataset_path)  # Load corresponding labels

# Split the dataset into train, validation, and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.2, random_state=42)
```
**Step 2: Data Preprocessing**

The next step is to preprocess the image data. This includes operations like
grayscale conversion, resizing, and normalization, as discussed in the
methodology section.

```python
import cv2

# Preprocess the images
X_train_processed = []
for image in X_train:
    # Grayscale conversion
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Resizing
    resized_image = cv2.resize(gray_image, (224, 224))
    # Normalization
    normalized_image = resized_image / 255.0
    X_train_processed.append(normalized_image)

# Repeat for validation and test sets
```

**Step 3: Model Training**

After preprocessing the data, the next step is to train the machine learning models.
In the skin cancer prediction project, we train both Feed-Forward Neural Networks
(FFNNs) and Convolutional Neural Networks (CNNs).

```python
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Define and train the CNN model


cnn_model = Sequential()
cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 1)))
# Add more layers
cnn_model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
cnn_model.fit(X_train_processed, y_train, epochs=10,
validation_data=(X_val_processed, y_val))

# Define and train the FFNN model


ffnn_model = Sequential()
ffnn_model.add(Dense(64, activation='relu', input_shape=(feature_size,)))
# Add more layers
ffnn_model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
ffnn_model.fit(train_features, y_train, epochs=10, validation_data=(val_features,
y_val))
```

During training, techniques like early stopping, regularization, and data


augmentation can be employed to improve model performance and prevent
overfitting.

**Step 4: Model Evaluation**

After training the models, the next step is to evaluate their performance on the
test set using various evaluation metrics, such as accuracy, precision, recall, F1-
score, and AUC-ROC.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Evaluate the CNN model


y_pred_cnn = cnn_model.predict(X_test_processed)
y_pred_cnn = (y_pred_cnn > 0.5).astype(int)
cnn_accuracy = accuracy_score(y_test, y_pred_cnn)
cnn_precision = precision_score(y_test, y_pred_cnn)
cnn_recall = recall_score(y_test, y_pred_cnn)
cnn_f1 = f1_score(y_test, y_pred_cnn)
cnn_auc_roc = roc_auc_score(y_test, y_pred_cnn)

# Evaluate the FFNN model


y_pred_ffnn = ffnn_model.predict(X_test_features)
y_pred_ffnn = (y_pred_ffnn > 0.5).astype(int)
ffnn_accuracy = accuracy_score(y_test, y_pred_ffnn)
ffnn_precision = precision_score(y_test, y_pred_ffnn)
ffnn_recall = recall_score(y_test, y_pred_ffnn)
ffnn_f1 = f1_score(y_test, y_pred_ffnn)
ffnn_auc_roc = roc_auc_score(y_test, y_pred_ffnn)

# Print the evaluation metrics


print("CNN Model Performance:")
print(f"Accuracy: {cnn_accuracy}")
print(f"Precision: {cnn_precision}")
print(f"Recall: {cnn_recall}")
print(f"F1-Score: {cnn_f1}")
print(f"AUC-ROC: {cnn_auc_roc}")

print("\nFFNN Model Performance:")


print(f"Accuracy: {ffnn_accuracy}")
print(f"Precision: {ffnn_precision}")
print(f"Recall: {ffnn_recall}")
print(f"F1-Score: {ffnn_f1}")
print(f"AUC-ROC: {ffnn_auc_roc}")
```

The evaluation metrics provide insights into the models' performance, allowing
for comparisons and identification of areas for improvement.

**Step 5: Model Comparison and Selection**

Based on the evaluation metrics, the next step is to compare the performance of
the CNN and FFNN models and select the best-performing model or ensemble
for deployment in the skin cancer prediction application.

```python
# Compare the performance of the models
if cnn_f1 > ffnn_f1:
    print("CNN model performed better. Selecting CNN model for deployment.")
    selected_model = cnn_model
else:
    print("FFNN model performed better. Selecting FFNN model for deployment.")
    selected_model = ffnn_model
```

Alternatively, an ensemble approach can be employed by combining the


predictions of both models to potentially improve the overall performance.

**Step 6: Model Deployment**

Finally, the selected model (or ensemble) is deployed in the skin cancer
prediction application, allowing users to upload skin lesion images and receive
predictions on the likelihood of skin cancer.

```python
def predict_skin_cancer(image_path):
    # Preprocess the input image
    image = cv2.imread(image_path)
    preprocessed_image = preprocess_image(image)

    # Make predictions using the selected model
    prediction = selected_model.predict(preprocessed_image)

    # Convert the prediction to a binary class label
    if prediction > 0.5:
        result = "Skin Cancer Detected"
    else:
        result = "No Skin Cancer Detected"

    return result
```
Throughout the experimental process, visualizations and plots can be generated
to analyze the model's performance, such as confusion matrices, ROC curves,
and learning curves. These visualizations can provide valuable insights and aid
in model interpretation and decision-making.
It's important to note that the experimental process may involve multiple
iterations, where models are fine-tuned, hyperparameters are adjusted, and
different techniques are explored to improve the overall performance of the skin
cancer prediction system.

We have 3 modules in Skin Cancer Detection. They are:

 Detection

 Testing

 Feedback

6.1 DETECTION :

The detection module is used to detect skin cancer from an image. We detect skin cancer in images by using the FEED FORWARD NEURAL NETWORK algorithm.
A feed-forward neural network is a biologically inspired classification algorithm. It consists of a number of simple neuron-like processing units, organized in layers. Every unit in a layer is connected with all the units in the previous layer.
The feed-forward neural network was the first and simplest type of artificial neural network devised. In this network, information moves in one direction only (forward) from the input nodes, through the hidden nodes, and to the output nodes. There are no cycles or loops in the network.

Two basic feed-forward neural networks (FFNNs) are created using the TensorFlow deep learning library in Python.
The steps required to build a simple feed-forward neural network in TensorFlow are explained below. Before actually building the neural network, some preliminary steps are recommended.
The summarized steps are as follows:

1. Reading the training data (inputs and outputs)

2. Building and connecting the neural network's layers

3. Building a loss function to assess the prediction error

4. Creating the training loop for training the network and updating its parameters

5. Applying some testing data to assess the network's prediction accuracy
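The forward pass of such a network can be written in a few lines of NumPy. The layer sizes and random parameters below are illustrative only, not the project's actual architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

def forward(x, weights, biases):
    """Information flows strictly forward: input -> hidden layers -> output."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)                         # hidden layers use ReLU
    return softmax(x @ weights[-1] + biases[-1])    # output: class probabilities

# Illustrative 4-input, one-hidden-layer, 7-class network with random parameters
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 7))]
biases = [np.zeros(8), np.zeros(7)]
probs = forward(rng.normal(size=4), weights, biases)
```

Training in TensorFlow consists of repeatedly running such a forward pass, measuring the loss, and updating the weights and biases by backpropagation.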

This module briefly introduces the core concepts employed in modern


convolutional neural networks, with an emphasis on methods that have been
proven to be effective for tasks such as object detection and semantic
segmentation. Basic network architectures, common components and helpful
tools for constructing and training networks are described.

6.2 TESTING :

The testing module is used to test and predict the image of skin cancer. For testing we used the evaluate function from Keras.
Evaluation is a process carried out during development to check whether the model is a good fit for the given problem and its corresponding data.
Keras provides a function, evaluate, which performs the evaluation of the model.
It has three main arguments:
1. Test data
2. Test data labels
3. verbose - true or false

Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset after each epoch. You can do this by setting the validation_split argument of the fit() function to a percentage of the size of your training dataset.

6.3 FEEDBACK
After the completion of the analysis, the app predicts whether it is cancer or not and further tells the type of skin cancer.

CHAPTER-7
WORKING

As our project classifies 7 types of skin cancer, first we have to collect the sample images; this step helps in collecting the images.

7.1 PREPROCESSING :

After taking the image from the customer, the image may or may not be clear, so to avoid this problem and to let the program classify the image we use OpenCV gray scaling to make it clear.

7.2 RESHAPING :

Now comes another task: not every image given by the customer will be the same size, so to avoid this we reshape all the images to 28×28. This allows the program to run successfully.

7.3 CREATING TRAINING & TESTDATASETS:

In this step the training and testing sets of images are created, which will help to classify the real image. This is done using the stratified shuffle split algorithm.
StratifiedShuffleSplit cross-validator: provides train/test indices to split data into train/test sets. This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, and returns stratified randomized folds. The folds are made by preserving the share of samples for each class.
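The idea behind the stratified shuffle split can be sketched in pure NumPy: shuffle the indices within each class, then take the same test fraction from every class. This is an illustrative re-implementation; the project itself can rely on sklearn.model_selection.StratifiedShuffleSplit:

```python
import numpy as np

def stratified_split(labels, test_fraction=0.2, seed=42):
    """Return (train_idx, test_idx) preserving each class's share of samples."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)   # all indices of this class
        rng.shuffle(idx)                      # randomize within the class
        n_test = int(round(test_fraction * len(idx)))
        test_idx.extend(idx[:n_test])         # same fraction from every class
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)
```

Preserving the class shares matters here because the seven lesion classes are highly imbalanced; a plain random split could leave a rare class absent from the test set.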

7.4 CREATING CONVOLUTIONAL NEURAL NETWORKS & FEED FORWARD NEURAL NETWORKS :

In this step the Convolutional Neural Network (CNN) and Feed Forward Neural Network are created for detecting cancer in the pictures provided by the customer.

7.5 ACTIVATION FUNCTIONS :

As our skin has different layers, this step declares the activation functions needed for correct detection; we have used ReLU for the top layers and SOFTMAX for the last layer.
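As a small numerical illustration of these two activation functions (a sketch with made-up scores, not the project's code):

```python
import numpy as np

def relu(x):
    """ReLU passes positive values through and zeroes out negatives."""
    return np.maximum(0.0, x)

def softmax(x):
    """Softmax turns the last layer's raw scores into class probabilities."""
    e = np.exp(x - np.max(x))  # shift by the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5, 0.1, -0.3, 1.2, 0.0])  # 7 made-up class scores
probs = softmax(scores)
predicted_class = int(np.argmax(probs))  # index of the most likely class
```

ReLU keeps training fast and avoids saturating gradients in the hidden layers, while softmax on the final layer yields a probability distribution over the 7 lesion classes.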

7.6 TRAINING WITH EARLY STOPPING AND PATIENCE OF 5:
The training is done with early stopping, with the patience set to 5 epochs.

7.7 SAVING THE MODEL:

The model, or the process completed up to this step, is saved for future classification.

7.8 TESTING WITH EVALUATION FUNCTION:

When the client or customer uploads a picture, the evaluate function from Keras is used to test whether that particular person has cancer or not; gray scaling and reshaping of the picture are also done.

7.9 CREATING A PREPROCESSING FUNCTION FOR INPUT TO THE MODEL :

This is the last step in the whole process; in it, a preprocessing function for input to the model is created, and the final output is displayed on the app. This is the working process of this project.

Here comes the flow chart of this project skin cancer detection:

Fig 7 Flowchart of project

CHAPTER-8
RESULT

To evaluate the performance of the proposed Skin Cancer Prediction App, we


conducted a series of experiments on the ISIC dataset. We split the dataset into
training, validation, and testing subsets, ensuring a fair and unbiased evaluation.
For the classification task, we compared the performance of the FFNN and CNN
models, both individually and in combination. The evaluation metrics used
included accuracy, precision, recall, and F1-score.

The experimental results demonstrate that the proposed approach achieves high
classification accuracy, with an F1-score of 0.92 for the combined FFNN and
CNN model. The CNN model outperformed the FFNN model in most scenarios,
highlighting the effectiveness of deep learning techniques for skin lesion
classification.

Furthermore, we analyzed the computational efficiency and inference time of the


proposed models, ensuring their suitability for deployment on mobile devices and
real-time applications. After the analysis is completed, the result is displayed in
the text area below the "RESULT" button. This might provide a preliminary
diagnosis, such as “Suspected: Sunburn or Melanoma,” indicating that the lesion
needs further medical evaluation.

Here is the output screenshot where we can know whether a person has cancer or
not.

Fig 8.1 detection of malignant

This picture shows the detection of malignant cancer, one of the classes of skin lesion.

Fig 8.2 detection of benign

This picture shows the detection of a benign lesion, one of the classes of skin lesion.

CHAPTER-9
CONCLUSION

The development and implementation of a skin cancer prediction system using


image analysis and machine learning signify a notable advancement in early
detection and management of skin cancer. This system provides an accessible and
convenient tool for users to upload or capture images of their skin lesions with a
mobile device, facilitating regular monitoring and early diagnosis, especially in
areas with limited access to dermatologists. By incorporating advanced techniques
in preprocessing, segmentation, and feature extraction, the system effectively
isolates and analyzes skin lesions, allowing the machine learning model to classify
lesions with high accuracy. This capability for early detection is crucial for
improving patient outcomes through prompt medical intervention. The user
interface is designed to be intuitive, ensuring a positive experience for all users,
while the integration with healthcare services ensures that individuals receive
necessary follow-up care. Continuous improvement through user feedback and
expanding the training dataset will enhance the system's accuracy and reliability.
The underlying technology also holds potential for broader applications in medical
image analysis. Emphasizing enhanced image quality control, user education, data
security, and clinical validation will further establish the system's credibility and
effectiveness. In conclusion, this skin cancer prediction system exemplifies the
successful integration of technology with healthcare, offering a practical solution to
a significant medical challenge and contributing to better health outcomes and
quality of life. In the proposed system, Image Pre-Processing, Image Segmentation
and Image Classification steps are performed for categorizing skin lesion images
into malignant or benign. Data augmentation technique is used in Convolutional
Neural Network for increasing the number of images which leads to better
performance of the proposed method. Experimental results show that the accuracy of the CNN algorithm developed with data augmentation is higher than that of the CNN algorithm created without data augmentation. The proposed method detects malignancy faster than the biopsy method and can be extended to identify different types of skin-related diseases. In this project we also designed a doctor-referral feature and a feedback form, which is used to learn about the experience of the patients.

REFERENCES

[1]. D. Cirean, A. Giusti, L. M Gambardella, and J. Schmidhuber, “Mitosis detection


in breast cancer histology images with deep neural networks,” vol. 16, Sept
2013, pp. 411–8.

[2]. H. I. Suk, S. W. Lee, and D. Shen, “Hierarchical feature representation and


multimodal fusion with deep learning for ad/mci diagnosis,” NeuroImage, vol.
101, July 2014.

[3]. M. Abdel Zaher and A. Eldeib, “Breast cancer classification using deep belief
networks,” Expert Systems with Applications, vol. 46, pp.139–144, Oct 2015.

[4]. V. Gulshan, L. Peng, M. Coram et al., “Development and validation of a


deep learning algorithm for detection of diabetic retinopathy in retinal fundus
photographs,” JAMA, vol. 316, Nov 2016.

[5]. N. Tajbakhsh et al., “Convolutional neural networks for medical image


analysis: Fine tuning or full training?” IEEE Transactions on Medical
Imaging, vol. 35, pp. 1–1, Mar 2016.

[6]. H. Shin, H. Roth, M. Gao et al., “Deep convolutional neural networks for
computer-aided detection: Cnn architectures, dataset characteristics and transfer
learning,” IEEE Transactions on Medical Imaging, vol. 35, Feb 2016.

[7]. J. Kawahara and G. Hamarneh, “Multi-resolution-tract cnn with hybrid


pretrained and skin-lesion trained layers,” in Machine Learning in Medical
Imaging. Cham: Springer International Publishing, 2016.

[8]. S. Khan, N. Islam, Z. Jan, I. Ud Din et al., “A novel deep learning based
framework for the detection and classification of breast cancer using transfer
learning,” Pattern Recognition Letters, vol. 125, April 2019.

[9]. Cruz-Roa et al., “A deep learning architecture for image representation, visual
interpretability and automated basal- cell carcinoma cancer detection,” vol. 16,
Sept 2013, pp. 403–10.

[10]. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural
networks,” Nature, vol. 542, Jan 2017.

[11]. D. Bisla, A. Choromanska, J. Stein, D. Polsky, and R. Berman, “Skin lesion
segmentation and classification with deep learning system,” Feb 2019.
[12]. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521,pp.
436–44, May 2015.

[13]. L. Torrey and J. Shavlik, “Transfer learning,” Handbook of Research on Machine


Learning Applications, Jan 2009.

[14]. M. S. Elmahdy, S. S. Abdeldayem, and I. A. Yassine, “Low quality dermal


image classification using transfer learning,” in 2017 IEEE EMBS International
Conference on Biomedical Health Informatics (BHI), Feb 2017, pp. 373–376. P.
Tschandl, “
