PROJECT DESCRIPTION
1. Input Images:
Based on the sparse texture distributions and their associated texture-distinctiveness (TD) metrics, regions of the input image are classified as belonging to the lesion. The image is first over-segmented, which divides it into a large number of regions. The next step is to categorize each region, based on its textural content, as representing normal skin or lesion. Lastly, postprocessing steps refine the lesion segmentation.
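As a rough sketch of this pipeline (not the exact implementation from the referenced work), the example below over-segments an image into superpixels with SLIC and labels each region as lesion or normal skin using a caller-supplied TD-style scoring function; the function name td_metric and the threshold are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def tdls_like_segmentation(image, td_metric, threshold, n_segments=500):
    """Sketch of the pipeline described above: over-segment the image,
    score each region with a texture-distinctiveness (TD) style metric,
    and keep regions whose score exceeds the threshold as lesion."""
    regions = slic(image, n_segments=n_segments, compactness=10)
    lesion_mask = np.zeros(regions.shape, dtype=bool)
    for label in np.unique(regions):
        # td_metric is a placeholder for the TD computation, which is
        # not defined in this section
        if td_metric(image[regions == label]) > threshold:
            lesion_mask[regions == label] = True
    return lesion_mask
```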
2. Pre-processing:
Due to the presence of shadows and bright areas caused by illumination variation, segmentation of the skin lesion becomes difficult. If this illumination variation is not corrected, segmentation algorithms tend to identify shadow areas as part of the skin lesion. Therefore, a preprocessing step is applied to correct the illumination.
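A minimal illustration of one common illumination-correction strategy (assumed here for concreteness; the referenced work may use a different correction model) is to estimate the slowly varying illumination with a large-sigma Gaussian blur and divide it out:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(gray, sigma=60.0, eps=1e-6):
    """Estimate the slowly varying illumination of a grayscale image with a
    heavy Gaussian blur and divide it out, reducing shadows and bright spots."""
    illumination = gaussian_filter(gray, sigma=sigma)
    corrected = gray / (illumination + eps)
    return corrected / corrected.max()  # rescale back to [0, 1]
```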
2.1. Filtering:
A texture channel is obtained using Gaussian filtering, a color channel is obtained from the inverse of the red channel, and a third channel is created using PCA. For simplicity, this combination is referred to as the Otsu-PCA algorithm. All algorithms include additional postprocessing steps to clean up the contour.
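The channel construction can be sketched as follows; the exact definition of the texture channel is not given here, so the Gaussian-difference formulation below is an assumption, while the inverse-red and PCA channels follow the description above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA

def build_channels(rgb, sigma=5.0):
    """Build texture, color, and PCA channels from an RGB image in [0, 1]."""
    gray = rgb.mean(axis=2)
    # Texture channel: deviation of the image from its Gaussian-smoothed version
    # (an assumed formulation of "texture channel via Gaussian filtering")
    texture = np.abs(gray - gaussian_filter(gray, sigma=sigma))
    # Color channel: inverse of the red channel
    color = 1.0 - rgb[:, :, 0]
    # Third channel: first principal component of the RGB pixel values
    h, w, _ = rgb.shape
    pca_channel = PCA(n_components=1).fit_transform(rgb.reshape(-1, 3)).reshape(h, w)
    return texture, color, pca_channel
```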
3. Segmentation:
In image segmentation, an image is divided into several parts. Typically, this is used to
identify objects or other information in digital images.
4. Threshold:
A threshold divides the set of representative texture distributions into two classes, normal skin and lesion, based on the TD metrics. There are several ways to separate two classes in a one-dimensional set of features. In the TDLS algorithm, the threshold is chosen such that the total intra-class variance of the TD metric is minimized.
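The Otsu-style threshold selection can be illustrated with the sketch below, which scans candidate cut points over a one-dimensional set of TD metric values and keeps the one with the lowest weighted intra-class variance (the candidate grid size is an arbitrary choice):

```python
import numpy as np

def otsu_like_threshold(values, n_candidates=256):
    """Pick the cut point that minimizes the total (weighted) intra-class
    variance of a 1-D set of TD metric values."""
    values = np.asarray(values, dtype=float)
    candidates = np.linspace(values.min(), values.max(), n_candidates)[1:-1]
    best_t, best_var = candidates[0], np.inf
    for t in candidates:
        lo, hi = values[values < t], values[values >= t]
        if lo.size == 0 or hi.size == 0:
            continue  # both classes must be non-empty
        intra = (lo.size * lo.var() + hi.size * hi.var()) / values.size
        if intra < best_var:
            best_var, best_t = intra, t
    return best_t
```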
5. Traced Area:
The traced area is produced by a neural network. A neural network, often referred to as an artificial neural network (ANN), is a computing system made up of processing elements called neurons, which process information through their dynamic state response to external inputs.
Neural networks are organized in layers: one input layer, one or more hidden layers, and an output layer. The hidden layers are made up of a number of neurons. Features or patterns are presented to the network via the input layer, which is connected to one or more of the hidden layers. The actual processing is done in the hidden layers, which are connected to the output layer.
The objective of this experiment is to measure the sensitivity, specificity, and accuracy of TDLS and TDLS with learning after each algorithm classifies every pixel as belonging to the normal skin class or the lesion class. A set of 30 images is used to test the segmentation and 70 images are used to train the neural network. Each algorithm is applied to the illumination-corrected images, and the resulting segmentation is compared to the manually drawn segmentation, which acts as ground truth.
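The three evaluation measures can be computed pixel-wise from the predicted and ground-truth masks, for example as in the sketch below (a straightforward confusion-matrix implementation, not code from the referenced work):

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask):
    """Pixel-wise sensitivity, specificity, and accuracy of a binary
    segmentation against a manually drawn ground-truth mask."""
    pred, gt = np.asarray(pred_mask, dtype=bool), np.asarray(gt_mask, dtype=bool)
    tp = np.sum(pred & gt)      # lesion pixels correctly labelled lesion
    tn = np.sum(~pred & ~gt)    # skin pixels correctly labelled skin
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / gt.size
    return sensitivity, specificity, accuracy
```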
Deep learning can be considered a subset of machine learning in which models learn and improve on their own. While machine learning uses simpler concepts, deep learning works with artificial neural networks, which are designed to imitate how humans think and learn. Until recently, neural networks were limited by computing power and thus limited in complexity. However, advances in big data analytics have permitted larger, more sophisticated neural networks, allowing computers to observe, learn, and react to complex situations faster than humans. Deep learning has aided image classification, language translation, and speech recognition, and it can be used to solve almost any pattern-recognition problem without human intervention.
Artificial neural networks, comprising many layers, drive deep learning. Deep neural networks (DNNs) are networks in which each layer can perform complex operations, such as representation and abstraction, that make sense of images, sound, and text. Considered the fastest-growing field in machine learning, deep learning represents a truly disruptive digital technology, and it is being used by increasingly more companies to create new business models.
Defining Neural Networks
A neural network is structured like the human brain and consists of artificial neurons, also known as nodes. These nodes are stacked next to each other in three layers: an input layer, one or more hidden layers, and an output layer.
Data provides each node with information in the form of inputs. The node multiplies the inputs by random weights, sums them, and adds a bias. Finally, nonlinear functions, also known as activation functions, are applied to determine which neurons fire.
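The per-node computation described above (weighted inputs plus a bias, passed through an activation function) can be sketched as follows; the ReLU activation and the three-input example are illustrative choices only.

```python
import numpy as np

def neuron_forward(inputs, weights, bias):
    """Single-node forward pass: weighted sum of the inputs plus a bias,
    followed by a nonlinear activation (ReLU used here as an example)."""
    z = np.dot(inputs, weights) + bias
    return np.maximum(0.0, z)

# Example with three inputs and randomly initialized weights
rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])
print(neuron_forward(x, rng.normal(size=3), rng.normal()))
```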
Convolutional Neural Networks (CNNs)
CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection. Yann LeCun developed the first CNN, called LeNet, in 1989; it was used for recognizing characters such as ZIP codes and digits. CNNs are widely used to identify satellite images, process medical images, forecast time series, and detect anomalies.
CNNs have multiple layers that process and extract features from data:
Convolution Layer
The convolution layer applies several filters to the input to perform the convolution operation; the resulting feature maps are passed through a rectifier (ReLU) activation.
Pooling Layer
The rectified feature map is then fed into a pooling layer. Pooling is a down-sampling operation that reduces the dimensions of the feature map. The pooling layer then converts the resulting two-dimensional arrays into a single, long, continuous, linear vector by flattening them.
Fully Connected Layer
A fully connected layer is formed when the flattened vector from the pooling layer is fed as its input; this layer classifies and identifies the images.
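A minimal Keras sketch of this convolution / pooling / flatten / fully connected sequence is shown below; the filter counts, kernel sizes, input shape, and the two-class output head are illustrative assumptions rather than the architecture used in this project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution layer with ReLU
    layers.MaxPooling2D((2, 2)),                   # pooling (down-sampling) layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # flatten pooled feature maps
    layers.Dense(64, activation="relu"),           # fully connected layer
    layers.Dense(2, activation="softmax"),         # e.g. normal skin vs lesion
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```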
A dermatoscope is a special device for assessing the risk of skin lesions [2]. Images
acquired through a digital dermatoscope are known as dermoscopy images. Early work
on automated systems to assess the risk of melanoma used dermoscopy images. There is
a need for a segmentation algorithm for dermoscopy images of skin lesions.
In digital dermatoscope images, noise levels are generally low and background
illumination is consistent. An optional preprocessing algorithm for dermatological
images is applied to normalize or enhance the colors of the images.
When doctors diagnose melanoma early, they can often treat it before it starts to spread (metastasize). But detecting melanomas can prove tricky because they don't all look the same. The first sign of melanoma is often a change in an existing mole, or you may notice a new or odd-looking growth on your skin. It is vital to check your skin often for new growths or any changes to existing moles. However, manual examination of skin lesions is time consuming.
SOCIAL IMPACT
Since it directly relates to patients' medical records, the project has an immediate social impact. Furthermore, this project greatly benefits the community because it enables skin cancer to be detected more quickly, which may prevent people from being diagnosed only in the last stage of the disease. Finally, by reducing their daily workload and letting them concentrate on the most important cases, this project could help health professionals and dermatologists save lives.
TECHNOLOGICAL IMPACT
With regard to technology, the project builds on other research papers and technological developments. Understanding the approach of dermatologists and attempting to replicate it using appropriate technology and algorithms was a challenge in this project.
ECONOMICAL IMPACT
The project could reduce the cost of diagnosing skin cancer: with the computer's support, doctors would be able to respond more quickly, allowing them to see more patients per day and resulting in a cheaper service. Moreover, by using the project, doctors would be less likely to biopsy patients' skin routinely, which would also reduce diagnosis costs. The reduction in price benefits the social good, because more people can afford the service. The use of computers and image processing in healthcare brings benefits, as they allow processes to be performed more quickly and at a lower cost.
ENVIRONMENTAL IMPACT
This is an environmentally friendly project. One of the main reasons is that it requires only an image of the lesion for diagnosis, which means that patients can send images from home to the doctor and wait for a response without using a car or public transportation to travel to the doctor's office. Bringing this project into practice would therefore help reduce the total amount of pollution from vehicle use. Computers do still consume electricity, but this method is not as polluting as going to the hospital.
POLITICAL IMPACT
LEGAL IMPACT
The project deals with patient health records and introduces the use of computers for the diagnosis of skin cancer. When this project is applied in a given country, there should be legislation that allows the use of such solutions. In addition, an accompanying document describing how image processing and machine learning are applied at the diagnosis phase should be agreed to and signed by the patient. This document should also serve as evidence that there is agreement on the use of this new diagnostic method for melanoma. Furthermore, the patient should know that the machine learning model may make errors, because it is impossible to achieve 100% accuracy.
ETHICAL IMPACT
This project is considered ethical for two different reasons. First of all, the images are anonymous: patients' private information is not taken into account in the software solution, and only the raw data already captured in the image is used. Secondly, the project aims at saving lives, which is an excellent goal.
Methodology-I identifies the melanoma stages based on the Total Dermoscopic Score (TDS) value using four features, namely Asymmetry, Border Irregularity, Color Variations, and Differential Structures. The Asymmetry variable has three different values: if there is no asymmetry on either axis, the score is 0; if asymmetry is present on a single axis, the score is 1; if asymmetry is present on both axes, the score is 2. Border is a numerical attribute with values from 0 to 8, where the lesion is partitioned into eight segments and the border of each segment is evaluated: a sharp, abrupt cut-off within one-eighth of the border contributes 1, and a gradual, indistinct cut-off within one-eighth contributes 0. The Color value is incremented according to the number of colors present in the skin lesion; black, blue, dark brown, light brown, red, and white are the possible colors of melanoma lesions. Pigment network, homogeneous areas, dots, globules, and streaks are the five differential structures to be considered. Methodology-II identifies the different types of melanoma: lentigo maligna melanoma, superficial spreading melanoma, acral lentiginous melanoma, mucosal melanoma, nodular melanoma, polypoid melanoma, desmoplastic melanoma, amelanotic melanoma, and soft-tissue melanoma. These are identified using features such as color, circularity, uniformity, solidity, correlation, homogeneity, texture, brightness, mean, skewness, kurtosis, variance, standard deviation, RMS, smoothness, compact index, edge abruptness, tumor thickness, level of invasion, and site (face, scalp, neck, ear). The location of appearance is also important in diagnosing the type of melanoma.
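Assuming the standard ABCD-rule weighting for the TDS (the report does not state the formula explicitly), the score can be computed as in the sketch below, where the 1.3/0.1/0.5/0.5 weights and the 5.45 decision point come from the ABCD rule of dermoscopy rather than from this project.

```python
def total_dermoscopic_score(asymmetry, border, n_colors, n_structures):
    """TDS under the standard ABCD-rule weights (assumed here):
    asymmetry in {0, 1, 2}, border in 0..8, up to 6 colors, up to 5 structures."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * n_colors + 0.5 * n_structures

# Example: asymmetry on both axes, 4 abrupt border segments, 3 colors, 2 structures
tds = total_dermoscopic_score(2, 4, 3, 2)
print(tds, "suspicious of melanoma" if tds > 5.45 else "lower risk")
```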
Two types of image augmentation were implemented in this work, one consisting of geometric transformations (GT) and the other focusing on image contrast random normalization (CRN). In order to evaluate the performance of the proposed FCN model, two experiments are conducted on PH2, an independent dermoscopic image database that is widely used for algorithm validation and benchmarking. In this paper, a novel loss function based on the Jaccard distance is designed to further boost segmentation performance. Compared to the conventionally used cross entropy, the Jaccard distance-based loss function directly maximizes the overlap between the foreground of the ground truth and that of the predicted segmentation mask, and thus eliminates the need for data re-balancing when the numbers of foreground and background pixels are highly unbalanced, as in binary medical image segmentation.
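A soft (differentiable) version of the Jaccard-distance loss described above can be written as follows; this is a common formulation with a smoothing constant, offered as a sketch rather than the exact loss used in the referenced paper.

```python
import tensorflow as tf

def jaccard_distance_loss(y_true, y_pred, smooth=1.0):
    """Soft Jaccard distance (1 - IoU) between a binary ground-truth mask and a
    predicted probability mask, both shaped (batch, H, W, 1), averaged over the batch."""
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.cast(y_pred, tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3]) - intersection
    jaccard = (intersection + smooth) / (union + smooth)
    return 1.0 - tf.reduce_mean(jaccard)
```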
The skin disease dataset includes seven categories covering all important diagnostic categories in the field of pigmented lesions: melanocytic nevi, melanoma, benign keratosis lesions, basal cell carcinoma, actinic keratosis and intraepithelial carcinoma, vascular lesions, and dermatofibroma. To improve accuracy and training speed, InceptionV3, ResNet50, and DenseNet201 with ImageNet-trained weights were transferred to train the models on a total of 7283 preprocessed skin lesion samples. Image augmentation is mainly carried out by random flipping, random horizontal and vertical offsets, shearing, and amplification (zoom).
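A hedged Keras sketch of this transfer-learning setup is given below, using DenseNet201 as the example backbone and an ImageDataGenerator for the listed augmentations; the augmentation ranges, input size, and single dense head are assumptions, not the project's exact configuration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras import layers, models

# Augmentation mirroring the operations listed above (flips, offsets, shear, zoom)
datagen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
)

# Transfer learning: ImageNet-pretrained backbone plus a 7-class classification head
base = DenseNet201(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.Dense(7, activation="softmax"),  # seven diagnostic categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```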
Different deep learning models were built using the same dataset with extensive data
augmentation. For melanoma detection, deep convolutional neural networks including
Inception-v4, ResNet-152 and DenseNet-161 were trained for melanoma classification
and seborrheic keratosis classification. For lesion segmentation, U-Net and U-Net with
VGG-16 Encoder were trained to produce segmentation masks. The proposed method
was evaluated on the ISIC 2017 Skin Lesion Analysis Towards Melanoma Detection
Challenge dataset.
In this method, it is proposed to train only the top layers of the network, because those layers contain more dataset-specific features. The first four convolutional layers are frozen and keep the pre-trained VGG-16 weights from the ImageNet dataset, while the last convolutional layers and the newly added fully connected layers are trained.
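One way to realize this fine-tuning scheme in Keras is sketched below: load ImageNet-pretrained VGG-16 weights, freeze the first four Conv2D layers, and train the remaining convolutional layers together with new fully connected layers; the head sizes and two-class output are illustrative assumptions.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze only the first four convolutional layers; later layers stay trainable
conv_seen = 0
for layer in base.layers:
    if isinstance(layer, layers.Conv2D):
        conv_seen += 1
    layer.trainable = conv_seen > 4

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # new fully connected layers
    layers.Dense(2, activation="softmax"),  # e.g. melanoma vs non-melanoma
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```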
I INTRODUCTION
1.1. GENERAL
1.2. SKIN CANCER
1.2.1. CAUSES OF SKIN CANCER
1.3. MELANOMA
1.3.1. SIGNS OF MELANOMA
1.3.2. CAUSES OF MELANOMA
1.4. DEEP LEARNING
4. Faaiza Rashid, Aun Irtaza, Nudrat Nida, Ali Javed, Hafiz Malik, Khalid
Mahmood Malik “Segmenting melanoma Lesion using Single Shot
Detector (SSD) and Level Set Segmentation Technique” 2019 13th
International Conference on Mathematics, Actuarial Science, Computer
Science and Statistics (MACS), DOI: 10.1109/MACS48846.2019.9024823