
ISSN No. 0976-5697
Volume 6, No. 1, Jan-Feb 2015
International Journal of Advanced Research in Computer Science
RESEARCH PAPER
Available Online at www.ijarcs.info

Skin Cancer Detection using Artificial Neural Network


Ekta Singhal
M.Tech II Year, Dept. of Computer Science Engineering
MUST – FET, Lakshmangarh, India

Shamik Tiwari
Assistant Professor, Dept. of Computer Science Engineering
MUST – FET, Lakshmangarh, India
Abstract: Skin cancer is the most prevalent cancer in the light-skinned population and is generally caused by exposure to ultraviolet light. In this paper, an automatic skin cancer classification system is developed, and the behaviour of skin cancer images across a neural network is studied with different types of pre-processing. The collected images are fed into the system, and image pre-processing is used for noise removal. The images are segmented using thresholding. Certain features are unique to skin cancer regions; these features are extracted using a feature extraction technique, namely multilevel 2-D wavelet decomposition. The extracted features are given to the input nodes of a neural network. A back-propagation neural network and a radial basis function neural network are used for classification, which categorizes the given images as cancerous or non-cancerous.

Keywords: Skin cancer; segmentation; thresholding; 2-D wavelet transform; BPN network; RBF network.

I. INTRODUCTION

Skin cancer is increasing in many countries, especially in Australia [5]. Skin cancer is the uncontrolled growth of abnormal skin cells. It is very dangerous, particularly when not treated at an early stage, and it is the most common of all cancer types; the number of cases has been rising over the past few years. Many skin cancers are caused by excessive exposure to ultraviolet (UV) rays [6]. Most of this exposure comes from the sun and from man-made sources [6]. The three most common types are:

Melanoma: Melanoma begins in melanocytes and can occur on any skin surface. It is rare in dark-skinned people. It is found on the skin of the head, the neck, between the shoulders, on the lower legs, on the palms of the hands, on the soles of the feet or under the fingernails [6].

Basal cell skin cancer: Basal cell skin cancer begins in the basal cell layer of the skin. It usually occurs in places that have been exposed to the sun [6]. Basal cell skin cancer is the most common type of cancer in fair-skinned people.

Squamous cell skin cancer: Squamous cell skin cancer begins in squamous cells. It is the most common type of skin cancer in dark-skinned people and is usually found in places that are not exposed to the sun, such as the legs or feet [6].

II. LITERATURE REVIEW

Image processing techniques provide an efficient tool for classifying cancer from images. In recent years, neural networks have also been used to detect cancer and obtain proficient results, and different authors have combined these technologies in different ways to reach better conclusions. Some of the work on skin cancer detection using image processing together with neural networks is discussed below.

Azadeh et al. [5] have carried out a survey of skin cancer detection. It describes how automatic detection of skin cancer can help to increase accuracy, reviews the techniques used in recent years for early detection, and describes the different steps of the process: image acquisition, image pre-processing, feature extraction, and classification. Sonali et al. [6] have implemented a simple algorithm for the detection of skin cancer that also finds the shape of the lesion and the infected area. In the first step, pre-processing is done using a median filter; in the second step, segmentation is done using two methods, thresholding and fuzzy c-means; and in the third step, feature extraction is done using the gray-level co-occurrence matrix (GLCM) and the contour signature. Ho Tak Lau et al. [7] have described an automatic skin cancer classification system and studied the behaviour of skin cancer images with different types of neural networks and different types of pre-processing: the image is acquired, image processing procedures are used to enhance the image properties, and useful information is then extracted from the image and passed to the classification system for training and testing. Abdul et al. [8] have recommended a technique for the early detection of skin cancer using an artificial neural network. The diagnostic methodology uses image processing techniques and artificial intelligence: a dermoscopy image of the skin cancer is taken and subjected to various pre-processing steps for noise removal and image enhancement, the image is then segmented using thresholding, features are extracted using the 2-D wavelet transform, and a back-propagation neural network is used for classification. Liu Jianli et al. [9] have reported work in which skin cancer is segmented by a genetic neural network. The segmentation speed of the genetic neural network is much higher than that of the standard BP neural network, and the skin cancer images segmented by the genetic neural network have continuous edges and clear contours, which can be used in the quantitative analysis and identification of skin cancer. Mariam A. Sheha et al. [10] have designed an automated method for melanoma diagnosis applied to a set of dermoscopy images. The extracted features are based on the gray-level co-occurrence matrix (GLCM), and a multilayer perceptron (MLP) classifier is used to discriminate between melanocytic nevi and malignant melanoma. The MLP classifier was proposed with two different techniques in the training and testing process: automatic MLP and traditional MLP.

Texture analysis is a useful method for discriminating melanocytic skin tumours with high accuracy. Sigurdur et al. [11] have designed a skin cancer classification approach based on in vitro Raman spectroscopy using a nonlinear neural network classifier. The classification framework is probabilistic and highly automated; it includes feature extraction for the Raman spectra and a fully adaptive and robust feed-forward neural network classifier, and the classification rules learned by the neural network may be extracted. Nilkamal et al. [12] have described past and present technologies for skin cancer detection along with their relevant tools, and designed a new approach for skin cancer detection and analysis from a given photograph of the patient's affected area, which can be used to automate diagnosis and therapeutic treatment of skin cancer. The proposed scheme uses the wavelet transform for image improvement, de-noising and histogram analysis, the ABCD rule (which has good diagnostic accuracy worldwide) as the basis of the diagnostic system, and finally a fuzzy inference system for the decision on skin type based on pixel colour severity and the final decision of benign or malignant skin cancer. Mahmoud et al. [13] have described two hybrid techniques for the classification of skin images. The proposed hybrid techniques consist of three stages, namely feature extraction, dimensionality reduction, and classification. In the first stage, features related to the images are obtained using the discrete wavelet transform. In the second stage, the features of the skin images are reduced using principal component analysis to the more essential ones. In the classification stage, two classifiers based on supervised machine learning are developed: the first is based on a feed-forward back-propagation artificial neural network and the second on k-nearest neighbours. Maryam et al. [14] have described how irregular streaks are important clues for melanoma diagnosis using dermoscopy images. Their work extends a previous algorithm that identifies the absence or presence of streaks in a skin lesion by further analyzing the appearance of detected streak lines and performing a three-way classification of streaks (absent, regular, and irregular) in a pigmented skin lesion. The directional pattern of the detected lines is analyzed to extract their orientation features in order to detect the underlying pattern, and the method uses a graphical representation to model the geometric pattern of valid streaks and the distribution and coverage of the structure. Robert Amelard et al. [15] have presented high-level intuitive features (HLIF) that measure the border irregularity of skin lesion images obtained with standard cameras. Existing feature sets have defined many low-level, unintuitive features; incorporating HLIFs into a set of low-level features gives more semantic meaning to the feature set and allows the system to provide an intuitive rationale for the classification decision. Promising experimental results show that adding a small set of HLIFs to a large state-of-the-art low-level skin lesion feature set increases sensitivity, specificity, and accuracy while decreasing the cross-validation error. Xiaojing Yuan et al. [16] developed a decision support system for early skin cancer detection that relies on analysis of the pigmentation characteristics of a skin lesion, detected using cross-polarization imaging, and on the increased vasculature associated with malignant lesions, detected using transillumination imaging. The system uses size differences based on lesion physiology and achieves good overall accuracy. It also explores texture information, one of the criteria dermatologists use in the diagnosis of skin cancer, which has been found very difficult to utilize in an automatic manner; the overarching goal is to improve the overall capability of the decision support system, and the objective is to use texture information only to classify a skin lesion as benign or malignant. A three-layer mechanism inherent to the support vector machine (SVM) methodology is employed to improve the generalization error rate and the computational efficiency. M. J. Ogorzałek et al. [17] show that computer-assisted techniques and image processing methods can be used for image filtering and for feature extraction and pattern recognition in the selected images. Apart from standard approaches based on geometrical features and colour/pattern analysis, they propose to enhance computer-aided diagnostic tools by adding non-standard image decompositions and applying classification techniques based on statistical learning and model ensembling; ensembles of classifiers based on the extended feature set show improved performance figures, suggesting that the proposed approach could be used as a powerful tool assisting medical diagnosis. Maglogiannis et al. [18] have reviewed the state of the art in such systems by first presenting the installation, the visual features used for skin lesion classification, and the methods for defining them. They then describe how to extract these features through digital image processing methods, i.e., segmentation, border detection, and colour and texture processing, and present the most prominent techniques for skin lesion classification. They also describe the statistics and results of the most important implementations that exist in the literature, compare the performance of several classifiers on the specific skin lesion diagnostic problem, and discuss the corresponding findings.

III. METHODOLOGY

Figure 1. Block diagram representation of the system.

Skin cancer detection consists of the following steps:
a. Image Acquisition: In the first step, the images are acquired.
b. Image Pre-processing: In the second step, pre-processing is done using a median filter. The median filter is used to remove unwanted hair, bubbles and noise from the images [6]. A skin cancer image usually contains fine hair, noise and bubbles [5]; these are not cancer factors, so they are removed using the median filter [6].
c. Segmentation: In the third step, segmentation is done using thresholding [7]. Thresholding is a technique for establishing boundaries in an image that contains a solid object resting on a contrasting background [6].


d. Feature Extraction: In the fourth step, features are extracted from the segmented image using a feature extraction technique [7]; the useful information is extracted using multilevel 2-D wavelet decomposition [8].
e. Artificial Neural Network Classifier: In the fifth step, this information is used in the classification system for training and testing [3]. Classification is done using a back-propagation neural network and a radial basis function neural network.
f. Output: In the last step, the data sets of cancerous and non-cancerous images are obtained.
IV. IMAGE PROCESSING ALGORITHMS

Pre-processing performs image processing on the original image [5]. The phases include conversion to a gray image and noise reduction; pre-processing is used to remove the unwanted features from the skin image [7].

In the first step, the true-colour RGB image is converted to a grayscale intensity image using the function rgb2gray().

In the second step, noise is removed using a median filter. The median filter preserves the amplitude and location of edges. It reduces the variance of the intensities in the image, which means that it smooths the image by taking the median of each pixel's neighbourhood [6].

Figure 2. Original image.

Figure 3. RGB to grayscale image.

Figure 4. Pre-processed image.

Figure 4 shows the image pre-processing using the median filter.
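As an illustration of this pre-processing stage, a minimal Python sketch is given below. It is not the implementation used in this paper (which relies on MATLAB's rgb2gray()); the scikit-image and SciPy functions and the 3x3 neighbourhood size are assumptions made for the example.

from scipy.ndimage import median_filter
from skimage import color, io

def preprocess(image_path, neighbourhood=3):
    # Step 1: convert the true-colour RGB image to a grayscale intensity image.
    gray = color.rgb2gray(io.imread(image_path))
    # Step 2: median-filter to suppress fine hair, bubbles and noise while
    # preserving the amplitude and location of edges.
    return median_filter(gray, size=neighbourhood)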

V. AREA OF INTEREST SEGMENTATION

The region of interest (ROI) is extracted by segmentation. Segmentation is done using thresholding [7]. The aim of the segmentation process is to divide the image into homogeneous, self-consistent regions; segmentation is the process of partitioning the image into groups of pixels that are homogeneous with respect to some criterion [6].

The thresholding technique produces segments whose pixels have similar intensities, and usually the cancer region remains in the image after segmentation. Thresholding is a technique for establishing boundaries in an image that contains solid objects resting on a contrasting background [6]. It converts an image into something like black text written on a white background and is the simplest method of image segmentation: from a grayscale image, thresholding can be used to create a binary image and to separate light and dark regions [8]. Image thresholding classifies pixels into two categories: those at which some property measured from the image falls below the threshold, and those at which the property equals or exceeds the threshold.

Figure 5. Thresholding segmentation.

In the skin cancer image, the cancer is clearly seen after median filtering, and the segmented image can thus be divided into two parts: the object part and the background part.
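A minimal sketch of this segmentation step is shown below. The paper does not state how the threshold value is chosen, so the use of Otsu's method and the assumption that the lesion is darker than the surrounding skin are both illustrative choices.

import numpy as np
from skimage.filters import threshold_otsu

def segment(gray_image):
    # Choose a global threshold from the image histogram (assumed: Otsu's method).
    t = threshold_otsu(gray_image)
    # Pixels below the threshold form the object (lesion); the rest is background.
    mask = gray_image < t
    object_part = np.where(mask, gray_image, 0.0)
    background_part = np.where(mask, 0.0, gray_image)
    return mask, object_part, background_part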
VI. WAVELET TRANSFORM

Wavelet transforms have become one of the most important and powerful tools of signal representation. Wavelet transforms are based on small waves, called wavelets [23]. The two-dimensional wavelet transform differs only slightly from the one-dimensional one: it can be obtained by simply multiplying the one-dimensional scaling and wavelet functions. The maximum level at which the wavelet transform can be applied depends on how many data points the data set contains, since there is a down-sampling by 2 from one level to the next [28]. The wavelet transform in two dimensions is used in image processing. The "mother wavelet" should be selected to better approximate and capture the transient spikes of the original signal; it determines not only how well we estimate the original signal but also the frequency spectrum of the denoised signal. We use eight mother wavelets: Haar, Sym4, Sym7, Db1, Db10, Bior1.3, Bior5.5 and Coif3.
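The multilevel 2-D decomposition described above can be reproduced with the PyWavelets library, as sketched below. This is an illustrative example rather than the authors' code; the wavelet names follow PyWavelets' naming for the families listed in the text.

import pywt

MOTHER_WAVELETS = ["haar", "sym4", "sym7", "db1", "db10", "bior1.3", "bior5.5", "coif3"]

def decompose(gray_image, wavelet="haar", level=2):
    # Each level halves the resolution (down-sampling by 2), so cap the level
    # at the maximum supported by the image size and the chosen filter length.
    w = pywt.Wavelet(wavelet)
    max_level = pywt.dwt_max_level(min(gray_image.shape), w.dec_len)
    # Returns [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)].
    return pywt.wavedec2(gray_image, wavelet=w, level=min(level, max_level))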


VII. FEATURE EXTRACTION

Important features of the image data are extracted from the segmented image [2]. The extracted features should be detailed enough for classification. The multilevel 2-D wavelet transform is used for feature extraction [3]. Two-dimensional wavelets are a natural extension of the one-dimensional case: a 2-D wavelet can be defined as the outer product of one-dimensional wavelets. Outer products work in a similar way to inner products; however, an inner product takes two vectors and combines them into a scalar, whereas an outer product works the other way, extrapolating a matrix from two vectors. Two levels of decomposition are used. At each level of decomposition, the wavelet transform of the primary image is divided into an approximation image and three detail images, which show the basic information and the vertical, horizontal and diagonal details, respectively [4].

Figure 6. Feature extraction: decomposition into a low-pass approximation image and horizontal, vertical and diagonal detail images at each level.

The features extracted using the wavelet transform are: mean, maximum, minimum, median, standard deviation and variance.

Mean:
\hat{f}(x, y) = \frac{1}{mn} \sum_{(s,t) \in S_{xy}} g(s, t)   (1)

Maximum:
\hat{f}(x, y) = \max_{(s,t) \in S_{xy}} \{ g(s, t) \}   (2)

Minimum:
\hat{f}(x, y) = \min_{(s,t) \in S_{xy}} \{ g(s, t) \}   (3)

Median:
\hat{f}(x, y) = \operatorname{median}_{(s,t) \in S_{xy}} \{ g(s, t) \}   (4)

If the total number of values n is odd, the median is
\text{Median} = \left( \frac{n+1}{2} \right)^{\text{th}} \text{term}   (5)

If the total number of values n is even, the median is
\text{Median} = \frac{ \left( \frac{n}{2} \right)^{\text{th}} \text{term} + \left( \frac{n}{2} + 1 \right)^{\text{th}} \text{term} }{2}   (6)

Variance:
s^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2   (7)

Standard deviation:
s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 }   (8)

where n is the number of data values, x_i is an observed value, \bar{x} is the sample mean, s is the sample standard deviation, g(s, t) is the original image pixel, and \hat{f}(x, y) is the resulting pixel value.
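Assuming the six statistics are computed for every sub-band of the two-level decomposition and concatenated into one vector per image (seven sub-bands x six statistics = 42 values, which is consistent with the 42-element input reported in Section X), a sketch of the feature extraction is:

import numpy as np
import pywt

def wavelet_features(gray_image, wavelet="haar", level=2):
    coeffs = pywt.wavedec2(gray_image, wavelet=wavelet, level=level)
    sub_bands = [coeffs[0]]                      # low-pass approximation image
    for cH, cV, cD in coeffs[1:]:                # detail images at each level
        sub_bands.extend([cH, cV, cD])
    features = []
    for band in sub_bands:
        b = band.ravel()
        # Eqs. (1)-(8): mean, maximum, minimum, median, standard deviation, variance.
        features += [b.mean(), b.max(), b.min(), np.median(b),
                     b.std(ddof=1), b.var(ddof=1)]
    return np.asarray(features)                  # 42 values for a two-level transform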
VIII. ARTIFICIAL NEURAL NETWORKS

An artificial neuron is basically an engineering approach to the biological neuron. A neural network is a massively parallel distributed processor made up of simple processing units that have a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: first, knowledge is acquired by the network from its environment through a learning process; second, interneuron connection strengths are used to store the acquired knowledge [3].

In a neural network, each node performs some simple computation, and each connection conveys a signal from one node to another, labelled by a number called the "connection strength".

Figure 7. A simple neuron.

Linear combination U_k:
U_k = \sum_{j} W_{kj} X_j   (9)

Induced local field V_k:
V_k = U_k + b_k   (10)

The activation function defines the value of the output Y_k:


Y_k = \varphi(V_k)   (11)

There are different types of activation functions in neural networks, such as the threshold activation function, the piecewise linear activation function, the sigmoid activation function, and the signum activation function. Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. The system begins learning from some initial weight values; as the learning process progresses, the weight values keep changing and provide the final output at the end [24].
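Equations (9)-(11) amount to a weighted sum plus a bias passed through an activation function. A minimal sketch of one neuron follows; the sigmoid used here is only one of the activation functions mentioned above and is an assumption for the example.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, b, phi=sigmoid):
    u = np.dot(w, x)   # linear combination U_k, eq. (9)
    v = u + b          # induced local field V_k, eq. (10)
    return phi(v)      # output Y_k = phi(V_k), eq. (11)

# Example: neuron_output(np.array([0.2, 0.7]), np.array([0.5, -0.3]), 0.1)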
IX. BACK-PROPAGATION ALGORITHM

The back-propagation neural network is one of the most common neural network structures, as it is simple and effective. Back-propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks with nonlinear differentiable transfer functions. The hidden and output layer nodes adjust the weight values depending on the error in classification; the modification of the weights follows the gradient of the error curve, which points in the direction of the local minimum. A BPNN works well for prediction and classification, but its processing speed is slower compared with other learning algorithms [3].

Figure 8. A feed-forward neural network.

In the first step, the weights are initialized by setting all the weights and node thresholds to small random values. In the second step, the activations are calculated.

Input unit: the activation level of an input unit is determined by the instance presented to the network.

Hidden and output units: the activation level O_j of a hidden or output unit is determined by
O_j = F\left[ \sum_{i} W_{ji} O_i - \theta_j \right]   (12)
where W_{ji} is the weight from input O_i to unit j, \theta_j is the node threshold at unit j, and F is the activation function.

In the third step, weight training is done: starting at the output units and working backward to the hidden layer, the weights are recursively adjusted by
W_{ji}(t+1) = W_{ji}(t) + \Delta W_{ji}   (13)

The weight change is computed by
\Delta W_{ji} = \eta \, \delta_j \, O_i   (14)
where \eta is the learning rate and \delta_j is the error gradient.

The error gradient is given for an output unit by
\delta_j = O_j (1 - O_j)(T_j - O_j)   (15)
and for a hidden unit by
\delta_j = O_j (1 - O_j) \sum_{k} \delta_k W_{kj}   (16)
where T_j is the target value, O_j is the actual output value, and \delta_k is the error gradient at a unit k to which a connection from unit j points.

In the fourth step, the iterations are repeated until convergence.
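The four training steps and equations (12)-(16) correspond to the sketch below for a network with one hidden layer of sigmoid units and a single output. It is a didactic illustration under assumed details (sigmoid activation, per-sample updates), not the exact implementation used in this paper.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_bpnn(X, T, n_hidden=20, eta=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialise all weights and node thresholds to small random values.
    W1 = rng.normal(0.0, 0.1, (n_hidden, X.shape[1]))
    th1 = rng.normal(0.0, 0.1, n_hidden)
    W2 = rng.normal(0.0, 0.1, n_hidden)
    th2 = rng.normal(0.0, 0.1)
    for _ in range(epochs):                      # Step 4: repeat until convergence.
        for x, t in zip(X, T):
            # Step 2: activations, eq. (12): O_j = F(sum_i W_ji * O_i - theta_j).
            h = sigmoid(W1 @ x - th1)
            o = sigmoid(W2 @ h - th2)
            # Step 3: error gradients, eqs. (15)-(16), then updates, eqs. (13)-(14).
            delta_o = o * (1.0 - o) * (t - o)
            delta_h = h * (1.0 - h) * (W2 * delta_o)
            W2 += eta * delta_o * h
            th2 -= eta * delta_o
            W1 += eta * np.outer(delta_h, x)
            th1 -= eta * delta_h
    return W1, th1, W2, th2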
X. EXPERIMENTAL RESULT

In the proposed system, a feed-forward multilayer network is used, and the back-propagation algorithm is used for training. There must be an input layer, at least one hidden layer and an output layer. The hidden and output layer nodes adjust the weight values depending on the classification error: in a BPN the signal flows in the feed-forward direction, but the error is back-propagated and the weights are updated to reduce it. The database images are taken from medicinenet.com. In this paper we take a database of 100 images, of which 50 are cancerous and 50 are non-cancerous. In this case the input is of size 42 x 100, with a hidden layer of 20 neurons and one output layer. The neural network pattern recognition technique is used to find the accuracy of training, validation and testing. The database is divided into three sets: 50 images for training, 30 images for validation and 20 images for testing.

To compare different wavelets, eight different wavelets are used in this paper, followed by classification tests to compare them and find the best results.

Table 1. Classification test with different wavelets (BPNN).

Wavelet   Training   Validation   Testing   Total
Sym4      94%        85%          73.3%     86%
Sym7      98%        85%          76.7%     89%
Db1       100%       95%          76.7%     92%
Db10      100%       90%          80%       92%
Bior3.1   100%       90%          70%       89%
Bior5.5   98%        80%          73.7%     87%
Coif3     94%        90%          83.3%     90%
Haar      100%       96.7%        75%       94%

\text{Accuracy} = \frac{c}{n} \times 100   (17)

where c is the total number of correct attempts and n is the total number of attempts.
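The data split and the accuracy measure of eq. (17) can be reproduced as in the following sketch. It mirrors the 50/30/20 split, the 42-element feature vectors and the 20-neuron hidden layer described above, but the use of scikit-learn's MLPClassifier in place of the MATLAB pattern-recognition network, and all unstated hyper-parameters, are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def evaluate(features, labels, seed=0):
    # features: (100, 42) array; labels: (100,) array, 1 = cancerous, 0 = non-cancerous.
    order = np.random.default_rng(seed).permutation(len(labels))
    train, val, test = order[:50], order[50:80], order[80:]
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed)
    net.fit(features[train], labels[train])
    def accuracy(idx):
        # Eq. (17): number of correct attempts / total number of attempts * 100.
        return 100.0 * np.mean(net.predict(features[idx]) == labels[idx])
    return accuracy(train), accuracy(val), accuracy(test)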


Figure 9. Training confusion matrix for the Haar wavelet.

Figure 10. Validation confusion matrix for the Haar wavelet.

Figure 11. Test confusion matrix for the Haar wavelet.

Figure 12. All confusion matrix for the Haar wavelet.

Figure 13. Performance plot for the system.

Figure 13 does not indicate any major problems with the training: the validation and test curves are very similar. If the test curve had increased significantly before the validation curve increased, it would be possible that some overfitting had occurred. The figure below shows the error plot for the samples provided for training, validation and testing.

Figure 14. Error plot for the system.

The blue bars represent training data, the green bars represent validation data, and the red bars represent testing data. The histogram can give an indication of outliers, which are data points where the fit is significantly worse than for the majority of the data.

We use a multilevel decomposition. To compare different levels of decomposition, one to five levels are used in this paper, followed by classification tests to compare them and find the best results.


Table 2. Classification results with different decomposition levels (BPNN).

Level    Training   Validation   Testing   Total
First    90%        73.3%        95%       86%
Second   100%       96.7%        75%       94%
Third    96%        76.7%        85%       88%
Fourth   100%       93.3%        75%       93%
Fifth    98.0%      80.0%        80.0%     89%

XI. RADIAL BASIS FUNCTION NETWORKS

Radial basis function (RBF) networks are a type of artificial neural network for problems of supervised learning. The two main advantages of this approach are that the mathematics stays simple and the computations are relatively cheap. A radial basis function network (RBFN) consists of three layers: an input layer, a hidden layer and an output layer [26]. The hidden units provide a set of functions that constitute an arbitrary basis for the input patterns; they are known as radial centres and are represented by the vectors c_1, c_2, ..., c_h. An RBFN can be regarded as a special multilayer perceptron (MLP) because it combines a parametric statistical distribution model and a nonparametric linear perceptron algorithm in serial sequence. The kernel layer consists of a set of kernel basis functions called radial basis functions [27]. Radial basis functions are a means of approximating multivariable functions by linear combinations of terms based on a single univariate function, which is radialised so that it can be used in more than one dimension. They are usually applied to approximate functions or data that are known only at a finite number of points (or are too difficult to evaluate otherwise), so that evaluations of the approximating function can then take place often and efficiently [26]:

s(x) = \sum_{j=1}^{m} \lambda_j \, \varphi\left( \left\| x - x_j \right\| \right), \quad x \in \mathbb{R}^n   (18)

where the x_j are the data points, x is a free variable, \varphi is a univariate function, and the \lambda_j are scalar parameters.
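Equation (18) describes an approximation by a weighted sum of radially symmetric basis functions centred at the data points. A small sketch of this idea is given below; the Gaussian basis function and its width are assumptions, since the paper does not state which univariate function is used.

import numpy as np

def gaussian(r, width=1.0):
    return np.exp(-(r / width) ** 2)

def rbf_fit(centres, targets, phi=gaussian):
    # Solve for the weights lambda_j in eq. (18) so that s(x_i) = targets_i at the centres.
    distances = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    return np.linalg.solve(phi(distances), targets)

def rbf_eval(x, centres, weights, phi=gaussian):
    # s(x) = sum_j lambda_j * phi(||x - x_j||)
    r = np.linalg.norm(centres - x, axis=-1)
    return float(weights @ phi(r))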
In this paper we take a database of 100 images, of which 50 are cancerous and 50 are non-cancerous; 50 images are taken for training and 50 for testing. To compare different wavelets, eight different wavelets are used, followed by classification tests to compare them and find the best results.

Table 3. Classification test with different wavelets (RBFNN).

Wavelet   Training   Testing   Total
Sym4      88%        90%       87%
Sym7      90%        90%       87%
Db1       84%        92%       88%
Db10      84%        86%       88%
Bior3.1   84%        90%       86%
Bior5.5   90%        90%       94%
Coif3     82%        92%       91%
Haar      84%        92%       88%

XII. RESULT

Figure 15. Training confusion matrix for the bior5.5 wavelet.

Figure 16. Test confusion matrix for the bior5.5 wavelet.

Figure 17. All confusion matrix for the bior5.5 wavelet.

We use a multilevel decomposition. To compare different levels of decomposition, one to five levels are used, followed by classification tests to compare them and find the best results.

Table 4. Classification results with different decomposition levels (RBFNN).

Level    Training   Testing   Total
First    92%        88%       84%
Second   90%        90%       94%
Third    90%        90%       87%
Fourth   84%        84%       86%
Fifth    88.0%      88.0%     87%


XIII. CLASSIFICATION RESULT

In this paper we constructed two types of neural network, a back-propagation neural network (BPNN) and a radial basis function neural network (RBFNN), and both achieved good results. A BPNN-based network can reach very high results, but the RBFNN-based network has good generalization ability, so the BPNN-based network can be used as a simulation of the process for exploring new algorithms. The feed-forward back-propagation (or simply back-propagation) neural network is a simple and effective ANN model. It contains three layers: input, hidden and output. Its structure is multilayer and it has a learning process. The radial basis function neural network (RBFNN) is one of the efficient artificial networks; these networks are mostly used for function approximation. Unlike a BPNN, the structure of an RBFNN has only one hidden layer, which keeps the computation time very low. The RBFNN is a type of multilayer network [27] and differs from the BPNN in its training algorithm. The basic RBFNN structure consists of three layers: an input layer, a kernel (hidden) layer, and an output layer. The RBFNN can overcome some of the limitations of the BPNN because it can use a single hidden layer to model any nonlinear function; therefore, it is able to train on data faster than the BPNN. While the RBFNN has a simpler architecture, it still maintains its powerful mapping capability.

Figure 18. Comparison between RBFNN and BPNN using multiple wavelets.

Figure 19. Comparison between RBFNN and BPNN using multiple decomposition levels.

XIV. CONCLUSION

Skin cancer is a very dangerous disease, so its early detection is necessary, but detecting skin cancer is a difficult task. The literature review shows that many techniques are used for the detection of skin cancer, but some limitations also exist. Our proposed method follows an approach in which the first step is feature extraction, and these features are then used to train and test the neural network; the wavelet transform is used to extract the features of the images. From the results, the proposed technique successfully detects skin cancer from images. Our proposed method gives 92% accuracy with the BPNN and 88% accuracy with the RBFNN using the Haar wavelet. If skin cancer is detected correctly in its early stages, the chance of survival increases. In the future this technique can be extended to detect the type of skin cancer in cancerous images.

XV. REFERENCES

[1]. http://www.skincancer.org
[2]. http://www.medicinenet.com
[3]. Haykin, Simon. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, 1994.
[4]. Kopf, Alfred W. "Prevention and early detection of skin cancer/melanoma." Cancer 62.S1 (1988): 1791-1795.
[5]. Hoshyar, Azadeh Noori, Adel Al-Jumaily, and Riza Sulaiman. "Review on automatic early skin cancer detection." Computer Science and Service System (CSSS), 2011 International Conference on. IEEE, 2011.
[6]. Jadhav, Sonali Raghunath, and D. K. Kamat. "Segmentation based detection of skin cancer." IRF International Conference, 20 July 2014.
[7]. Lau, Ho Tak, and Adel Al-Jumaily. "Automatically Early Detection of Skin Cancer: Study Based on Neural Network Classification." Soft Computing and Pattern Recognition, 2009. SOCPAR'09. International Conference of. IEEE, 2009.
[8]. Jaleel, J. Abdul, Sibi Salim, and R. B. Aswin. "Artificial Neural Network Based Detection of Skin Cancer." International Journal of Advanced Research in Electronics and Instrumentation Engineering 1.3 (2012).
[9]. Jianli, Liu, and Zuo Baoqi. "The segmentation of skin cancer image based on genetic neural network." Computer Science and Information Engineering, 2009 WRI World Congress on. Vol. 5. IEEE, 2009.
[10]. Sheha, Mariam A., Mai S. Mabrouk, and Amr Sharawy. "Automatic detection of melanoma skin cancer using texture analysis." International Journal of Computer Applications 42 (2012).
[11]. Sigurdsson, Sigurdur, et al. "Detection of skin cancer by classification of Raman spectra." Biomedical Engineering, IEEE Transactions on 51.10 (2004): 1784-1793.
[12]. Ramteke, Nilkamal S., and Shweta V. Jain. "Analysis of Skin Cancer Using Fuzzy and Wavelet Technique – Review & Proposed New Algorithm." International Journal of Engineering Trends and Technology (IJETT) 4.6 (2013).
[13]. Elgamal, Mahmoud. "Automatic Skin Cancer Images Classification." International Journal of Advanced Computer Science & Applications 4.3 (2013).


[14]. Sadeghi, Maryam, et al. "Detection and analysis of irregular streaks in dermoscopic images of skin lesions." IEEE Trans. Med. Imaging 32.5 (2013): 849-861.
[15]. Amelard, R., A. Wong, and D. A. Clausi. "Extracting morphological high-level intuitive features (HLIF) for enhancing skin lesion classification." Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE. IEEE, 2012.
[16]. Yuan, Xiaojing, et al. "SVM-based texture classification and application to early melanoma detection." Engineering in Medicine and Biology Society, 2006. EMBS'06. 28th Annual International Conference of the IEEE. IEEE, 2006.
[17]. Ogorzałek, M. J., et al. "New Approaches for Computer-Assisted Skin Cancer Diagnosis." The Third International Symposium on Optimization and Systems Biology, Zhangjiajie, China, Sept. 2009.
[18]. Maglogiannis, Ilias, and Charalampos N. Doukas. "Overview of advanced computer vision systems for skin lesions characterization." Information Technology in Biomedicine, IEEE Transactions on 13.5 (2009): 721-733.
[19]. Gonzalez, R. C., and R. E. Woods. Digital Image Processing. 3rd Edn., Prentice Hall, Inc., New Jersey, 2008. ISBN-10: 013168728X, pp. 594.
[20]. Zouridakis, G., et al. "Transillumination imaging for early skin cancer detection." Biomedical Imaging Lab, Department of Computer Science, University of Houston, Houston, TX, available online, XP002539353 (2005).
[21]. Chiem, Andy, Adel Al-Jumaily, and Rami N. Khushaba. "A novel hybrid system for skin lesion detection." Intelligent Sensors, Sensor Networks and Information, 2007. ISSNIP 2007. 3rd International Conference on. IEEE, 2007.
[22]. Patwardhan, S. V., S. Dai, and A. P. Dhawan. "Multi-spectral image analysis and classification of melanoma using fuzzy membership based partitions." Comput. Med. Imag. Graph. 29 (2005): 287-296.
[23]. Tiwari, Shamik, et al. "Blur Classification Using Wavelet Transform and Feed Forward Neural Network." International Journal of Modern Education and Computer Science (IJMECS) 6.4 (2014): 16.
[24]. Tiwari, Shamik, et al. "Texture features based blur classification in barcode images." International Journal of Information Engineering and Electronic Business (IJIEEB) 5.5 (2013): 34.
[25]. Tiwari, Shamik, et al. "A Blind Blur Detection Scheme Using Statistical Features of Phase Congruency and Gradient Magnitude." Advances in Electrical Engineering 2014 (2014).
[26]. Tiwari, Shamik, et al. "Blind Restoration of Motion Blurred Barcode Images using Ridgelet Transform and Radial Basis Function Neural Network." Electronic Letters on Computer Vision and Image Analysis 13.3 (2014): 63-80.
[27]. Orr, Mark J. L. "Introduction to radial basis function networks." (1996).
[28]. Lin, Pao-Yen. "An introduction to wavelet transform." Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, ROC.
