Detection of Diabetic Retinopathy Using CLAHE-Based Transfer Learning
https://doi.org/10.22214/ijraset.2022.47054
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue X Oct 2022- Available at www.ijraset.com
Abstract: In recent years there has been a noticeable increase in the number of people diagnosed as blind, and many of these individuals had long-term diabetes. Persons who have had diabetes for a long time may develop Diabetic Retinopathy (DR), an eye disease that occurs when high blood sugar levels damage the blood vessels of the retina. These vessels can swell and leak, and all of these factors can ultimately cause blindness. According to 2021 WHO (World Health Organization) figures cited in the article "Global Prevalence of Diabetic Retinopathy and Projection of Burden through 2045", DR affects roughly 117.12 million people. Diabetic retinopathy is divided into five categories: no DR, mild non-proliferative DR (NPDR), moderate NPDR, severe NPDR, and proliferative DR (PDR). The blindness caused by long-term diabetes can be prevented if DR is detected at an early stage. In this work we employ a pre-processing approach called Contrast Limited Adaptive Histogram Equalization (CLAHE) together with CNN transfer-learning models such as AlexNet, ResNet, and Train net to classify retinal fundus images. Based on the predicted output, eyes are classified as normal, mild NPDR, moderate NPDR, or severe NPDR. The evaluation results reveal that the proposed CLAHE pre-processing step is effective in removing noise and redundant features from an input image, and when the pre-processed images are fed into the EfficientNet model the system achieves 96 percent accuracy.
Keywords: Image pre-processing, CLAHE algorithm, Transfer learning, CNN
I. INTRODUCTION
Diabetes has always been a very common condition, caused by high blood sugar. Diabetic Retinopathy is an eye condition that develops in people who have had diabetes for a long time; it occurs when sugar obstructs the small blood vessels of the retina, causing them to leak fluid. According to the article Global Prevalence of Diabetic Retinopathy and Projection of Burden through 2045, the number of adults affected by diabetic retinopathy in 2021 is expected to reach 117.12 million. In severe cases DR can result in blindness, and it has therefore become one of the most common causes of blindness. The likelihood of developing diabetic retinopathy is proportional to the duration of the disease. Beyond the normal eye, DR is classified into four types: mild NPDR, moderate NPDR, severe NPDR, and proliferative diabetic retinopathy (PDR). DR is thought to have afflicted 93 million people worldwide, and it affects about 40% of people with long-term diabetes. The blindness caused by prolonged diabetes can be prevented by detecting DR at an early stage; achieving this requires technologies that are freely available and widely accessible.
Asra Momeni et al. [21] (2020) proposed a diabetic retinopathy monitoring model that uses the Contrast Limited Adaptive Histogram Equalization method to improve image quality and equalize intensities uniformly as the pre-processing step, followed by the EfficientNet-B5 architecture for the classification step. Yitian Zhao et al. [11] (2019) presented CLAHE as a pre-processing technique for amplifying the vessels in retinal fundus images; by increasing the contrast, the important information inside the images is enhanced. The efficiency of the EfficientNet network lies in uniformly scaling all dimensions of the network.
II. RELATED WORKS
Lei Zhang et al. [1] (2020) worked on the Diabetic Retinopathy Detection competition held on the Kaggle platform. The images were classified into five categories: No DR, Mild DR, Moderate DR, Severe DR, and Proliferative DR. The work uses migration (transfer) learning, one of the newer machine learning approaches; of the four different ways of implementing this method, they used feature-based transfer learning.
The pre-training model they used is based on ImageNet. Data augmentation was required, which involved the ImageDataGenerator class provided by Keras. They used the pre-trained model for the feature extraction needed to build a final model for detecting DR; the accuracy obtained with ResNet50 was 0.50. This paper adopts the migration-learning approach, fine-tuning a Keras built-in pre-trained model on the new dataset to achieve classification according to the degree of diabetic retinopathy.
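As a minimal sketch of this style of Keras transfer learning (not the authors' exact code), the ImageDataGenerator class supplies the augmentation and a built-in ImageNet-pre-trained ResNet50 is fine-tuned for the five DR grades; the directory name and the hyperparameters below are illustrative assumptions.

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

# Augmentation via Keras ImageDataGenerator (rotation and flips are assumed choices)
train_gen = ImageDataGenerator(rescale=1.0 / 255,
                               rotation_range=20,
                               horizontal_flip=True,
                               validation_split=0.2)

train_flow = train_gen.flow_from_directory("fundus_images/",   # hypothetical folder layout
                                           target_size=(224, 224),
                                           batch_size=32,
                                           class_mode="categorical",
                                           subset="training")

# Built-in pre-trained model with ImageNet weights, used as a frozen feature extractor
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),  # No DR, Mild, Moderate, Severe, Proliferative
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_flow, epochs=10)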
Idowu Paul Okuwobi et al. [2] (2017) mentioned that the database used was the dataset provided by Kaggle for the Diabetic Retinopathy Detection competition. They followed a two-step process consisting of data pre-processing with augmentation followed by a convolutional network. The network itself was organised as a five-step process involving convolutional layers (the basic building block of a convolutional neural network, which extracts the weights/features), a pooling layer, a fully connected layer, and a logistic classifier whose output values represent the probability of each class. The accuracy obtained was 74%.
Lama Seoud et al. [3] (2015) proposed the Prognosis of Microaneurysm and early diagnostics system for Non-Proliferative Diabetic Retinopathy (PMNPDR), capable of effectively creating DCNNs for the semantic segmentation of fundus images, which can improve NPDR detection efficiency and accuracy. A sparse Principal Component Analysis based unsupervised classification approach for detecting microaneurysms was developed. Once a model that represents microaneurysms (MAs) has been built, any deviation from the standard MA is detected by statistical monitoring, and sparse Principal Component Analysis is employed to find the latent structure of the microaneurysm data.
Shanshan Zhe et al. [4] (2017) proposed EyeFundusScopeNEO, a tele-ophthalmology system that supports opportunistic and planned screening of Diabetic Retinopathy in primary care. Preliminary tests show that the device is safe, that clinicians can acquire images quickly after reduced training, and that the non-mydriatic system focuses appropriately on eyes with different dioptres. The characteristics of the system address long-standing issues of current screening programmes and are expected to increase the reach of screening once the system is deployed.
Keerthi Ram et al. [5] (2019) described how the association between retinal vascular tortuosity, type 2 diabetes, and DR severity was investigated in a Chinese population-based cohort. Higher-contrast retinal photographs taken by a confocal scanning laser ophthalmoscope were used to extract retinal arteriolar and venular tortuosity from both main and branching vessels, giving the study higher reliability. Arteriolar and venular tortuosity increased with increasing DR severity; diabetic patients with more tortuous venules were more likely to suffer from moderate NPDR, severe NPDR, and PDR, whereas those with more tortuous arterioles were more likely to suffer from severe NPDR and PDR, indicating that retinal vascular tortuosity might be a remarkable indicator of the retinopathy.
Harihar Narasimha-Iyer et al. [6] (2018) emphasized the necessity for an automated technique for NPDR retinal image classification. Moreover, this road map reminds us that evaluation of the vessel skeleton could in principle be a promising approach for obtaining fractal features such as information and correlation dimensions, giving good insight into the existence of gaps and the bifurcation points.
Romany F. Mansour et al. [7] (2020) proposed a model with a Siamese-like architecture that accepts binocular fundus images as inputs and predicts the possibility of referable DR (RDR) for each eye by utilizing the physiological and pathological correlation between both eyes. The evaluation results show that the proposed binocular model achieves high performance, with an AUC of 0.951, a sensitivity of 82.2% at the high-specificity operating point, and a specificity of 70.7% at the high-sensitivity operating point, outperforming an existing monocular model based on the Inception V3 network. These results also demonstrate that the proposed model has great potential to assist ophthalmologists in diagnosing RDR more efficiently and improving the screening rate of RDR.
Maziyar M. Khansari et al. [8] (2019) proposed a data augmentation scheme to compensate for the lack of PDR cases in DR-labeled datasets. It builds upon a heuristic-based algorithm for generating neovessel-like structures, which relies on general knowledge of the common location and shape of these structures. Neovessels (NVs) that present an unusual shape or are too slight are still missed by the model, likely due to their lack of representation in the generated dataset. This study shows the potential of introducing NVs into retinal images to improve the detection of these proliferative DR signs, thus improving the performance of computer-aided DR grading methods and easing their clinical adoption.
Yizhou et al. [9] (2014) focused on classifying all the stages of DR and proposed a CNN ensemble-based framework to detect and classify the different stages of DR in colour fundus images. Results show that the proposed ensemble model performs better than other state-of-the-art methods and is also able to detect all the stages of DR. To further increase early-stage accuracy, they plan to train specific models for specific stages and then ensemble the outcomes.
Juan Wang et al. [10] (2018) proposed a model that redesigns the network structure of the traditional LeNet model, adding a batch-normalization (BN) layer to obtain a new model, BNCNN, which effectively prevents gradient diffusion, accelerates training, and improves model accuracy. The paper addresses how one-dimensional, seemingly unrelated data can be convolved, breaking the traditional specificity of CNNs to the image domain, and obtains good prediction results. The study provides a basis for early diagnosis of diabetic retinopathy complications and optimization of diagnostic procedures; it combines deep learning with electronic medical record information and achieves good results.
Yitian Zhao et al. [11] (2019) presented CLAHE as a useful pre-processing technique for amplifying the vessels in retinal fundus images; by increasing the contrast, the important information inside the images is enhanced. The new EfficientNet-B5 architecture is then employed for the classification step. The efficiency of this network lies in uniformly scaling all dimensions of the network.
Jianwu Wang et al. [12] (2016) described how the largest publicly available datasets of eye fundus images (the EyePACS and APTOS datasets) were used to train and evaluate the developed model. A limitation of the developed approach, commonly encountered in deep learning models, is the comprehensiveness of the datasets used and the training time associated with a very large number of images. However, once the model is trained, it classifies a test or unknown image in a short time (<0.5 s). A possible future extension of this work is a real-time implementation of the model as a smartphone app, so that it can easily be deployed in clinical environments for diabetic retinopathy eye examination.
Doshi D. et al. [13] (2018) proposed introducing multi-scale shallow CNNs combined with performance integration for the early detection of Diabetic Retinopathy through the classification of retinal images. Owing to the feature sensing under various vision-related receptive fields by different base learners and the repeatable dataset sampling, it performs image classification well when there are not enough high-quality labelled samples. Moreover, the proposed approach also performs well on small datasets when considering both classification effect and efficiency compared with other approaches.
Pratt H. et al. [14] (2019) noted that natural images in ImageNet have structures different from those of fundus images; they adapted the hierarchical structure of a pre-trained CNN model to fundus images by re-initializing the filters of its CONV1 layer using lesion ROIs extracted from the annotated E-ophtha dataset and then fine-tuning it using the ROIs. To tune the network to high-level features, reduce model complexity, and avoid overfitting, they replaced the FC layers with a PCA layer learned using the ROIs.
Jude Hemanth D. et al. [15] (2019) proposed employing image processing with histogram equalization and contrast limited adaptive histogram equalization, after which diagnosis is performed by a convolutional neural network classifier. The method was validated using 400 retinal fundus images from the MESSIDOR database, and the average values obtained for the performance evaluation parameters were an accuracy of 97%, sensitivity (recall) of 94%, specificity of 98%, precision of 94%, F-score of 94%, and G-mean of 95%.
Yi-Wei Chen et al. [16] (2018) proposed a recognition pipeline based on deep convolutional neural networks. In this pipeline they designed lightweight networks called SI2DRNet-v, along with six methods to further boost detection performance. Without any fine-tuning, their recognition pipeline outperforms the state of the art on the Messidor dataset while using 5.26x fewer total parameters and 2.48x fewer total floating-point operations.
Uzair Ishtiaq et al. [17] (2020) reviewed DR detection systems and reported that, among image pre-processing techniques, contrast enhancement combined with green-channel extraction contributed the most to classification accuracy. Among features, shape-based, texture-based, and statistical features were reported as the most discriminative for DR detection. The Artificial Neural Network was a proven classifier compared to other machine learning classifiers, and in deep learning the Convolutional Neural Network outperformed other networks. Finally, accuracy, sensitivity, and specificity were the metrics most often employed to measure classification performance.
Sairaj Burewar et al. [18] (2018) aimed at detecting the various stages of Diabetic Retinopathy by using U-Net segmentation with region merging and a Convolutional Neural Network (CNN) to automatically diagnose and classify high-resolution retinal fundus images into 5 stages of the disease based on severity. A major difficulty of fundus image classification is high variability, especially in proliferative diabetic retinopathy, where there is retinal proliferation of new blood vessels and retinal detachment. Hence, proper analysis of the retinal vessels is required to obtain a precise result, which can be done through retinal segmentation, the process of automatically detecting the boundaries of blood vessels. The features lost during segmentation are retained during region merging and passed through the image classifier, with an accuracy of up to 93.33%.
Navoneel Chakrabarty et al. [19] (2019) proposed a method to automatically classify whether or not a patient has diabetic retinopathy, given any high-resolution fundus image of the retina. An initial image-processing step is applied to the images, mainly conversion of the coloured (RGB) images into greyscale and resizing. A deep learning approach is then applied in which the processed image is fed into a Convolutional Neural Network to predict whether the patient is diabetic or not. The methodology was applied to a dataset of 30 high-resolution fundus images of the retina, and the results obtained were a predictive accuracy of 100% and a sensitivity of 100%.
Abhishek Samanta et al. [20] (2020) proposed a transfer learning based CNN architecture for colour fundus photography that performs relatively well on a much smaller, class-skewed dataset of 3050 training images and 419 validation images, recognizing classes of Diabetic Retinopathy from hard exudates, blood vessels, and texture. The model is extremely robust and lightweight, with the potential to work considerably well in small real-time applications with limited computing power, speeding up the screening process. The dataset was trained on Google Colab. The model was trained on four classes - i) No DR, ii) Mild DR, iii) Moderate DR, iv) Proliferative DR - and achieved a Cohen's Kappa score of 0.8836 on the validation set and 0.9809 on the training set.
Asra Momeni et al. [21] (2020) proposed a new diabetic retinopathy monitoring model that uses the Contrast Limited Adaptive Histogram Equalization method to improve image quality and equalize intensities uniformly as the pre-processing step. The EfficientNet-B5 architecture is then used for the classification step; the efficiency of this network is in uniformly scaling all dimensions of the network.
Their final model is trained once on a mixture of two datasets, Messidor-2 and IDRiD, and evaluated on the Messidor dataset, where the area under the curve (AUC) is improved from 0.936, the highest value reported in recent works, to 0.945. To further evaluate performance, the model is also trained on a mixture of Messidor-2 and Messidor and evaluated on the IDRiD dataset; in this case the AUC is improved from 0.796, the highest value reported in recent works, to 0.932. In comparison to other studies, their proposed model improves the AUC.
A. System Architecture
The main idea of this project is to change the way Diabetic Retinopathy is detected, so that anyone who wants their eyes analyzed need not wait and can have them analyzed accurately. The main requirement for detecting Diabetic Retinopathy is a fundus image. We use a Kaggle dataset that contains over 8,000 fundus images of eyes. Each fundus image first undergoes a pre-processing stage using the CLAHE algorithm, in which unnecessary features such as noise are removed. The pre-processed image is then fed into transfer-learning networks such as AlexNet, EfficientNet, and ResNet. Transfer learning can take accurate models produced from large training datasets and apply them to smaller sets of images. The CLAHE algorithm is used to improve image quality; it is applied to the background and foreground to limit noise and enhance contrast.
B. Architectural Description
The retinal image dataset of an eye is presented as input in the architectural diagram of diabetic retinopathy identification (Figure 1). The CLAHE algorithm is then used to pre-process the image: unwanted elements such as noise and edge artifacts are removed, the image contrast is adjusted, and the image is resized. The pre-processed image is then fed into the CNN classifier. To extract features, various filters convolve the image. To reduce the size of the feature maps, a pooling layer is usually applied after the convolutional layer; there are several pooling strategies, the most common being average pooling and max pooling. Fully connected (FC) layers combine the extracted features to characterize the entire input image, and the SoftMax activation function is the most commonly used classification function. The predicted output is used to classify the retinal images, and the eye images are categorized as normal, mild NPDR, moderate NPDR, severe NPDR, or proliferative DR. The architectural description of diabetic retinopathy detection is given in Figure 1: the flow of this system begins when a retinal image dataset is fed into the CLAHE pre-processing algorithm, which removes the noise and redundant features present in the background of the picture for greater accuracy. The transfer-learning networks AlexNet, EfficientNet, and ResNet are then used to classify the pre-processed images, producing the five output categories: normal, mild NPDR, moderate NPDR, severe NPDR, and proliferative DR.
C. Preprocessing Module
The pre-processing stage is built around the CLAHE algorithm, whose two key parameters are the clip limit and the tile grid size. The image is divided into small tiles, and each tile is histogram-equalized as usual, so equalization is confined to a small local region. Because noise would be intensified if present, any histogram bin that exceeds the specified clip limit is clipped and its excess pixels are distributed evenly among the other bins. After equalization, bilinear interpolation is applied to remove artifacts at the tile boundaries. The CLAHE-processed images are then classified with the transfer-learning CNNs (AlexNet, EfficientNet, and ResNet) into the five groups: normal, mild NPDR, moderate NPDR, severe NPDR, and proliferative DR.
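As an illustration of this step, the following is a minimal OpenCV sketch of CLAHE pre-processing on a fundus image (not the exact code used in this work); the file names, the LAB colour-space choice, and the clipLimit/tileGridSize values are assumptions.

import cv2

img = cv2.imread("fundus.jpg")                           # hypothetical input file
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)               # equalize the luminance channel only
l, a, b = cv2.split(lab)

# clipLimit and tileGridSize are the two key CLAHE parameters discussed above
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                    # per-tile histogram equalization

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
enhanced = cv2.resize(enhanced, (224, 224))              # shrink to the CNN input size
cv2.imwrite("fundus_clahe.jpg", enhanced)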
1) CLAHE Algorithm
a) Input: The input given to the system is a retinal fundus image.
b) Output: After pre-processing and classification, the given image is categorized as no DR, mild DR, moderate NPDR, severe NPDR, or proliferative DR.
Step 1: Divide the original image intensity into non-overlapping contextual regions; the number of image tiles equals A*B.
Step 2: Calculate the histogram of each contextual region based on the gray levels in the image.
Step 3: Calculate the contrast-limited histogram of each contextual region using the clip-limit (CL) value:
Bavg = (BrX * BrY) / Bgray
where Bavg is the average number of pixels per gray level, Bgray is the number of gray levels, and BrX and BrY are the numbers of pixels in the X and Y dimensions of the region.
The actual clip limit is BCL = Bclip * Bavg, where Bclip is the normalized clip limit in the range [0, 1]. If the number of pixels in a bin is greater than BCL, those pixels are clipped. The total number of clipped pixels is Bsclip, and the average number of pixels to distribute to each gray level is
Bavggray = Bsclip / Bgray
The histogram clipping rule is:
Step 3.1: If Hreg(i) > BCL, then Hreg_clip(i) = BCL.
Step 3.2: Else, if Hreg(i) + Bavggray > BCL, then Hreg_clip(i) = BCL.
Step 3.3: Else, Hreg_clip(i) = Hreg(i) + Bavggray.
where Hreg and Hreg_clip are the original and clipped histograms.
Step 4: Redistribute the remaining clipped pixels, stepping through the bins with step D = Bgray / Bremain until all pixels are distributed, where Bremain is the number of remaining clipped pixels and D is a positive integer of at least 1.
Step 5: Enhance the intensity values in each region with the Rayleigh transform; the clipped histogram is transformed into a cumulative probability distribution.
Step 6: Calculate the new gray-level assignment of the pixels within each contextual region by interpolation.
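For concreteness, a NumPy sketch of the clip-and-redistribute rule of Steps 3-4 for a single contextual region is given below. It is an illustrative, simplified implementation rather than the exact code of this work: variable names mirror the text, the clip limit is clamped to at least one pixel, and only a single redistribution pass is shown for brevity.

import numpy as np

def clip_histogram(tile, b_clip=0.5, b_gray=256):
    """Steps 2-4 for one contextual region: histogram, clip at B_CL, redistribute."""
    h_reg, _ = np.histogram(tile, bins=b_gray, range=(0, b_gray))
    b_avg = tile.size / b_gray                      # B_avg = (B_rX * B_rY) / B_gray
    b_cl = max(b_clip * b_avg, 1.0)                 # actual clip limit B_CL = B_clip * B_avg

    b_sclip = np.maximum(h_reg - b_cl, 0.0).sum()   # total number of clipped pixels
    b_avggray = b_sclip / b_gray                    # average share per gray level

    # Steps 3.1-3.3: bins above B_CL (or pushed above it by the share) are capped at B_CL
    h_clip = np.where(h_reg > b_cl, b_cl, np.minimum(h_reg + b_avggray, b_cl))

    # Step 4: cycle leftover pixels through the bins with step D = B_gray / B_remain
    b_remain = int(round(b_sclip - (h_clip - np.minimum(h_reg, b_cl)).sum()))
    if b_remain > 0:
        d = max(b_gray // b_remain, 1)
        h_clip[::d][:b_remain] += 1                 # one redistribution pass (simplified)
    return h_clip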
2) Efficient Net
The retinopathy dataset shown in Figure 4 includes both infected and non-infected images; the infected data is further sorted into four severity categories. These datasets are used to train the model, which is then applied to new, unseen image sets. The input layer gathers and pre-processes all of the image datasets. The second layer is a 3x3 convolution, which yields a tensor of outputs by convolving the layer input with a convolution kernel. The images are then processed by several custom layers, with the output layer providing the final result.
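A hedged Keras sketch of such an EfficientNet-based classifier over the CLAHE-processed images is shown below; the choice of EfficientNetB5, the 456x456 input size, and the training settings are assumptions rather than the exact configuration used in this work.

import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB5
from tensorflow.keras import layers, models

# ImageNet-pre-trained backbone reused as a frozen feature extractor
base = EfficientNetB5(weights="imagenet", include_top=False,
                      input_shape=(456, 456, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),
    layers.Dense(5, activation="softmax"),   # normal, mild, moderate, severe, PDR
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds are hypothetical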
3) Alexnet
AlexNet is a deep convolutional network that was created to accommodate large colour images (224x224x3) and has over 62 million trainable parameters. There are 11 layers in all: 5 convolutional layers, 3 max-pooling layers, and 3 fully connected layers. The convolutional layers use kernels to perform convolution operations on the pre-processed image. A max-pooling layer then takes the first layer's output as its input, and similarly each subsequent layer uses the previous layer's output as its input. The fully connected layers produce the output tensor, which is passed through the softmax activation function; the softmax output contains the network's prediction.
E. Accuracy Prediction
Finally, using the methodologies described above, the class of the retinal image is predicted, and the eye is classified as normal, mild NPDR, moderate NPDR, severe NPDR, or proliferative, labelled 0, 1, 2, 3, and 4 respectively. The final accuracy depends on how many retinopathy features are taken into account.
Figure 4.1 plots the networks we implemented against the accuracy obtained with each. In this project we achieved an accuracy of 60% for Train net, 78% for AlexNet, 96% for EfficientNet, and 87% for ResNet.
In Figure 4.2 we compare recent publications with our study and the accuracy achieved above. Most prior publications did not implement the CLAHE technique in their pre-processing modules, which allows our approach to attain superior accuracy.
REFERENCES
[1] Zhang, L., Li, Q., You, J. and Zhang, D., 2009. A modified matched filter with double-sided thresholding for screening proliferative diabetic retinopathy.
IEEE Transactions on Information Technology in Biomedicine, 13(4), pp.528-534.
[2] Okuwobi, I.P., Fan, W., Yu, C., Yuan, S., Liu, Q., Zhang, Y., Loza, B. and Chen, Q., 2018. Automated segmentation of hyperreflective foci in spectral domain optical coherence tomography with diabetic retinopathy. Journal of Medical Imaging, 5(1), p.014002.
[3] Seoud, L., Hurtut, T., Chelbi, J., Cheriet, F. and Langlois, J.P., 2015. Red lesion detection using dynamic shape features for diabetic retinopathy screening.
IEEE transactions on medical imaging, 35(4), pp.1116-1126.
[4] Wu, J.H., Gao, Y., Ren, A.J., Zhao, S.H., Zhong, M., Peng, Y.J., Shen, W., Jing, M. and Liu, L., 2012. Altered microRNA expression profiles in retinas with
diabetic retinopathy. Ophthalmic research, 47(4), pp.195-201.
[5] Ram, K., Joshi, G.D. and Sivaswamy, J., 2018. A successive clutter-rejection-based approach for early detection of diabetic retinopathy. IEEE Transactions
on Biomedical Engineering, 58(3), pp.664-673.
[6] Narasimha-Iyer, H., Can, A., Roysam, B., Stewart, V., Tanenbaum, H.L., Majerovics, A. and Singh, H., 2016. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Transactions on Biomedical Engineering, 53(6), pp.1084-1098.
[7] Mansour, R.F., 2020. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomedical Engineering Letters, 8(1), pp.41-57.
[8] Khansari, M.M., Zhang, J., Qiao, Y., Gahm, J.K., Sarabi, M.S., Kashani, A.H. and Shi, Y., 2019. Automated deformation-based analysis of 3D optical coherence tomography in diabetic retinopathy. IEEE Transactions on Medical Imaging, 39(1), pp.236-245.
[9] Zhou, Y., Wang, B., Huang, L., Cui, S. and Shao, L., 2020. A benchmark for studying diabetic retinopathy: segmentation, grading, and transferability. IEEE
Transactions on Medical Imaging, 40(3), pp.818-828.
[10] Wang, J., Bai, Y. and Xia, B., 2020. Simultaneous diagnosis of severity and features of diabetic retinopathy in fundus photography using deep learning. IEEE
Journal of Biomedical and Health Informatics, 24(12), pp.3397-3407.
[11] Zhao, Y., Zheng, Y., Liu, Y., Yang, J., Zhao, Y., Chen, D. and Wang, Y., 2016. Intensity and compactness enabled saliency estimation for leakage detection in diabetic and malarial retinopathy. IEEE Transactions on Medical Imaging, 36(1), pp.51-63.
[12] Chen, W., Yang, B., Li, J. and Wang, J., 2020. An approach to detecting diabetic retinopathy based on integrated shallow convolutional neural networks. IEEE
Access, 8, pp.178552-178562.
[13] Doshi, D., Shenoy, A., Sidhpura, D. and Gharpure, P., 2016, December. Diabetic retinopathy detection using deep convolutional neural networks. In 2016 International Conference on Computing, Analytics and Security Trends (CAST) (pp. 261-266). IEEE.
[14] Pratt, H., Coenen, F., Broadbent, D.M., Harding, S.P. and Zheng, Y., 2016. Convolutional neural networks for diabetic retinopathy. Procedia Computer Science, 90, pp.200-205.
[15] Hemanth, D.J., Deperlioglu, O. and Kose, U., 2020. An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural
networks. Neural Computing and Applications, 32(3), pp.707-721.
[16] Chen, Y.W., Wu, T.Y., Wong, W.H. and Lee, C.Y., 2018, April. Diabetic retinopathy detection based on deep convolutional neural networks. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1030-1034). IEEE.
[17] Ishtiaq, U., Abdul Kareem, S., Abdullah, E.R.M.F., Mujtaba, G., Jahangir, R. and Ghafoor, H.Y., 2020. Diabetic retinopathy detection through artificial
intelligent techniques: a review and open issues. Multimedia Tools and Applications, 79(21), pp.15209-15252.
[18] Burewar, S., Gonde, A.B. and Vipparthi, S.K., 2018, December. Diabetic retinopathy detection by retinal segmentation with region merging using CNN. In
2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS) (pp. 136-142). IEEE.
[19] Chakrabarty, N., 2018, November. A deep learning method for the detection of diabetic retinopathy. In 2018 5th IEEE Uttar Pradesh Section International
Conference on Electrical, Electronics and Computer Engineering (UPCON) (pp. 1-5). IEEE.
[20] Samanta, A., Saha, A., Satapathy, S.C., Fernandes, S.L. and Zhang, Y.D., 2020. Automated detection of diabetic retinopathy using convolutional neural
networks on a small dataset. Pattern Recognition Letters, 135, pp.293-298.
[21] Pour, A.M., Seyedarabi, H., Jahromi, S.H.A. and Javadzadeh, A., 2020. Automatic detection and monitoring of diabetic retinopathy using efficient
convolutional neural networks and contrast limited adaptive histogram equalization. IEEE Access, 8, pp.136668-136673.