
INTERNATIONAL JOURNAL OF INTEGRATED ENGINEERING VOL. 13 NO. 5 (2021) 18-27

© Universiti Tun Hussein Onn Malaysia Publisher's Office

The International Journal of Integrated Engineering (IJIE)
Journal homepage: http://penerbit.uthm.edu.my/ojs/index.php/ijie
ISSN: 2229-838X   e-ISSN: 2600-7916

A Mobile Solution for Lateral Segment Photographed Images Based Deep Keratoconus Screening Method
W Mimi Diyana W Zaki1*, Marizuana Mat Daud2, Assyareefah Hudaibah
Saad1, Aini Hussain1, Haliza Abdul Mutalib3
1 Centre for Integrated Engineering Systems and Advanced Technologies, EESE Dept., Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Selangor, 43600, MALAYSIA
2 Advanced Informatics Lab, MIMOS Berhad, Jalan Inovasi 3, Taman Teknologi Malaysia, Kuala Lumpur, 57000, MALAYSIA
3 Optometry and Visual Science Programme, Faculty of Health Science, Universiti Kebangsaan Malaysia, Kuala Lumpur, 50300, MALAYSIA

*Corresponding Author

DOI: https://doi.org/10.30880/ijie.2021.13.05.003
Received 15 April 2021; Accepted 30 April 2021; Available online 31 July 2021

Abstract: Keratoconus (KC) is a condition in which the eye cornea bulges. It is a common non-inflammatory
ocular disorder that mostly affects the younger populace below the age of 30. The cornea bulges because of
conical displacement either outwards or downwards, and such a condition can greatly reduce one's visual ability.
Therefore, in this paper, we propose a mobile solution for screening the KC disorder using a state-of-the-art deep
transfer learning method. We use the pre-trained VGGNet-16 model and a conventional convolutional
neural network to detect KC automatically. The experimental work uses a total of 4000 side-view lateral segment
photographed images (LSPIs), comprising 2000 KC and 2000 non-KC (healthy) images, involving 125 subjects. The
LSPIs were extracted from video data captured using a smartphone. Fine-tuning of three hyperparameters,
namely the learning rate (LR), batch size (BS) and epoch number (EN), was carried out during the training phase
to generate the best model, which the VGGNet-16 model yielded. For the KC detection task, our proposed
model achieves an accuracy of 95.75%, a sensitivity of 92.25% and a specificity of 99.25% using an LR, BS and
EN of 0.0001, 16, and 70, respectively. These results confirm the high potential of our proposed solution for an
automated KC screening procedure.
Keywords: Keratoconus, convolutional neural network, VGGNet-16, lateral segment photographed images

1. Introduction
Keratoconus (KC) is a type of non-inflammatory disorder leading to a thinning of the corneal layer and ectatic
protrusion [1]. It causes the cornea to bulge, transforming from a symmetrical dome shape to a conical shape,
as illustrated in Fig. 1(b). KC patients experience conical displacement heading outwards or downwards, a
condition that causes severe visual disturbance. KC disease is challenging to detect in its early stages; most
patients only learn their real state when the disease is already severe. As a consequence, the quality of life of KC
patients, who are mostly 30 years old and below, is affected by intense vision impairment. Thus, KC is not a
trivial disease.
Based on a WHO statistical study published on October 8, 2019, at least 2.2 billion people have vision impairment or
blindness, including 1 billion people with moderate or severe distance vision impairment or blindness [2].
Keratoconus (KC) patients experience distorted and severely impaired vision and may lose their vision if left untreated [3]. Because
KC is challenging to detect in the early stages of its presence, most patients only know their actual condition when they
are at a severe stage. The KC prevalence studies conducted by Bariah et al. in 2012 and 2015 reported an increase in KC
disease within three years, with prevalence rates of 1.2% to 30.2% [4]. KC aetiology is multifactorial and still not clearly
understood. Among the aetiologies that positively correlate with KC disease are excessive eye rubbing, atopy, myopia,
eczema, Down syndrome, genetics and geographical factors [3].
An optometrist is the primary examiner of an individual's eye condition. Nevertheless, if any abnormality of the eye
is identified and requires further treatment, an ophthalmologist will perform the eye examination with imaging equipment.
According to the Health Indicator report by the Ministry of Health Malaysia in 2017, the ratio of optometrists to the
Malaysian population is 1:17,580. This figure clearly shows that Malaysia is experiencing a shortage of optometrists.
Moreover, most of the settlements in Sabah and Sarawak are relatively far from the major cities and far inland. This
situation constrains the population's access to proper eye care from ophthalmologists with the latest imaging equipment.
Smartphones can be found everywhere and are used in almost every field. Various industries have widely adopted
smartphone technology, and the healthcare industry does not want to be left behind. Manpreet et al. [5] proposed a
low-cost cataract detection system using a smartphone-based, integrated innovative system with an attached
microscopic lens. The system allows patients in remote areas to conduct regular eye examinations and monitor the
disease diagnosis progress. Progressive technology enables smartphones to work alone without external attachment
devices, and such systems can work with machine learning and deep learning.
KC disease can be detected using modern high-tech imaging devices, such as corneal topography and in vivo
confocal microscopy [6, 7], at hospitals or eye clinics. These machines are immobile, expensive and require operation by
experts. These limitations result in longer waiting lists and more difficult access for communities, whether living in
urban or rural areas. Therefore, this work proposes an automated KC detection method using side-view images of the
eye, known as lateral segment photographed images (LSPIs), taken with smartphone cameras. The proposed
screening approach employs deep learning algorithms to make eye examinations faster, easier and painless with
mobile devices. It also does not require skilled human resources to operate.

2. Related Work
To the best of our knowledge, few researchers have developed automatic detection techniques for eye diseases such
as cataracts, glaucoma, and reddened eyes. They used digital image processing (DIP) approaches on anterior
segment photographed images of the eyes [8-10]. Besides, various semi-automatic eye detection studies use data or
images extracted from specific equipment [11-14]. They developed computer vision algorithms using DIP
approaches integrated with machine learning to classify eye diseases. For instance, Fuadah et al. [15] proposed an
automated cataract detection using the k-nearest neighbour (k-NN) method to classify normal or cataract
eyes captured using smartphones. The pre-processing steps involve image cropping and grayscale conversion before
applying the Gray Level Co-occurrence Matrix method to extract texture features such as inconsistency, contrast and
uniformity. Their proposed classification system achieved 97.5% accuracy, and the results were validated by
ophthalmologists using slit lamps on 80 healthy and 80 cataract images [15].

On the other hand, Ik et al. [16] proposed a red reflex method using a smartphone camera flash to detect possible
cataract symptoms. The captured pupil images are then classified using an artificial neural network (ANN). The
proposed method allows for self-examination by an individual, making it suitable as a mobile screening solution in
locations with limited medical experts and facilities. Long et al. [17] used a deep learning algorithm to build CC-Cruiser,
a cloud-based platform for the multi-hospital collaborative management of congenital cataracts. Its artificial
intelligence (AI) agents use a convolutional neural network (CNN) to identify potential patients from large
populations on the cloud-based platform.

There are limited studies on detecting the presence of KC using the DIP approach. Based on the study
summary presented in Table 1, KC detection using the DIP approach depends on information captured using modern
imaging tools such as Pentacam and Placido devices. Nabhan [18] in 2018 patented a method that resembles a
Placido disc ring taken using a smartphone. There is also a study by Behnam et al. [19] detecting KC in
ASPIs captured using smartphones. Nevertheless, Behnam et al. [19] performed KC detection and tested it on only a
small dataset. These findings indicate that comprehensive studies on KC disease are still relatively limited.
This scenario has opened up space for KC detection studies of anterior and lateral eye images using DIP and
machine learning.


Table 1 - Previous work on keratoconus detection

Toutounchian et al. [13]
o Features extracted from topography Pentacam images
o Used ANN, SVM and decision tree classification
o Tested using 47 normal and 35 KC images; achieved 91% accuracy

Arbelaez et al. [20]
o Features extracted from the Placido of a Scheimpflug camera
o Detected the Placido ring edges and applied SVM classification
o Tested using 1059 normal, 677 KC and 226 subclinical KC images
o Achieved 96.9% accuracy, 92.8% sensitivity and 98.2% specificity

Nabhan [18]
o Patented an approach similar to the Placido disc ring concept
o Used smartphone images and ring images (non-ASPI or LSPI)

Behnam et al. [19]
o Used to detect and grade KC
o Tested using only 14 normal and 6 KC LSPIs captured with smartphones
o Accuracies achieved for the very severe (93%), severe (86%) and moderate (67%) levels

In recent years, most domains have been taking steps towards AI, including deep neural networks, to make a better
world. In medicine and healthcare, deep learning has been employed in medical imaging analysis, including malaria
identification [21], malignant melanoma detection in photographic images [22], and chest X-rays [23]. In ophthalmology,
deep learning has started to grow, showing bright potential as an important diagnostic tool, and has started to gain
medical practitioners' attention. Some examples of ophthalmic disease detection that have employed deep learning are
diabetic retinopathy [24, 25], age-related macular degeneration (AMD) [26] and glaucoma [27]. Triwijoyo et al. [28]
implemented a convolutional neural network (CNN) as a classifier for retinal images with a highest accuracy of 80.93%,
using the STARE dataset, which consists of 15 classes of retinal eye diseases.
With the breakthrough and prominence of deep learning, KC disease detection has also employed the deep learning
approach. A recent study by Kamiya et al. [29] evaluated the diagnostic accuracy for KC using ResNet-18 on colour-
coded maps. The results indicate that the deep learning approach can discriminate KC from normal corneas and classify
the disease's severity. Kuo et al. [30] proposed and compared deep learning algorithms to detect KC based on corneal
topography images and validated them with visualisation methods. They compared three well-known deep learning
models, namely VGG16, InceptionV3 and ResNet152, on 354 images. Lavric et al. [31] proposed a CNN algorithm
to classify KC based on topographic maps. The algorithm gives 99.33% accuracy tested on 1500 healthy and 1500 KC
cases. However, to the best of our knowledge, no study has used photographed images for KC detection or
classification with deep learning.

3. Methodology
This section is divided into four parts: the LSPI dataset preparation, KC detection using transfer learning (TL), KC
detection using a deep learning (DL) approach, and model efficiency evaluation. In both the TL and DL approaches, three
hyper-parameters were tuned and used to generate models. Fine-tuning of the three hyper-parameters involves batch
sizes of 16, 32 and 64, learning rates of 0.001, 0.0001 and 0.00001, and epoch numbers of 30, 50, 70 and 100, as
sketched below. The main motivation of this work is to develop fully automated KC detection, and it is an extension of
our previous work, as reported in [32]. The work in [32] presented a semi-automated KC detection approach based on
DIP and a random forest classifier (number of trees = 50) using both anterior and lateral segment photographed eye images.
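To illustrate, the tuning procedure can be outlined as a coordinate-wise sweep, matching the order reported in Section 4 (batch size, then learning rate, then epoch number). The sketch below is schematic: train() is a hypothetical placeholder that trains a model with the given settings and returns its validation accuracy.

```python
# Schematic coordinate-wise hyper-parameter sweep mirroring Tables 2-4.
# train() is a hypothetical placeholder returning validation accuracy.
def sweep(values, train_fn):
    scores = {v: train_fn(v) for v in values}
    return max(scores, key=scores.get)  # value giving the best accuracy

best_bs = sweep([16, 32, 64],
                lambda bs: train(bs=bs, lr=0.0001, epochs=100))
best_lr = sweep([0.001, 0.0001, 0.00001],
                lambda lr: train(bs=best_bs, lr=lr, epochs=100))
best_en = sweep([30, 50, 70, 100],
                lambda en: train(bs=best_bs, lr=best_lr, epochs=en))
# For VGGNet-16, this sweep settles on bs=16, lr=0.0001, epochs=70.
```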

3.1 LSPI Dataset Preparation


This experiment uses 2000 KC and 2000 normal LSPIs extracted from videos captured from the side view of 125
patients using an iPhone SE smartphone. These LSPIs are raw images with noise and artefacts, such as eyelashes,
eyelids, skin colour, undefined corneal edges and reflections. The dataset was collected at the Ophthalmology Department
of Hospital Kuala Lumpur (HKL), Malaysia, with an appointed optometrist validating the KC and healthy eyes using an
OCULUS Easygraph topographer. The dataset can be found in [33]. The optometrist validated each subject's eye
image based on the subject's topographic map. As shown in Fig. 1, corneal curvature data are analysed based on a
colour-scale topographic map, where orange to violet (warmer colours) represents a steeper cornea and blue to yellow
(cooler colours) represents a flatter cornea [34]. Fig. 1a) and Fig. 1b) illustrate the pattern of corneal topographic maps
of normal corneas, whereas Fig. 1c) and Fig. 1d) are the topographic maps of keratoconic corneas. Interestingly, in Fig.
1d) the topographer cannot capture the corneal curvature since the subject is at an advanced stage of KC.


The dataset is split into three parts, namely 60% for training, 20% for testing and 20% for validation. During the
training phase of KC detection using the TL and DL approaches, all LSPIs go through an augmentation process
implemented on the TensorFlow and Keras platforms. The data augmentation process artificially expands the training
dataset's size by a factor equal to the number of transformations performed. It creates modified versions of the images
using rescale, rotation, shift and flip operations. This process can improve performance on imbalanced classes [35]
and act as a regulariser to avoid overfitting in neural networks [36], as sketched below.
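A minimal sketch of this augmentation pipeline, assuming the Keras ImageDataGenerator API, is as follows; the transformation ranges and directory layout are illustrative assumptions, as the exact values are not reported here.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation via rescale, rotation, shift and flip operations.
# The parameter ranges below are illustrative, not the authors' values.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,     # normalise pixel intensities to [0, 1]
    rotation_range=15,       # random rotation in degrees
    width_shift_range=0.1,   # random horizontal shift
    height_shift_range=0.1,  # random vertical shift
    horizontal_flip=True,    # mirror left/right LSPIs
)

# Hypothetical directory layout with train/KC and train/normal subfolders.
train_gen = train_datagen.flow_from_directory(
    "dataset/train", target_size=(224, 224),
    batch_size=16, class_mode="binary",
)
```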


Fig. 1 - LSPI of normal and keratoconus cases and their corresponding topography

3.2 KC Detection using Transfer Learning


VGGNet-16 is a lightweight and compact ImageNet model used as the pre-trained network in this work.
Implementing this transfer learning approach enables fully automated detection of KC disease. The convolutional
layers in the VGGNet-16 model can capture the smallest possible features, as the layers use a small receptive field of
3x3. The VGGNet-16 model already has its rectified linear unit (ReLU) activation function and optimiser; thus, it is not
required to tune them during training and model generation. Firstly, all LSPIs are loaded into the model and
pre-processed. During the pre-processing step, the images are augmented to artificially increase the size of the training
dataset. Then, the pre-trained model is called to load the pre-trained weights. The first five bottom layers are frozen,
while the other eleven layers are unfrozen to create a new KC detection model. Fig. 2 portrays an overview of the
proposed VGGNet-16 model for KC detection. During the training process, the model converges on the KC dataset,
and the hyperparameters are fine-tuned until the most optimised model is generated.
As illustrated in Fig. 2, a dropout layer with a probability of 0.5 is added before the sigmoid activation function
in the fully connected layer (FCL). The dropout layer is added to avoid overfitting. This layer is tuned to its maximum
regularisation value when the regularisation parameter p(1 − p) in Equation (1) is maximised at p = 0.5. In TensorFlow
and Keras, it is proven that a dropout rate of 0.5 is ideal for large networks [37]. Stochastic gradient descent (SGD) is
chosen as the optimiser to generate the new model. Even though SGD performs significantly slower than other
optimisers, it can converge better due to its capability of escaping locally optimal traps in the cost surface.

$E_R = \frac{1}{2}\left(t - \sum_{i=1}^{n} p_i w_i I_i\right)^2 + \sum_{i=1}^{n} p_i (1 - p_i) w_i^2 I_i^2$    (1)
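A minimal Keras sketch of this transfer-learning setup is given below, assuming a 224x224 input and the standard keras.applications VGG16 ImageNet weights; the layer indexing approximates the "freeze the first five layers" description, and the Flatten-based head is our assumption.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# Load VGG16 with pre-trained ImageNet weights, without the original head.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=(224, 224, 3))
for layer in base.layers[:5]:
    layer.trainable = False   # freeze the bottom layers (generic features)
for layer in base.layers[5:]:
    layer.trainable = True    # fine-tune the remaining layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                    # Equation (1) regulariser
    tf.keras.layers.Dense(1, activation="sigmoid"),  # KC vs. normal FCL
])

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0001),
              loss="binary_crossentropy", metrics=["accuracy"])
```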

3.3 KC Detection using Conventional CNN


For comparison purposes, a shallow conventional CNN model was developed to train a deep network with a
small dataset. The proposed model is trained with five convolutional layers and one FC layer, with batch normalisation
applied at every layer, as illustrated in Fig. 3. A dropout layer of 0.2 is added at the last layer to avoid overfitting.
Batch normalisation normalises the input of one layer to the next and prevents the internal covariate shift problem.
ReLU is used as the activation function in the middle layers to overcome the vanishing gradient problem, allowing the
proposed model to learn faster and perform better [38]. The proposed CNN model uses the adaptive moment
estimation (Adam) optimiser to improve model efficiency with less memory usage; it is also well suited to static data.
The most optimised CNN model is obtained after the three hyperparameters, namely the batch size, learning rate and
epoch number, are fine-tuned during the training phase, as sketched below.
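A sketch of such a shallow CNN in Keras is given below; the filter counts, kernel sizes and input size are illustrative assumptions, since the exact layer widths are not specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(filters):
    """One convolutional layer with batch normalisation and ReLU."""
    return [
        layers.Conv2D(filters, 3, padding="same"),
        layers.BatchNormalization(),  # normalise the input to the next layer
        layers.Activation("relu"),    # mitigate the vanishing gradient
        layers.MaxPooling2D(),
    ]

blocks = []
for f in [16, 32, 64, 128, 256]:      # five convolutional layers
    blocks += conv_block(f)

model = tf.keras.Sequential(
    [layers.Input(shape=(224, 224, 3))] + blocks + [
        layers.Flatten(),
        layers.Dense(64, activation="relu"),  # the single FC layer
        layers.Dropout(0.2),                  # dropout at the last layer
        layers.Dense(1, activation="sigmoid"),
    ])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss="binary_crossentropy", metrics=["accuracy"])
```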


Fig. 2 - Overview of VGGNet-16 using pre-trained weights to train a new model

Fig. 3 - Overview of the conventional CNN architecture used to generate the models

3.4 Model Measurement Metrics


The performance of the KC detection models generated using the TL and conventional CNN approaches is evaluated
using sensitivity, specificity, and accuracy, represented by Equations (2), (3) and (4).
Accuracy shows the generated model's overall ability to detect KC and healthy cases over the entire dataset.
Sensitivity illustrates a model's ability to detect KC cases, while specificity represents a model's ability to detect
normal eyes.

$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$    (2)

$\mathrm{Specificity} = \frac{TN}{TN + FP}$    (3)

$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$    (4)

A true-positive (TP) is a KC eye detected as a KC case, while a true-negative (TN) is a normal eye detected as
normal. A false-positive (FP) occurs when a normal eye is detected as a KC eye, and a false-negative (FN) is the reverse.
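These metrics follow directly from the confusion-matrix counts; a short NumPy sketch, assuming the sigmoid output is thresholded at 0.5, is as follows.

```python
import numpy as np

def metrics(y_true, y_prob, threshold=0.5):
    """Compute Equations (2)-(4) from labels and sigmoid outputs."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # KC detected as KC
    tn = np.sum((y_pred == 0) & (y_true == 0))  # normal detected as normal
    fp = np.sum((y_pred == 1) & (y_true == 0))  # normal detected as KC
    fn = np.sum((y_pred == 0) & (y_true == 1))  # KC detected as normal
    sensitivity = tp / (tp + fn)                # Equation (2)
    specificity = tn / (tn + fp)                # Equation (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Equation (4)
    return accuracy, sensitivity, specificity
```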

4. Results and Discussion


The LSPIs of the eye are input to the VGGNet-16 and CNN models, as explained in the methodology section.
Three hyper-parameters, namely batch size, learning rate, and epoch number, were tuned to achieve the best model for
detecting the KC and healthy cases. The outcomes are tabulated in Table 2 for the VGGNet-16 and CNN models,
showing the testing data performance for batch sizes of 16, 32 and 64. This experiment was conducted using
a fixed epoch number of 100 and a learning rate of 0.0001. From the experiments, a batch size of 16 achieves the best
performance on all measurement metrics for both the VGGNet-16 and CNN models.
Using the batch size of 16 and the same fixed epoch number of 100, the learning rate of the VGGNet-16 and CNN
models is then fine-tuned, as shown in Table 3. Lastly, the models are fine-tuned by varying the epoch hyperparameter,
as shown in Table 4. In the same experimental work, the CNN model performs best with an epoch number of 70, while
the batch size and learning rate are fixed at 16 and 0.0001, respectively. The VGGNet-16 model achieves the highest
average accuracy of 95.75%, sensitivity of 92.25% and specificity of 99.25% with an epoch number of 70.

Table 2 - Measurement metrics of generated models with a fixed learning rate of 0.0001 and epoch number of 100

Batch size      VGGNet-16                        CNN
                Acc (%)   Sen (%)   Spe (%)      Acc (%)   Sen (%)   Spe (%)
16              94.13     89.75     98.50        90.12     93.00     92.25
32              90.62     81.25     100.00       85.37     87.00     83.75
64              90.13     84.50     95.75        81.38     89.50     73.25

Table 3 - Measurement metrics of generated models with a fixed batch size of 16 and epoch number of 100

Learning rate   VGGNet-16                        CNN
                Acc (%)   Sen (%)   Spe (%)      Acc (%)   Sen (%)   Spe (%)
0.001           50.00     46.45     54.25        75.00     59.50     90.50
0.0001          94.13     89.75     98.50        90.12     93.00     92.25
0.00001         84.25     78.25     90.25        85.25     87.75     82.75

Table 4 - Measurement metrics of generated models with a fixed batch size of 16 and learning rate of 0.0001

Epochs          VGGNet-16                        CNN
                Acc (%)   Sen (%)   Spe (%)      Acc (%)   Sen (%)   Spe (%)
100             94.13     89.75     98.50        90.12     93.00     92.25
70              95.75     92.25     99.25        92.12     93.75     90.00
50              93.62     90.00     97.25        81.88     83.75     80.00
30              89.25     87.00     91.50        81.50     85.75     77.25


In this experiment, during the training phase of the pre-trained VGGNet-16 model, we unlocked the top eleven
layers and reused the weights of the bottom five layers, with an additional FCL on top. The bottom layers contain generic
features that generalise to almost all types of images. As the last dense layers are fine-tuned, the model learns
specific features to classify KC and normal eyes. The results show that the proposed VGGNet-16
produces the best KC classification output with a batch size of 16, a learning rate of 0.0001 and an epoch number of 70.
A lower learning rate causes a slower gradient descent and requires more learning time; nevertheless, the
model performs better. On the other hand, a higher learning rate overshoots the gradient descent, making it
fail to converge, as shown in Table 3. As the batch size increases, the accuracy decreases due to poor
generalisation. Empirically, a smaller batch size converges faster and allows the model to start learning before
having seen all the data. In the end, the choice of batch size depends on the size of the entire dataset.
The VGGNet-16 model consists of 16 hidden layers. As such, the network can be considered a lightweight and
compact architecture. Hence, it is suitable for this study, which has a limited dataset. Also, due to the low-complexity
nature of VGGNet-16, we found that using pre-trained weights allows faster convergence since the features are
significantly reused, with the resulting speedup occurring in the lower layers during the training phase. The transfer
learning method has to adapt these specialised features to work with the lateral eye dataset. Furthermore, this work
shows that, compared to the random-initialisation approach of the conventional CNN model, pre-trained transfer
learning gives better performance, with accuracy improvements of 5% to 7%.
As reported in our previous work, using a top-16 combination of features with a Random Forest classifier (number of
trees = 50), the KC detection method gave its best performance of 96.05% average accuracy, 98.41% sensitivity, and
93.65% specificity. Nevertheless, that detection method required human intervention, as the ground truth of the anterior
corneal segmentation had to be prepared by an optometrist [32]. In comparison, our proposed pre-trained
VGGNet-16 model can automatically classify the KC and normal cases in LSPIs. As shown in Table 5, the lightweight
and compact VGGNet-16 model's performance is comparable with [32].
Fig. 4 illustrates a few examples of VGGNet-16 classification outputs for left and right LSPIs captured from
various side-view angles. The confidence level (CL) indicates the probability that an image is classified as a KC or normal
case. Nevertheless, it is found that CL values below 90% are not reliable, in that such images are falsely detected. For
example, the images with red dotted boxes are the FP cases with CL values of 77.3%, 83.8% and 89.2%. These false
detection cases lower the overall performance of the proposed KC detection. These anomalies will be further
investigated in our future work.
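For illustration, such a confidence level can be derived from the sigmoid output, with low-confidence predictions flagged for review; the helper below is a hypothetical sketch (the 90% threshold follows the observation above), not the authors' implementation.

```python
def classify_with_confidence(model, image_batch, cl_threshold=0.90):
    """Hypothetical helper: label each LSPI and flag unreliable predictions."""
    p_kc = model.predict(image_batch).ravel()          # sigmoid P(KC) per image
    labels = ["KC" if p >= 0.5 else "normal" for p in p_kc]
    confidence = [max(p, 1.0 - p) for p in p_kc]       # CL of the chosen class
    review = [cl < cl_threshold for cl in confidence]  # True = not reliable
    return list(zip(labels, confidence, review))
```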

Table 5 - Performance comparison between the proposed VGGNet-16 model and our previous work [32]

Method                                        Acc (%)   Sen (%)   Spe (%)
DIP and machine learning approaches [32]      96.05     98.41     93.65
VGGNet-16                                     95.75     92.25     99.25

Fig. 4 - Sample of VGGNet-16 classification results


5. Conclusion
The performance of a conventional CNN and a transfer learning model, namely VGGNet-16, has been
investigated in this work. The experimental work found that the VGGNet-16 model with a learning rate of 0.0001, a
batch size of 16, and an epoch number of 70 gives the best performance for detecting KC automatically. The proposed
deep learning approach does not require skilled human resources to operate it; thus, it can instantly produce
detection results. Patients could have an immediate diagnosis and take fast action if they need additional follow-up
treatment. The proposed model has bright potential to be integrated into a decision support system for detecting anterior
ocular diseases, namely KC, as early detection is crucial. It may also become a cost-effective tool for low-income
or high-risk groups who may not have easy access to eye clinics or ophthalmologists. Besides, this detection
approach can promote telehealth and screening technology, which is crucial and relevant during this pandemic.

Acknowledgement
This project is supported by the Universiti Kebangsaan Malaysia internal grant DIP-2018-020. Special thanks to the
Ophthalmology Department of Hospital Kuala Lumpur for providing the facilities and experts.

References
[1] Rabinowitz, Y. S. (1998). Keratoconus. Survey of Ophthalmology, 42(4), 297-319. https://doi.org/10.1016/S0039-6257(97)00119-7.
[2] WHO. (2019). Blindness and Vision Impairment. Retrieved from https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment.
[3] Sharif, R., Bak-Nielsen, S., Hjortdal, J., & Karamichos, D. (2018). Pathogenesis of Keratoconus: The intriguing therapeutic potential of Prolactin-inducible protein. Progress in Retinal and Eye Research, 67, 150-167. https://doi.org/10.1016/j.preteyeres.2018.05.002.
[4] Mohd-Ali, B., Muthiah, P., Retnasabapathy, S., & Mohidin, N. (2015). Clinical characteristics of keratoconus patients in Malaysia: A 5-year retrospective study from a cornea specialist centre in Selangor (2008-2013). Contact Lens and Anterior Eye, 38, e25. https://doi.org/10.1016/j.clae.2014.11.031.
[5] Kaur, M., Kaur, J., & Kaur, R. (2016). Low cost cataract detection system using smart phone. Proceedings of the 2015 International Conference on Green Computing and Internet of Things, ICGCIoT 2015, 1607-1609. https://doi.org/10.1109/ICGCIoT.2015.7380724.
[6] Cavas-Martínez, F., De la Cruz Sánchez, E., Nieto Martínez, J., Fernández Cañavate, F. J., & Fernández-Pacheco, D. G. (2016). Corneal topography in keratoconus: state of the art. Eye and Vision, 3(1), 1-12. https://doi.org/10.1186/s40662-016-0036-8.
[7] Song, P., Wang, S., Zhang, P., Sui, W., Zhang, Y., Liu, T., & Gao, H. (2016). The Superficial Stromal Scar Formation Mechanism in Keratoconus: A Study Using Laser Scanning in Vivo Confocal Microscopy. BioMed Research International, 2016. https://doi.org/10.1155/2016/7092938.
[8] Ma, L., Tan, T., Wang, Y., & Zhang, D. (2004). Efficient iris recognition by characterizing key local variations. IEEE Transactions on Image Processing, 13(6), 739-750. https://doi.org/10.1109/TIP.2004.827237.
[9] Supriyanti, R., Habe, H., & Kidode, M. (2012). Utilization of Portable Digital Camera for Detecting Cataract.
[10] Sivapriya, C., Tejaswini, V., Vignesh, R., & Vijay, G. (2015). Eye Disease Detection using Daubechies Wavelet Transform. International Journal for Innovative Research in Science & Technology, 1(12), 372-376.
[11] Gasparini, F., & Schettini, R. (2009). A review of redeye detection and removal in digital images through patents. Recent Patents on Electrical Engineering, 2(1), 45-53. https://doi.org/10.2174/1874476110902010045.
[12] Saari, J. M. (2007). Digital Photography in the Diagnosis and Follow-up of Ocular Diseases. Ophthalmology. Retrieved from https://oa.doria.fi/handle/10024/28144.
[13] Toutounchian, F., Shanbehzadeh, J., & Khanlari, M. (2012). Detection of keratoconus and suspect keratoconus by machine vision. Lecture Notes in Engineering and Computer Science, 2195, 89-91.
[14] Umesh, L., Mrunalini, M. M., & Shinde, S. (2016). Review of Image Processing and Machine Learning Techniques for Eye Disease Detection and Classification. International Research Journal of Engineering and Technology (IRJET), 3(3), 547-551.


[15] Fuadah, Y. N., Setiawan, A. W., Mengko, T. L. R., & Budiman. (2016). Mobile Cataract Detection Using Optimal Combination of Statistical Texture Analysis. Proceedings - 2015 4th International Conference on Instrumentation, Communications, Information Technology and Biomedical Engineering, ICICI-BME 2015, 232-236. https://doi.org/10.1109/ICICI-BME.2015.7401368.
[16] Ik, Z. Q., Lau, S. L., & Chan, J. B. (2016). Mobile cataract screening app using a smartphone. 2015 IEEE Conference on e-Learning, e-Management and e-Services, IC3e 2015, 110-115. https://doi.org/10.1109/IC3e.2015.7403496.
[17] Long, E., Lin, H., Liu, Z., Wu, X., Wang, L., Jiang, J., An, Y., Lin, Z., Li, X., Chen, J., Li, J., Cao, Q., Wang, D., Liu, X., Chen, W., & Liu, Y. (2017). An Artificial Intelligence Platform for the Multihospital Collaborative Management of Congenital Cataracts. Nature Biomedical Engineering, 1(2), 1-8. https://doi.org/10.1038/s41551-016-0024.
[18] Nabhan, T. I. (2018). System and Method for Ophtalmological Imaging Adapted to a Mobile Processing Device.
[19] Askarian, B., Tabei, F., Askarian, A., & Chong, J. W. (2018). An affordable and easy-to-use diagnostic method for keratoconus detection using a smartphone. In N. Petrick & K. Mori (Eds.), Medical Imaging 2018: Computer-Aided Diagnosis (Vol. 10575, pp. 238-243). SPIE. https://doi.org/10.1117/12.2293765.
[20] Arbelaez, M. C., Versaci, F., Vestri, G., Barboni, P., & Savini, G. (2012). Use of a support vector machine for keratoconus and subclinical keratoconus detection by topographic and tomographic data. Ophthalmology, 119(11), 2231-2238. https://doi.org/10.1016/j.ophtha.2012.06.005.
[21] Islam, C. S., & Mollah, M. S. H. (2019). A novel idea of malaria identification using Convolutional Neural Networks (CNN). 2018 IEEE EMBS Conference on Biomedical Engineering and Sciences, IECBES 2018 - Proceedings, 7-12. https://doi.org/10.1109/IECBES.2018.8626669.
[22] Banerjee, S., Singh, S. K., Chakraborty, A., Das, A., & Bag, R. (2020). Melanoma Diagnosis Using Deep Learning and Fuzzy Logic. Diagnostics, 10(8), 577.
[23] Lee, S. M., Seo, J. B., Yun, J., Cho, Y., Vogel-Claussen, J., Schiebler, M. L., Gefter, W. B., Beek, E. J. R. V., Goo, J. M., Lee, K. S., & Hatabu, H. (2019). Deep Learning Applications in Chest Radiography and Computed Tomography: Current State of the Art. Journal of Thoracic Imaging, 34(2), 75-85. https://doi.org/10.1097/RTI.0000000000000387.
[24] Ting, D. S. W., Cheung, C. Y. L., Lim, G., Tan, G. S. W., Quang, N. D., Gan, A., Hamzah, H., Garcia-Franco, R., Yeo, I. Y. S., Lee, S. Y., Wong, E. Y. M., Sabanayagam, C., Baskaran, M., Ibrahim, F., Tan, N. C., Finkelstein, E. A., Lamoureux, E. L., Wong, I. Y., Bressler, N. M., Sivaprasad, S., Varma, R., Jonas, J. B., He, M. G., Cheng, C. Y., Cheung, G. C. M., Aung, T., Hsu, W., Lee, M. L., & Wong, T. Y. (2017). Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA - Journal of the American Medical Association, 318(22), 2211-2223. https://doi.org/10.1001/jama.2017.18152.
[25] Gargeya, R., & Leng, T. (2017). Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology, 124(7), 962-969. https://doi.org/10.1016/j.ophtha.2017.02.008.
[26] Lee, C. S., Tyring, A. J., Deruyter, N. P., Wu, Y., Rokem, A., & Lee, A. Y. (2017). Deep-Learning Based, Automated Segmentation of Macular Edema in Optical Coherence Tomography. Biomedical Optics Express, 8(7), 3440. https://doi.org/10.1364/boe.8.003440.
[27] Li, Z., He, Y., Keel, S., Meng, W., Chang, R. T., & He, M. (2018). Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology, 125(8), 1199-1206. https://doi.org/10.1016/j.ophtha.2018.01.023.
[28] Triwijoyo, B. K., Sabarguna, B. S., Budiharto, W., & Abdurachman, E. (2020). Deep learning approach for classification of eye diseases based on color fundus images. In Diabetes and Fundus OCT. Elsevier Inc. https://doi.org/10.1016/b978-0-12-817440-1.00002-4.
[29] Kamiya, K., Ayatsuka, Y., Kato, Y., Fujimura, F., Takahashi, M., Shoji, N., Mori, Y., & Miyata, K. (2019). Keratoconus Detection Using Deep Learning of Colour-Coded Maps with Anterior Segment Optical Coherence Tomography: A Diagnostic Accuracy Study. BMJ Open, 9(9), 1-7. https://doi.org/10.1136/bmjopen-2019-031313.
[30] Kuo, B. I., Chang, W. Y., Liao, T. S., Liu, F. Y., Liu, H. Y., Chu, H. S., Chen, W. L., Hu, F. R., Yen, J. Y., & Wang, I. J. (2020). Keratoconus Screening Based on Deep Learning Approach of Corneal Topography. Translational Vision Science and Technology, 9(2), 1-11. https://doi.org/10.1167/tvst.9.2.53.
[31] Lavric, A., & Valentin, P. (2019). KeratoDetect: Keratoconus Detection Algorithm Using Convolutional Neural Networks. Computational Intelligence and Neuroscience, 2019. https://doi.org/10.1155/2019/8162567.
[32] Daud, M. M., Zaki, W. M. D. W., Hussain, A., & Mutalib, H. A. (2020). Keratoconus Detection Using the Fusion Features of Anterior and Lateral Segment Photographed Images. IEEE Access, 8, 142282-142294. https://doi.org/10.1109/ACCESS.2020.3012583.
[33] Wan Zaki, W. M. D. (2021). myMata. Retrieved from http://www.mymata.my/index.php/database.
[34] Matalia, H., & Swarup, R. (2013). Imaging Modalities in Keratoconus. Indian Journal of Ophthalmology, 61(8), 394-400. https://doi.org/10.4103/0301-4738.116058.
[35] Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16(1), 321-357.
[36] Simard, P. Y., Steinkraus, D., & Platt, J. C. (2003). Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis. Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 958-963. https://doi.org/10.1109/ICDAR.2003.1227801.
[37] Baldi, P., & Sadowski, P. (2013). Understanding Dropout. Advances in Neural Information Processing Systems, 26, 1-9.
[38] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 60(6), 84-90.
