
Received December 31, 2020, accepted January 18, 2021, date of publication February 2, 2021, date of current version April 9, 2021.
Digital Object Identifier 10.1109/ACCESS.2021.3056430

Discrimination of Diabetic Retinopathy From Optical Coherence Tomography Angiography Images Using Machine Learning Methods
ZHIPING LIU 1,2, CHEN WANG 3, XIAODONG CAI 3, HONG JIANG 2,4, AND JIANHUA WANG 2,3
1 Ophthalmic Center, The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou 510260, China
2 Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
3 Department of Electrical and Computer Engineering, University of Miami, FL 33146, USA
4 Department of Neurology, University of Miami Miller School of Medicine, Miami, FL 33136, USA

Corresponding author: Jianhua Wang ([email protected])


This work was supported in part by the National Institutes of Health (NIH) Center under Grant P30EY014801 and
Grant NINDS1R01NS111115-01, and in part by the Research to Prevent Blindness (RPB). The work of Zhiping Liu was supported in part
by the Guangzhou Municipal Science and Technology Project under Grant 201804010038, and in part by the Guangdong Basic and
Applied Basic Research Foundation under Grant 2020A1515010276.

ABSTRACT The goal was to discriminate between diabetic retinopathy (DR) and healthy controls (HC) by evaluating optical coherence tomography angiography (OCTA) images from 3 × 3 mm scans with the assistance of different machine learning models. The OCTA angiography datasets of the superficial vascular plexus (SVP), deep vascular plexus (DVP), and retinal vascular network (RVN) were acquired from 19 DR patients (38 eyes) and 25 HC (44 eyes). A discrete wavelet transform was applied to extract texture features from each image. Four machine learning models, including logistic regression (LR), logistic regression regularized with the elastic net penalty (LR-EN), support vector machine (SVM), and the gradient boosting tree named XGBoost, were used to classify the wavelet features between groups. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and diagnostic accuracy of the classifiers were obtained. The OCTA image dataset included 114 and 132 images from DR and HC subjects, respectively. LR-EN and LR using all three images (SVP, DVP, and RVN) provided the highest sensitivity of 0.84 and specificity of 0.80, the best diagnostic accuracy of 0.82, and AUCs of 0.83 and 0.84, respectively, which were slightly lower than the AUC of LR using one image, SVP (0.85), or two images, DVP and SVP (0.85). The LR-EN and LR classification algorithms had high sensitivity, specificity, and diagnostic accuracy in identifying DR, which may be promising in facilitating the early diagnosis of DR.

INDEX TERMS Diabetic retinopathy, machine learning, logistic regression, logistic regression regularized
with the elastic net penalty, support vector machine.

I. INTRODUCTION
Diabetic retinopathy (DR) is one of the main causes of blindness among the working-age population [1]. Since it becomes incurable in its late stages, early diagnosis of DR is important. However, it is difficult to achieve early diagnosis because of the paucity of experienced retinal doctors and the increased number of DR patients [2]. Recently, several automated diagnosis systems have been developed to assist in the early diagnosis of DR; these are objective and reliable tools used to support the clinical decision-making process without inter- and intra-expert variability.

Artificial intelligence (AI) can analyze a large amount of data simultaneously, correct automatically, and learn continuously to improve its sensitivity and specificity as a diagnostic tool and to predict disease progression in medicine [3]–[5]. An AI system is designed to mimic human brain perception for data processing and decision making. Recently, there has been increasing interest in using machine learning models, a branch of AI that can learn from data, identify patterns, and make decisions automatically, for medical applications [6].

The associate editor coordinating the review of this manuscript and approving it for publication was Alberto Cano.


In ophthalmic research, the application of AI systems has led to robust diagnostic accuracy for several ocular conditions such as DR [7], age-related macular degeneration (AMD) [8], cataract [9], glaucoma [10], and keratoconus [11].

Machine learning models using fundus images have been developed by researchers for retinopathy staging as well as for identifying multiple ocular diseases. However, it is difficult to detect microvasculature alterations in different retinal layers and in the area around the foveal avascular zone using fundus images. Additionally, a large and well-documented database of fundus images is needed to train and optimize convolutional neural networks, and it is extremely difficult to achieve strong accuracy metrics because of database variances across different imaging centers [7], [12].

A deep learning system (DLS) is a branch of machine learning that uses AI and representation learning methods to process large data and extract meaningful patterns, which contributes to its excellent performance in discovering the intricate structures of high-dimensional data. Currently, DLS for the automatic classification and segmentation of optical coherence tomography angiography (OCTA) images in ophthalmology affords excellent results. In DLS, the convolutional neural network (CNN) and the multi-scaled encoder-decoder neural network (MEDnet) have been utilized to classify referable DR on OCTA datasets with high predictive accuracy [13]–[18].

Abramoff and associates conducted a pivotal trial in which an autonomous AI-based diagnostic system detected DR, and that system has now been authorized by the Food and Drug Administration for use by health care providers to detect mild DR and diabetic macular edema [19]. Alam et al. presented the feasibility of a supervised machine learning method to detect DR with excellent diagnostic accuracy using 6 × 6 mm OCTA images from small datasets [20]. Recently, Sandhu et al. introduced a new automated system for detecting DR using OCTA; the system demonstrated a high degree of accuracy, sensitivity, and specificity in analyzing vessel density, vessel caliber, and FAZ area [21].

In the present study, we applied different machine learning models to discriminate between healthy individuals and DR by evaluating OCTA images from the 3 × 3 mm scans of the superficial vascular plexus (SVP), deep vascular plexus (DVP), and retinal vascular network (RVN). We use sensitivity, specificity, and accuracy as metrics to compare the performance of the different methods. Sensitivity is the ability of a method to correctly identify those with the disease (true positives), whereas specificity is the ability of the method to correctly identify those without the disease (true negatives). Accuracy is the proportion of true results, either true positive or true negative, identified by a method.

II. METHODS
A. STUDY DESIGN, SETTING, AND POPULATION
This study was approved by the institutional review board for human research of the University of Miami. All participants were treated according to the Declaration of Helsinki. Written informed consent was obtained.

Nineteen patients with DR were recruited from Bascom Palmer Eye Institute, University of Miami, from August 2017 to June 2019. These patients underwent a complete fundus examination and were diagnosed by a retinal specialist (JT). Twenty-five age- and sex-matched healthy control (HC) subjects were recruited.

B. OCTA, IMAGE SEGMENTATION, AND DATA ACQUISITION
The angiography dataset of the 3 × 3 mm scans was acquired using the Optovue OCTA device (Optovue, Fremont, CA, USA), which has a 70,000 Hz A-scan rate, an 840 nm central wavelength, a 5 µm resolution, and a 22 µm beam width. Each retinal angiography scan consisted of a raster scan of 304 (A-scans) × 304 (B-scans). The OCTA image quality was quantified with the scan quality score metric provided in the AngioVue software interface, and any OCTA image with a scan quality score of less than 5 was excluded. The SVP was segmented from the inner limiting membrane (ILM) to the inner plexiform layer (IPL); the DVP was segmented from the IPL to the outer plexiform layer (OPL); and the RVN was segmented from the ILM to the OPL.

C. DATA PRE-PROCESSING AND OCTA FEATURE EXTRACTION
The first step in applying a machine learning model to predict DR from images is to extract useful features from the images. It has been shown that the wavelet transform can capture texture features of an image at multiple resolutions and that such features can be used to classify images with high accuracy [22], [23]. The wavelet transform has also been applied to fundus images for the diagnosis of glaucoma [24], [25].

We first applied the two-dimensional discrete wavelet transform (DWT) [26] to the images to extract features in different frequency bands. Specifically, we employed the wavedec2 function in Matlab to implement the two-dimensional DWT with the biorthogonal wavelet bior1.1. Of note, the DWT has also been employed to extract features for the classification of fundus images in glaucoma [24], [27]. Since the images were of size 304 × 304 pixels, we performed an 8-level DWT (⌊log2(304)⌋ = 8). As shown in Figure 1, at the ith level (i = 1, 2, . . . , 8), we used the biorthogonal wavelet bior1.1 to decompose the image in the LL_{i-1} band into four images in four frequency bands: HH_i, HL_i, LH_i, and LL_i, where LL_0 was the original image. Therefore, each image yielded 25 images of different resolutions, with a total of 25 frequency bands.

FIGURE 1. Block diagram of the two-dimensional DWT. g_i[n] and h_i[n] are the impulse responses of a low-pass wavelet filter and a high-pass wavelet filter, respectively. The output of each filter is down-sampled by a factor of 2.

For each image, the energy in each of the 25 frequency bands was calculated and standardized using the z-score, which generated 25 features that were further used to classify images. Specifically, the image of the ith individual in the jth frequency band was represented as a p × q matrix D_{ij}. The energy of the band was

E_{ij} = \frac{1}{pq} \sum_{l=1}^{p} \sum_{k=1}^{q} D_{ij}^2(l, k),

where D_{ij}(l, k) is the entry of matrix D_{ij} in the lth row and the kth column. Suppose that there were n individuals in the dataset; define Ē_j = (1/n) Σ_{i=1}^{n} E_{ij} and σ_j^2 = (1/(n-1)) Σ_{i=1}^{n} (E_{ij} - Ē_j)^2. The z-score of E_{ij} is then z_{ij} = (E_{ij} - Ē_j)/σ_j. Finally, we represented each image of the ith individual as a 25 × 1 feature vector x_i = [z_{i,1}, . . . , z_{i,25}]^T, which was then used by a machine learning model for image classification.

In our dataset, each eye has three images: DVP, RVN, and SVP. We can use any one, two, or all three of these images to predict DR. If we use k (k = 2 or 3) images, we concatenate the feature vectors of these k images into a 25k × 1 vector as the input to a machine learning model. Of note, prior studies have shown that the energy distribution in different frequency bands provides discriminative power for image classification [22], [23].
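To make this feature-extraction step concrete, below is a minimal sketch in Python, using PyWavelets' wavedec2 as a stand-in for the Matlab wavedec2 function named above; the placeholder images, function names, and grids are illustrative assumptions, not the authors' code.

```python
# Sketch of the 25 wavelet-energy features (assumption: PyWavelets' wavedec2
# plays the role of Matlab's wavedec2 used in the paper).
import numpy as np
import pywt

def subband_energies(image: np.ndarray, wavelet: str = "bior1.1", level: int = 8) -> np.ndarray:
    """Return the 3*level + 1 = 25 subband energies E_ij of one OCTA image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # coeffs[0] is the coarsest approximation band; coeffs[1:] are the
    # (horizontal, vertical, diagonal) detail bands from coarsest to finest.
    bands = [coeffs[0]] + [band for details in coeffs[1:] for band in details]
    # Energy of a p x q band D is (1/(pq)) * sum(D^2), i.e. the mean squared coefficient.
    return np.array([np.mean(np.square(band)) for band in bands])

def zscore_features(energies: np.ndarray) -> np.ndarray:
    """Standardize each frequency band across the n individuals (the z_ij in the text)."""
    mean = energies.mean(axis=0)
    std = energies.std(axis=0, ddof=1)  # 1/(n-1) definition, matching sigma_j above
    return (energies - mean) / std

# Example with placeholder data: n images of size 304 x 304 -> an n x 25 feature matrix.
# Concatenating the SVP, DVP, and RVN vectors of one eye gives a 75 x 1 feature vector.
rng = np.random.default_rng(0)
images = rng.random((10, 304, 304))
X = zscore_features(np.stack([subband_energies(im) for im in images]))
print(X.shape)  # (10, 25)
```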


D. DATA ANALYSIS
We employed four machine learning models to classify images based on the wavelet features. These four models are logistic regression (LR), support vector machine (SVM), logistic regression regularized with the elastic net penalty (LR-EN), and the gradient boosting tree named XGBoost. LR and LR-EN were implemented with the software glmnet [28]; SVM and XGBoost were implemented with the svm function in the R package e1071 [29] and the XGBoost software [30], respectively. Of note, boosting trees are particularly powerful in supervised learning and have frequently won competitions [30], [31], sometimes outperforming deep neural networks. The mathematical formulation of LR and SVM can be found in machine learning textbooks [33], [34]; detailed descriptions of LR-EN and XGBoost are given in [28] and [30], respectively. Next, we briefly describe these four methods.

As mentioned earlier, the feature vector of the ith individual is denoted as x_i. Let y_i = -1 or 1 be the class label of the ith individual. The logistic regression model assumes the probability p(y_i | x_i) = 1/(1 + e^{-y_i(β_0 + x_i^T β)}), where the vector β and the scalar β_0 contain the model parameters. LR-EN finds (β_0, β) by minimizing the negative log-likelihood of the data regularized by the elastic net penalty [28]. More specifically, the following optimization problem is solved to find (β_0, β):

\min_{\beta_0, \beta} \; \sum_{i=1}^{n} \log\left(1 + e^{-y_i(\beta_0 + x_i^T \beta)}\right) + \lambda\left(\alpha \|\beta\|_1 + (1 - \alpha)\|\beta\|_2^2\right), \qquad (1)

where ‖β‖_k, k = 1, 2, is the l_k-norm of β, and λ > 0 and 0 ≤ α ≤ 1 are two hyperparameters that can be determined with cross-validation (CV). If we set λ = 0 in (1), the solution of (1) gives the parameters of LR. Since the number of data samples is relatively small and the datasets under consideration suffer from quasi-complete separation [35], the solution of (1) with λ = 0 is not stable. To overcome this problem, we implemented LR using (1) with α = 0 and λ > 0. The hyperparameter λ was determined using cross-validation [36].
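As an illustration of (1), the sketch below uses scikit-learn's LogisticRegressionCV as an analogue of the glmnet implementation used in the paper; here C plays the role of 1/λ and l1_ratio the role of α, although the two packages scale the penalty slightly differently, and the candidate grids and accuracy scoring are assumptions.

```python
# Hedged sketch of LR-EN with hyperparameters chosen by leave-one-out CV
# (an analogue of Eq. (1); not the authors' glmnet code).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import LeaveOneOut

def fit_lr_en(X: np.ndarray, y: np.ndarray) -> LogisticRegressionCV:
    model = LogisticRegressionCV(
        penalty="elasticnet",
        solver="saga",                         # the sklearn solver that supports elastic net
        l1_ratios=np.linspace(0.0, 1.0, 11),   # candidate values of alpha in [0, 1]
        Cs=10,                                 # 10 candidate values of C (roughly 1/lambda)
        cv=LeaveOneOut(),
        scoring="accuracy",
        max_iter=10000,
    )
    return model.fit(X, y)

# Plain LR, as used in the paper, corresponds to alpha = 0 with a small ridge
# penalty (lambda > 0), which keeps the fit stable under quasi-complete separation.
```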
SVM finds the decision boundary β_0 + x^T β = 0 by solving the following optimization problem:

\min_{\beta_0, \beta} \; \frac{C}{n} \sum_{i=1}^{n} \max\left\{0,\; 1 - y_i\left(\beta_0 + x_i^T \beta\right)\right\} + \|\beta\|_2^2, \qquad (2)

where C > 0 is a hyperparameter. The SVM in (2) finds a linear decision boundary in the feature space. We can also use the kernel trick to find a nonlinear decision boundary in the feature space. In this paper, we used the Gaussian kernel, defined as K(x, x') = exp(-γ‖x - x'‖_2^2), where γ > 0 is another hyperparameter of the model.
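A corresponding sketch of the Gaussian-kernel SVM, with scikit-learn's SVC standing in for the e1071 svm function used in the paper; note that SVC weights the hinge loss differently from (2), so its C is comparable only up to a constant rescaling, and the default values shown are assumptions.

```python
# Minimal RBF (Gaussian) kernel SVM analogue of Eq. (2); C and gamma are the
# two hyperparameters named in the text.
from sklearn.svm import SVC

def make_gaussian_svm(C: float = 1.0, gamma: float = 0.1) -> SVC:
    # The RBF kernel is K(x, x') = exp(-gamma * ||x - x'||^2).
    return SVC(kernel="rbf", C=C, gamma=gamma)
```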
We used the nested CV procedure [37] to train the four models and estimate their classification errors. The nested CV procedure has two loops, both of which employed a leave-one-out (LOO) approach. The inner loop was used to tune the hyperparameters and train the model, while the outer loop was used to estimate the classification error. Since the samples used in the outer loop for error estimation were never used by the training process in the inner loop, the nested CV provided an unbiased estimate of the generalization error. First, one of the 82 samples was held out. The remaining 81 samples were used by an LOO procedure, referred to as the inner CV loop and described in detail below, to train the model and select the optimal values of the hyperparameters. After the values of the hyperparameters were selected, the 81 samples were used to fit the model, which was then used to find the error on the one sample that was held out. This process, referred to as the outer loop, was repeated 82 times until all samples had been held out and used to calculate the error. This error is the estimated classification error. We now describe the inner CV loop. One of the 81 samples was left out, and the remaining 80 samples were used to train the model for a set of candidate values of the hyperparameters; the trained model was then used to predict the label of the sample left out. This process was repeated 81 times until every sample had been left out and its label predicted. The predicted labels of the 81 samples were used to calculate the CV error, and the value of the hyperparameter(s) that yielded the smallest CV error was selected as the optimal value [37].


We use sensitivity, specificity, and accuracy as performance measures. Sensitivity measures the proportion of actual positives that are correctly identified and is defined in terms of true positives (TP) and false negatives (FN) as sensitivity = TP/(TP + FN). Specificity measures the proportion of actual negatives that are correctly identified and is computed from true negatives (TN) and false positives (FP) as specificity = TN/(TN + FP). Accuracy is the degree of closeness of measurements of a quantity to that quantity's true value and is calculated as accuracy = (TP + TN)/(TP + FN + TN + FP) [38].
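For reference, the three measures above (and the AUC reported in the next section) can be computed from a confusion matrix as in this small sketch; the label convention, with DR coded as the positive class, is an assumption.

```python
# Sensitivity, specificity, accuracy, and (optionally) AUC from predictions.
# Assumes binary labels with DR as the positive (larger) class, e.g. HC = 0, DR = 1.
from sklearn.metrics import confusion_matrix, roc_auc_score

def classification_metrics(y_true, y_pred, y_score=None):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    metrics = {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }
    if y_score is not None:
        metrics["auc"] = roc_auc_score(y_true, y_score)  # AUC as reported in Table 2
    return metrics
```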

III. RESULTS
The OCTA image dataset included 114 images from 38 eyes of 19 DR patients and 132 images from 44 eyes of 25 HC subjects. Tables 1 and 2 show the performance of the four machine learning models in predicting DR from one, two, or three images. As shown in Table 1, LR-EN and LR using the three images (DVP+RVN+SVP) offered the highest sensitivity (0.84) and specificity (0.80). As shown in Table 2, the AUCs of LR-EN and LR using the three images are 0.83 and 0.84, respectively, which are slightly lower than the AUCs of LR using one image, SVP (0.85), or two images, DVP and SVP (0.85), but are higher than or equal to the AUCs of LR-EN and LR using other images and the AUCs of SVM and XGBoost. Table 3 shows that LR-EN and LR using the three images provided the best diagnostic accuracy (0.82) among all four classifiers.

TABLE 1. Sensitivity and specificity of four machine learning methods for predicting DR.
TABLE 2. AUC of four machine learning methods for predicting DR.
TABLE 3. Diagnostic accuracy measured in four machine learning methods.

Figure 2 depicts the receiver operating characteristic (ROC) curves and AUCs of LR-EN using different images. LR-EN using the three images offers almost the same performance as using one image, the SVP, and better performance than LR-EN using other image combinations. Figure 3 shows the ROC curves and AUCs of LR-EN, LR, SVM, and XGBoost using the three images. It is clearly seen from Figure 3 that LR-EN and LR have almost the same performance but outperform SVM and XGBoost.

FIGURE 2. Receiver operating characteristic curve of LR-EN using one, two, or three images to predict DR.
FIGURE 3. Receiver operating characteristic curve of four machine learning methods using three images (DVP+RVN+SVP) to predict DR. LR-EN and LR have almost the same performance, and they outperform SVM and XGBoost.

IV. DISCUSSION
We herein clarified the applicability of machine learning algorithms to predict diabetic retinopathy (DR) using 3 × 3 mm scans of optical coherence tomography angiography (OCTA) images. Subtle retinal microvascular alterations due to DR can be analyzed quantitatively based on OCTA images. The LR-EN and LR classifiers demonstrated the best diagnostic performance in all classification tasks in our current study. The LR-EN or LR algorithm would provide an effective strategy for clinicians to diagnose early stages of DR.

Generally, in our current dataset, using all three images improved the performance in terms of diagnostic accuracy and sensitivity when compared to the cases using only one or two images. However, using three images did not necessarily improve the performance in terms of the AUC when compared to the cases using the one image of SVP or the two images of DVP and SVP. This implies that the SVP image contains the most information that can help discriminate DR from normal tissue, which can explain why adding the other two images did not significantly change the performance of the classifier.


Sensitivity is an important criterion for any screening and diagnostic prediction system [15]. Studies have shown that the sensitivity of automatic DR screening ranges from 75% to 97.5%, with comparable accuracy [7], [12], [19], [20], [39], [40]. The 84% sensitivity of our system represents its capability to identify individual eyes with DR among healthy controls. Similarly, specificity is also an important factor because it represents the capability of detecting subjects who do not require a referral to an ophthalmologist. Our study indicated that the LR-EN and LR algorithms achieved the best performance, with the maximum diagnostic accuracy, in all classification tasks. As supported by the results in Table 3, we can observe that, in all performance metrics, the LR-EN and LR algorithms demonstrated better diagnostic proficiency than the SVM and XGBoost algorithms.

The machine learning models for analyzing angiography images did not require image alignment, since the wavelet features were computed directly from the raw angiography images. The machine learning models, particularly LR-EN and LR, provided better performance than the traditional vessel density analysis (Dbox), which provided a sensitivity of 0.70 and a specificity of 0.65 and used the vessel density of the DVP (VDd) to discriminate DR from HC [41]. The improvement of the machine learning models was mainly due to the following two factors. First, the machine learning models exploited wavelet features of the angiography images of each eye, whereas the VDd analysis used only the average density; wavelet features can capture patterns in an image at multiple resolutions, which provides more information and improves discriminative power. Second, the machine learning models were designed based on certain optimality criteria to discriminate images in the vector space of features, which can exploit the discriminative information appropriately [23], [24], [42]. XGBoost is a very efficient implementation of the gradient boosting tree [30]. Instead of fitting a tree to the gradient of the loss function at each iteration, as normally done by gradient boosting, XGBoost uses a second-order approximation of the loss function to build the trees. The hyperparameters of XGBoost include the maximum depth of each tree, the number of trees, and the learning rate. In this work, LR-EN and LR have almost the same performance, and they outperform SVM and XGBoost.
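For illustration, a hedged sketch of such an XGBoost classifier, with the three hyperparameters above tuned by leave-one-out cross-validation, is given below; the candidate values are placeholders rather than the settings used in the paper.

```python
# Sketch of an XGBoost classifier tuned over the hyperparameters named above
# (maximum tree depth, number of trees, learning rate); grids are illustrative.
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from xgboost import XGBClassifier

param_grid = {
    "max_depth": [2, 3, 4],
    "n_estimators": [50, 100, 200],
    "learning_rate": [0.05, 0.1, 0.3],
}
xgb_search = GridSearchCV(XGBClassifier(), param_grid,
                          cv=LeaveOneOut(), scoring="accuracy")
# xgb_search.fit(X, y)  # X: n x 75 wavelet features; y: 0/1 labels (HC/DR)
```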
Although our findings indicated that the LR-EN and LR classification algorithms had high sensitivity, specificity, and diagnostic accuracy in identifying DR, there are several limitations in this study. First, the sample size is relatively small for each cohort. In future studies, we plan to include multiple imaging centers and a much more extensive OCTA database to test our AI screening algorithm for practical performance in retinal clinics. Second, we did not use retinal images to test the applicability of our algorithm for detecting DR. There are many important components that should be included in a grading system for detecting DR using retinal images, such as microaneurysms, hemorrhages, hard and soft exudates, and blood vessel morphology [43]–[45]. Third, we only analyzed the 3 × 3 mm OCTA images from mild DR patients; patients with moderate and severe DR were not included. Since mild DR is hard to distinguish from normal individuals, the sensitivity and accuracy for detecting DR may have been relatively lower. Future studies should enlarge the sample size and include moderate and severe DR patients. Finally, we did not use other data, such as thickness maps of intra-retinal layers and retinal angiography, which, if incorporated into the machine learning process, may further improve the diagnostic accuracy and applicability.

V. CONCLUSION
In summary, in this evaluation of 3 × 3 mm OCTA images from DR patients and healthy individuals, the LR-EN algorithm had high sensitivity and specificity in identifying DR, which may be promising in facilitating the early diagnosis of DR. Further large-scale and multicenter studies are necessary to assess the applicability of the LR-EN algorithm in DR and related eye diseases to improve vision outcomes.

ACKNOWLEDGMENT
All authors have no proprietary interest in any materials or methods.

REFERENCES
[1] A. M. Hendrick, M. V. Gibson, and A. Kulshreshtha, "Diabetic retinopathy," Prim. Care, vol. 42, no. 3, pp. 451–464, 2015.
[2] D. S. W. Ting, G. C. M. Cheung, and T. Y. Wong, "Diabetic retinopathy: Global prevalence, major risk factors, screening practices and public health challenges: A review," Clin. Exp. Ophthalmol., vol. 44, no. 4, pp. 260–277, May 2016.


[3] E. J. Topol, "High-performance medicine: The convergence of human and artificial intelligence," Nature Med., vol. 25, no. 1, pp. 44–56, 2019.
[4] J. He, S. L. Baxter, J. Xu, J. Xu, X. Zhou, and K. Zhang, "The practical implementation of artificial intelligence technologies in medicine," Nature Med., vol. 25, no. 1, pp. 30–36, Jan. 2019.
[5] P. Hamet and J. Tremblay, "Artificial intelligence in medicine," Metabolism, vol. 69, pp. S36–S40, Apr. 2017.
[6] R. C. Deo, "Machine learning in medicine," Circulation, vol. 132, no. 20, pp. 1920–1930, 2015.
[7] D. S. W. Ting et al., "Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes," J. Amer. Med. Assoc., vol. 318, no. 22, pp. 2211–2223, 2017.
[8] D. K. Hwang, C. C. Hsu, K. J. Chang, D. Chao, C. H. Sun, Y. C. Jheng, A. A. Yarmishyn, J. C. Wu, C. Y. Tsai, M. L. Wang, C. H. Peng, K. H. Chien, C. L. Kao, T. C. Lin, L. C. Woung, S. J. Chen, and S. H. Chiou, "Artificial intelligence-based decision-making for age-related macular degeneration," Theranostics, vol. 9, no. 1, pp. 232–245, 2019.
[9] H. Lin et al., "Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: A multicentre randomized controlled trial," EClinicalMedicine, vol. 9, pp. 52–59, Mar. 2019.
[10] A. A. Jammal, A. C. Thompson, E. B. Mariottoni, S. I. Berchuck, C. N. Urata, T. Estrela, S. M. Wakil, V. P. Costa, and F. A. Medeiros, "Human versus machine: Comparing a deep learning algorithm to human gradings for detecting glaucoma on fundus photographs," Amer. J. Ophthalmol., vol. 211, pp. 123–131, Mar. 2020.
[11] S. D. Klyce, "The future of keratoconus screening with artificial intelligence," Ophthalmology, vol. 125, no. 12, pp. 1872–1873, Dec. 2018.
[12] N. Asiri, M. Hussain, F. Al Adel, and N. Alzaidi, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey," Artif. Intell. Med., vol. 99, Aug. 2019, Art. no. 101701.
[13] M. Guo, M. Zhao, A. M. Y. Cheong, H. Dai, A. K. C. Lam, and Y. Zhou, "Automatic quantification of superficial foveal avascular zone in optical coherence tomography angiography implemented with deep learning," Vis. Comput. Ind., Biomed., Art, vol. 2, no. 1, p. 21, Dec. 2019.
[14] D. Le, M. Alam, C. K. Yao, J. I. Lim, Y.-T. Hsieh, R. V. P. Chan, D. Toslak, and X. Yao, "Transfer learning for automated OCTA detection of diabetic retinopathy," Transl. Vis. Sci. Technol., vol. 9, no. 2, p. 35, Jul. 2020.
[15] T. T. Hormel, H. Xiong, B. Wang, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, "Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography," Biomed. Opt. Express, vol. 10, no. 7, pp. 3257–3268, 2019.
[16] M. Heisler, S. Karst, J. Lo, Z. Mammo, T. Yu, S. Warner, D. Maberley, M. F. Beg, E. V. Navajas, and M. V. Sarunic, "Ensemble deep learning for diabetic retinopathy detection using optical coherence tomography angiography," Transl. Vis. Sci. Technol., vol. 9, no. 2, p. 20, Apr. 2020.
[17] J. Lo, M. Heisler, V. Vanzan, S. Karst, I. Z. Matovinovic, S. Loncaric, E. V. Navajas, M. F. Beg, and M. V. Šarunic, "Microvasculature segmentation and intercapillary area quantification of the deep vascular complex using transfer learning," Transl. Vis. Sci. Technol., vol. 9, no. 2, p. 38, Jul. 2020.
[18] Y. Guo, A. Camino, J. Wang, D. Huang, T. S. Hwang, and Y. Jia, "MEDnet, a neural network for automated detection of avascular area in OCT angiography," Biomed. Opt. Express, vol. 9, no. 11, pp. 5147–5158, 2018.
[19] M. D. Abràmoff, P. T. Lavin, M. Birch, N. Shah, and J. C. Folk, "Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices," Yearbook Paediatric Endocrinol., vol. 1, p. 39, Sep. 2019.
[20] M. Alam, D. Le, J. I. Lim, R. V. P. Chan, and X. Yao, "Supervised machine learning based multi-task artificial intelligence classification of retinopathies," J. Clin. Med., vol. 8, no. 6, p. 872, Jun. 2019.
[21] H. S. Sandhu, N. Eladawi, M. Elmogy, R. Keynton, O. Helmy, S. Schaal, and A. El-Baz, "Automated diabetic retinopathy detection using optical coherence tomography angiography: A pilot study," Brit. J. Ophthalmol., vol. 102, no. 11, pp. 1564–1569, Nov. 2018.
[22] T. Chang and C.-C. J. Kuo, "Texture analysis and classification with tree-structured wavelet transform," IEEE Trans. Image Process., vol. 2, no. 4, pp. 429–441, Oct. 1993.
[23] K. Huang and S. Aviyente, "Wavelet feature selection for image classification," IEEE Trans. Image Process., vol. 17, no. 9, pp. 1709–1720, Sep. 2008.
[24] S. Dua, U. R. Acharya, P. Chowriappa, and S. V. Sree, "Wavelet-based energy features for glaucomatous image classification," IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 1, pp. 80–87, Jan. 2012.
[25] S. Maheshwari, R. B. Pachori, and U. R. Acharya, "Automated diagnosis of glaucoma using empirical wavelet transform and correntropy features extracted from fundus images," IEEE J. Biomed. Health Informat., vol. 21, no. 3, pp. 803–813, May 2017.
[26] D. Sundararajan, Discrete Wavelet Transform: A Signal Processing Approach. Hoboken, NJ, USA: Wiley, 2016.
[27] B. S. Kirar and D. K. Agrawal, "Computer aided diagnosis of glaucoma using discrete and empirical wavelet transform from fundus images," IET Image Process., vol. 13, no. 1, pp. 73–82, Jan. 2019.
[28] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," J. Stat. Softw., vol. 33, no. 1, pp. 1–22, 2010.
[29] D. Meyer, "Support vector machines: The interface to libsvm in package e1071," R package vignette, 2019. [Online]. Available: https://cran.r-project.org/web/packages/e1071/vignettes/svmdoc.pdf
[30] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proc. 22nd ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2016, pp. 785–794.
[31] C. Sutton, L. M. Ghiringhelli, T. Yamamoto, Y. Lysogorskiy, L. Blumenthal, T. Hammerschmidt, J. R. Golebiowski, X. Liu, A. Ziletti, and M. Scheffler, "Crowd-sourcing materials-science challenges with the NOMAD 2018 Kaggle competition," NPJ Comput. Mater., vol. 5, no. 1, pp. 1–11, Dec. 2019.
[32] T. W. Neller, "AI education matters: Lessons from a Kaggle click-through rate prediction competition," AI Matters, vol. 4, no. 3, pp. 5–7, 2018.
[33] S. Ben-David and S. Shalev-Shwartz, Understanding Machine Learning: From Theory to Algorithms. Cambridge, U.K.: Cambridge Univ. Press, 2014.
[34] J. H. Friedman, R. Tibshirani, and T. Hastie, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY, USA: Springer, 2009.
[35] A. Albert and J. A. Anderson, "On the existence of maximum likelihood estimates in logistic regression models," Biometrika, vol. 71, no. 1, pp. 1–10, 1984.
[36] A. Albert and J. A. Anderson, "On the existence of maximum likelihood estimates in logistic regression models," Biometrika, vol. 71, no. 1, pp. 1–10, 1984.
[37] S. Varma and R. Simon, "Bias in error estimation when using cross-validation for model selection," BMC Bioinformatics, vol. 7, no. 1, p. 91, 2006.
[38] R. Parikh, A. Mathai, S. Parikh, S. G. Sekhar, and R. Thomas, "Understanding and using sensitivity, specificity and predictive values," Indian J. Ophthalmol., vol. 56, no. 1, pp. 45–50, 2008.
[39] S. K. Padhy, B. Takkar, R. Chawla, and A. Kumar, "Artificial intelligence in diabetic retinopathy: A natural step to the future," Indian J. Ophthalmol., vol. 67, no. 7, pp. 1004–1009, 2019.
[40] V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Kim, R. Raman, P. C. Nelson, J. L. Mega, and D. R. Webster, "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs," J. Amer. Med. Assoc., vol. 316, no. 22, pp. 2402–2410, 2016.
[41] Z. Liu, J. Hong, J. Townsend, and J. Wang, "Retinal tissue perfusion reduction best discriminates early stage diabetic retinopathy in patients with type 2 diabetes mellitus," Retina, vol. 41, no. 3, pp. 546–554, 2021.
[42] B. J. Erickson, P. Korfiatis, Z. Akkus, and T. L. Kline, "Machine learning for medical imaging," Radiographics, vol. 37, no. 2, pp. 505–515, 2017.
[43] K. Akyol, B. Sen, and S. Bayir, "Automatic detection of optic disc in retinal image by using keypoint detection, texture analysis, and visual dictionary techniques," Comput. Math. Methods Med., vol. 2016, Mar. 2016, Art. no. 6814791.
[44] E. Imani, H.-R. Pourreza, and T. Banaee, "Fully automated diabetic retinopathy screening using morphological component analysis," Comput. Med. Imag. Graph., vol. 43, pp. 78–88, Jul. 2015.
[45] J. Nayak, P. S. Bhat, and U. R. Acharya, "Automatic identification of diabetic maculopathy stages using fundus images," J. Med. Eng. Technol., vol. 33, no. 2, pp. 119–129, 2009.
