
REVIEW

Automated deep learning in ophthalmology: AI that can build AI

Ciara O'Byrne a,b, Abdallah Abbas a,c, Edward Korot a,d and Pearse A. Keane a,e

Purpose of review
The purpose of this review is to describe the current status of automated deep learning in healthcare and to
explore and detail the development of these models using commercially available platforms. We highlight
key studies demonstrating the effectiveness of this technique and discuss current challenges and future
directions of automated deep learning.
Recent findings
There are several commercially available automated deep learning platforms. Although specific features
differ between platforms, they utilise the common approach of supervised learning. Ophthalmology is an
exemplar speciality in the area, with a number of recent proof-of-concept studies exploring classification of
retinal fundus photographs, optical coherence tomography images and indocyanine green angiography
images. Automated deep learning has also demonstrated impressive results in other specialities such as
dermatology, radiology and histopathology.
Summary
Automated deep learning allows users without coding expertise to develop deep learning algorithms. It is
rapidly establishing itself as a valuable tool for those with limited technical experience. Despite residual
challenges, it offers considerable potential in the future of patient management, clinical research and
medical education.
Video abstract
http://links.lww.com/COOP/A44
Keywords
artificial medical intelligence, automated deep learning, code-free deep learning, deep learning

a Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; b Trinity College School of Medicine, Dublin, Ireland; c University College London Medical School, London, UK; d Byers Eye Institute, Stanford University, Stanford, California, USA; and e NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, London, UK

Correspondence to Dr Pearse A. Keane, Moorfields Eye Hospital NHS Foundation Trust, London, UK. E-mail: [email protected]

Curr Opin Ophthalmol 2021, 32:406–412

DOI:10.1097/ICU.0000000000000779

INTRODUCTION

Aging populations, changing disease patterns and the rise in patient autonomy and expectations are contributing to unprecedented pressures on our healthcare systems [1–3]. Rising levels of clinician burn-out [4], enormous administrative burdens and strained resources additionally intensify the situation, resulting in a healthcare ecosystem that is unsustainable and not fit for purpose. Deep learning, a subtype of artificial intelligence (AI) inspired by the neural architecture of the human brain, has risen to prominence as a potential solution to some of these challenges. In fact, a report from 2020 predicted that the Global AI in Healthcare Market would grow from USD 4.9 billion in 2020 to USD 45.2 billion by 2026 [5]. However, despite considerable promise, deep learning is limited in healthcare by the need for highly specialised technical expertise, advanced computing resources and significant financial investment.

ARTIFICIAL INTELLIGENCE THAT CAN BUILD ARTIFICIAL INTELLIGENCE

More recently, automated deep learning has emerged, showing promising results across a number of areas [6,7,8]. Described by the New York Times as 'AI That Can Build AI' [9], it is a technique that automates the process of preprocessing, network architecture selection and hyperparameter tuning, allowing those without coding expertise to develop models.
It is commercially available on a number of different platforms, including Amazon Rekognition Custom Labels (Amazon), Apple Create ML (Apple), Baidu EasyDL (Baidu), Clarifai Train (Clarifai), Google Cloud AutoML Vision (Google), Huawei ModelArts ExeML (Huawei), MedicMind Deep Learning Training Platform (MedicMind) and Microsoft Azure Custom Vision (Microsoft). Automated deep learning has sparked considerable excitement within the fields of healthcare and research by offering to obviate the barriers that have traditionally limited the accessibility of deep learning. With its heavy use of imaging, ophthalmology is particularly suited to applications of deep learning. To date, it has been one of the leading specialities in the exploration of this technique [6,10,11]. By enabling clinicians to create their own deep learning models, automated deep learning may truly maximise the way in which we harness the power of data and AI, leading to novel discoveries, applications and improvements in patient care.

KEY POINTS

• Automated deep learning allows users with no coding expertise to develop deep learning algorithms.
• It has shown impressive results in comparison to bespoke deep learning models across a number of specialities, including ophthalmology, histopathology and radiology.
• It is available on a number of commercial platforms and, with technological advances, is likely to become even more accessible in the future.
• Automated deep learning represents a promising tool in the future of patient care, clinical research and medical education.

THE AUTOMATED DEEP LEARNING PROCESS

Automated machine learning (AutoML) describes a set of techniques that assist with dataset management, model selection and optimisation of hyperparameters. Methods include Auto-WEKA, Auto-Sklearn, TuPAQ and AlphaD3M [12]. Although AutoML automates part of the machine learning pipeline, it still requires coding expertise [13]. Automated deep learning is based on an approach called neural architecture search and typically utilises reinforcement learning algorithms to automatically develop a deep learning architecture. Reinforcement learning describes a goal-oriented reward process in which the algorithm learns through trial and error.

Although specific features vary between platforms, the basic principles for the development of an automated deep learning model are similar. The graphical user interfaces (GUIs) are intuitive and offer drag-and-drop or simple upload tools, removing the need for any coding expertise. Furthermore, cloud-based approaches have removed the need for large local computing power. For each of the commercial platforms examined by the authors, the GUI consists of three common components: data upload, data visualization, and model evaluation. Most platforms (Amazon, Apple, Clarifai, Google and Microsoft) offer the option to develop models for image classification, segmentation and object detection, with Google additionally providing the facility to create models using tabular data. We will now describe the development process of an image classifier model in detail (see Fig. 1).

FIGURE 1. The development process of an automated deep learning image classifier model.

Image classification

Automated deep learning image classifiers use supervised learning, a machine learning technique in which the model discovers patterns from labelled input data and adjusts its internal parameters to output a prediction algorithm with the lowest possible error rate [14]. Automated deep learning does not remove the initial task of data preparation. This is a critical stage, and the clinician must be cognisant of the importance of well labelled datasets that are representative of the use-case and target population. It is also imperative that data ethics and governance are adhered to if public datasets are not being used. Dataset curation and labelling therefore represent a persistent pain-point within automated deep learning. A number of companies have begun to release services to address this challenge, including Amazon Automate Data Labeling, Clarifai Scribe Label and Google Cloud AutoML Vision Human Labeling.

Once the dataset has been curated, the project can be created and named directly via the GUI. For those using Amazon, Clarifai, MedicMind and Microsoft, the dataset should be preorganised into labelled folders before upload. Amazon additionally allows the project to be linked to a cloud bucket of labelled images. Google offers the user the option either to upload the dataset directly from the computer using the GUI or to convert it into .csv files locally and upload via a cloud storage bucket; the latter is useful for managing large datasets and labelsets. MedicMind also allows .csv files to be used, although not through a cloud bucket. Apple does not use cloud computing, and labels are assigned according to local folders. After the dataset has been uploaded, the labels can be reviewed and amended if necessary.
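The exact upload mechanics differ by platform, but the preparation step is broadly the same: organise images into labelled folders, or generate a simple manifest mapping each image location to a label and, optionally, a split. As a rough sketch of that preparation step (not the specification of any particular platform's format), the snippet below walks a hypothetical dataset/<label>/<image>.jpg folder tree and writes a CSV of the form set, image URI, label; the folder layout, bucket name and column order are illustrative assumptions.

```python
import csv
import random
from pathlib import Path

# Assumed local layout: dataset/<label_name>/<image>.jpg (hypothetical)
DATASET_DIR = Path("dataset")
BUCKET = "gs://my-oct-images"            # hypothetical cloud storage bucket
SPLITS = [("TRAIN", 0.8), ("VALIDATION", 0.1), ("TEST", 0.1)]

def assign_split() -> str:
    """Randomly assign an image to a split according to SPLITS proportions."""
    r, cumulative = random.random(), 0.0
    for name, fraction in SPLITS:
        cumulative += fraction
        if r < cumulative:
            return name
    return SPLITS[-1][0]

with open("manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for image_path in sorted(DATASET_DIR.glob("*/*.jpg")):
        label = image_path.parent.name                      # folder name is the label
        uri = f"{BUCKET}/{label}/{image_path.name}"
        writer.writerow([assign_split(), uri, label])
```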


Metrics are also supplied, alerting the user to the distribution of images per label. The dataset can then be either manually or automatically split into three parts. The training set receives approximately 60–80% of the images and is important for network parameter selection. The remainder are divided between a validation set, used to optimise the model parameters, and a held-out independent test set, which ultimately assesses the model performance.
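Where a manual split is preferred, these proportions can be reproduced offline before upload. A minimal sketch using scikit-learn, with a fabricated list of image paths and labels and an illustrative 70/15/15 split:

```python
from sklearn.model_selection import train_test_split

# Hypothetical labelled dataset: parallel lists of image paths and class labels.
image_paths = [f"img_{i:04d}.jpg" for i in range(1000)]
labels = ["drusen" if i % 2 else "normal" for i in range(1000)]

# Carve out 70% for training, stratified so each split keeps the label balance.
train_x, rest_x, train_y, rest_y = train_test_split(
    image_paths, labels, train_size=0.70, stratify=labels, random_state=42
)
# Split the remaining 30% evenly into validation (tuning) and held-out test sets.
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.50, stratify=rest_y, random_state=42
)
print(len(train_x), len(val_x), len(test_x))  # 700 150 150
```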
Once the user is satisfied with the uploaded dataset, the model may be trained. Following the training process, detailed statistics for the model performance are provided, which vary between platforms. A confidence threshold is provided by Amazon, Clarifai, Google and Microsoft, and can be altered to generate new precision and recall values on all of these platforms except Amazon. Confusion matrices (Apple, Clarifai, Google and MedicMind) allow the user to visualise the true positives and false negatives and are essential for model evaluation. Precision-recall curves are provided by Amazon, Clarifai and Google.
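The effect of moving the confidence threshold can also be reproduced offline once per-image scores are exported from a platform. The sketch below uses scikit-learn and invented scores to show how raising the threshold trades recall for precision, and how the underlying confusion matrix is derived:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Hypothetical binary ground truth (1 = disease) and model confidence scores.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
scores = np.array([0.92, 0.35, 0.78, 0.51, 0.64, 0.12, 0.88, 0.40, 0.30, 0.05])

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)   # positive call above the threshold
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(
        f"threshold={threshold:.1f} "
        f"precision={precision_score(y_true, y_pred):.2f} "
        f"recall={recall_score(y_true, y_pred):.2f} "
        f"TP={tp} FP={fp} FN={fn} TN={tn}"
    )
```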
MedicMind is the only automated deep learning platform examined that offers saliency maps, an approach being explored within the subfield of explainable AI [15]. Although both MedicMind and Google offer external validation, Google is the only platform familiar to the authors that allows external validation via batch prediction, enabling predictions to be generated efficiently on a large external dataset. Download of the final model is facilitated by the Google and Microsoft platforms.
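For readers curious about what a saliency map involves, the sketch below computes a simple gradient-based map with TensorFlow, the general class of approach scrutinised in [15]. The untrained MobileNetV2 backbone and random input are stand-ins for a real classifier and retinal image, not a description of any platform's implementation.

```python
import numpy as np
import tensorflow as tf

# Stand-in classifier and image; in practice these would be a trained model and a real scan.
model = tf.keras.applications.MobileNetV2(weights=None, classes=2)
image = tf.convert_to_tensor(np.random.rand(1, 224, 224, 3), dtype=tf.float32)

with tf.GradientTape() as tape:
    tape.watch(image)
    predictions = model(image, training=False)
    top_class_score = tf.reduce_max(predictions, axis=-1)

# Gradient of the top class score w.r.t. the input pixels; its magnitude is the saliency map.
gradients = tape.gradient(top_class_score, image)
saliency = tf.reduce_max(tf.abs(gradients), axis=-1)[0]   # collapse colour channels
print(saliency.shape)  # (224, 224)
```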



AUTOMATED DEEP LEARNING IN THE LITERATURE

Ophthalmology

In 2019, our group published one of the earliest demonstrations of automated deep learning for medical imaging classification [6]. Two clinicians with no coding expertise developed automated deep learning models using five publicly available datasets of retinal fundus images, optical coherence tomography (OCT) images, dermatological skin lesion images and chest x-ray images. The models were developed using Google AutoML Vision. Sensitivity (recall), specificity, positive predictive value (precision) and area under the precision-recall curve were used to evaluate model discriminative performance and diagnostic properties. Aside from the multilabel model trained using one of the chest x-ray datasets, we were able to demonstrate comparable accuracy to state-of-the-art bespoke deep learning systems. Similarly, Kim and colleagues used Google AutoML Vision to train two models in the classification of pachychoroid disease [11]. A dataset of 783 ultra-widefield indocyanine green (ICG) angiography images was curated and labelled by two retina specialists. The model performance was assessed using precision and recall, with accuracy levels then compared against both ophthalmic residents and retina specialists. The authors reported that their second model demonstrated better precision and accuracy than the retina specialists, with comparable recall and specificity. In comparison to the ophthalmic residents, the second model demonstrated inferior recall and specificity but greater precision and accuracy.

More recently, our group published a comprehensive performance and feature set review of six commercially available platforms using four open-source ophthalmic imaging datasets, including two retinal fundus photograph datasets and two OCT datasets [10]. Twenty-four automated deep learning models were trained by clinicians with none to limited coding expertise, and the specific features and performance of each application programming interface were evaluated. Notably, only Amazon, Apple, Google and Microsoft had the ability to process large imaging datasets and, of these, Apple's performance was considerably worse than Amazon's. We postulate that this may be due to Apple Create ML running locally rather than utilising large cloud computing resources. We also observed an improved performance with OCT classification models across all platforms in comparison to the fundus photograph models. We suspect this may be due to the increased dimensionality of colour fundus photographs. As we have previously highlighted, Google AutoML Vision is the only commercial deep learning platform allowing the user to carry out external validation via batch prediction. The caveat is that this must be carried out using the command line interface, thus requiring some degree of coding experience [16].
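The discriminative metrics reported in these studies can be recomputed from any exported set of predictions, for example the output of a batch prediction run on an external test set. A short scikit-learn sketch with invented labels and probabilities:

```python
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

# Hypothetical external validation export: ground truth and predicted probabilities.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0])
y_prob = np.array([0.91, 0.67, 0.22, 0.45, 0.83, 0.09, 0.58, 0.71, 0.18, 0.95, 0.40, 0.30])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                      # recall
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                              # precision
auprc = average_precision_score(y_true, y_prob)   # area under the precision-recall curve

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} AUPRC={auprc:.2f}")
```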
Other specialities

Automated deep learning has also been applied in a number of other specialities. As discussed, the automated deep learning model we described in 2019 demonstrated impressive results in the classification of chest x-rays, with one model showing comparable performance to bespoke deep learning models [6]. More recently, chest x-ray classification has been explored by other research groups using Microsoft Custom Vision [7] and Google AutoML Vision [17]. Google AutoML Vision has also been used to develop image classifier models in histopathology [18], neuro-histopathology [8] and otolaryngology [19], whereas Wang et al. utilised the Google AutoML object detection tool to develop a system capable of identifying and risk stratifying high-risk mutations in thyroid nodules [20]. Borkowski and colleagues compared Google AutoML Vision with Apple Create ML in a variety of lung and colon diagnostic pathology scenarios [21]. The authors trained twelve deep learning models in total (six on each platform) to differentiate between a variety of lung and colon pathologies. Although they did not determine any statistically significant differences in model performance between the two platforms, they observe that, whereas Apple Create ML models are limited to the local computer, Google AutoML Vision utilises Google Cloud, resulting in computing fees.

Tabular data

Current automated deep learning-based ophthalmology research has focused on interpreting fundus photographs, ICG angiography and OCT scans [10,11,22,23,24]. Structured data, based on a tabular format of columns and rows, represents an additional rich source of information relating to patient histories, diagnoses and prognoses. The potential benefit of such data within ophthalmology research is exemplified by projects such as the Intelligent Research in Sight (IRIS) Registry, which contains information from nearly 66 million patients [25]. Thus, the diversification of automated deep learning-based ophthalmology research to include models that take advantage of structured inputs represents a significant step forward.

Though the current literature is scarce, initial models built using structured datasets have shown promise. A recent study by Antaki et al. demonstrated that ophthalmologists with no programming experience could use electronic health record data to build predictive models for proliferative vitreoretinopathy, using an interactive application in MATLAB [26]. These code-free models achieved comparable F1 scores to manually coded models built on the same datasets. Moreover, novel tools specifically engineered for structured data have now been developed, such as Google Cloud's AutoML Tables. This platform enables clinicians to build classification, regression and time-series machine learning models without needing to code. Early work using this platform includes a model that predicts visual outcomes in patients receiving treatment for neovascular age-related macular degeneration, which achieved an area under the receiver operating characteristic curve of 0.892 [27]. To assist in scheduling, our group has trained a cataract surgery time prediction model, which predicts operating time with a mean absolute error of 5 min. Future work should aim to further examine the feasibility of such tools in comparison to conventional machine learning methods, emulating the numerous comparative studies exploring automated deep learning for image classification [6].
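The headline figures quoted for these tabular models correspond to standard metrics that can be verified from an exported prediction file. The sketch below, with invented values, shows how an area under the receiver operating characteristic curve, an F1 score and a mean absolute error of the kind reported above are computed:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, mean_absolute_error, f1_score

# Hypothetical classifier output (e.g., good vs. poor visual outcome).
y_true_cls = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob_cls = np.array([0.84, 0.31, 0.66, 0.72, 0.45, 0.20, 0.58, 0.62])
print("AUROC:", round(roc_auc_score(y_true_cls, y_prob_cls), 3))
print("F1:", round(f1_score(y_true_cls, (y_prob_cls >= 0.5).astype(int)), 3))

# Hypothetical regression output (e.g., predicted vs. actual operating time in minutes).
actual_minutes = np.array([22, 35, 18, 40, 27])
predicted_minutes = np.array([25, 31, 20, 46, 24])
print("MAE (min):", mean_absolute_error(actual_minutes, predicted_minutes))
```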
LIMITATIONS OF AUTOMATED DEEP LEARNING

Automated deep learning is not a panacea. Though outside the scope of this article, there are barriers common to all AI applications in healthcare. These include dataset curation, ethical and medicolegal considerations, data governance and regulatory issues, as well as patient and clinician acceptance of these systems.


The 'black box' phenomenon is well documented as a limitation in the implementation of artificial medical intelligence tools [28–30]. This is further intensified by the inability to select or obtain information about the neural architecture framework chosen for the model. Given that minimal technical expertise is required for the development of these automated systems, it is imperative that robust tools are developed to allow the clinician to understand how the model has reached its decision.

Discriminatory bias is another issue that must be highlighted to clinicians with limited deep learning experience when developing these models. Discriminatory bias describes the situation in which a model is selected to optimally represent the majority population. This may result in inferior performance in under-represented groups. Although public datasets represent a valuable resource, they may be particularly prone to discriminatory bias, depending on how the dataset was collected and who the deep learning model is being developed for. Clinicians must be aware of the perils associated with overfitting and develop models with their target population in mind. External validation, with datasets representing various real-world image acquisition environments and varying patient demographics, remains key.
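A practical safeguard against this kind of bias is to report performance stratified by subgroup rather than as a single pooled figure. Below is a minimal sketch, with fabricated labels, predictions and demographic tags, of the per-subgroup check a clinician could run on an external validation export:

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical external validation export: label, prediction and demographic tag per image.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

# Sensitivity per subgroup: a large gap suggests the model under-serves one population.
for g in np.unique(group):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: n={mask.sum()} sensitivity={sens:.2f}")
```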
There are also limitations specific to automated deep learning platforms. These platforms do not offer flexibility in selecting between the model architectures used to train the model. The evaluation metrics vary between platforms, which can make it difficult to accurately assess and compare the performance of models across platforms. Many platforms do not offer the facility to externally validate the model; this is an essential step in the process, and one which must be incorporated into a system if it is to be considered for implementation. Finally, there are costs associated with these commercial platforms, particularly those that are cloud-based. Although automated deep learning is an important step towards the democratisation of AI in healthcare, it still presents financial burdens which may be challenging for small research groups with minimal funding.

FUTURE DIRECTIONS

Direct patient care

Automated deep learning has the potential to play an important role in patient care, particularly as effectiveness improves and ethical and governance regulations are established. Clinicians are use-case experts and are best suited to train models specified for patient-relevant endpoints; consequently, they are also best suited to apply the relevant labels for model training. By allowing physicians to independently devise and develop deep learning models, patient needs may be uniquely and efficiently addressed. Image recognition models may greatly enhance screening programmes, particularly in under-resourced areas [31]. Structured data approaches may prove useful in the prediction of patient outcomes, whereas natural language processing may alleviate the significant administrative burdens clinicians face at present. Despite these advantages, hospital management and clinicians must be aware that the use of such models for direct patient care would be subject to the same clinical validation and regulatory requirements as bespoke deep learning systems.

Clinical research

Automated deep learning may radically enhance the clinical research landscape. With the capacity to play a number of different roles within the research toolkit, it has the potential to alleviate the strain of laborious administrative tasks while also identifying new patterns within data previously unknown to humans, leading the way towards clinical trial selection, drug discovery and development. Automated deep learning models may also be trained as a proof of concept, to first ascertain whether there is sufficient signal to justify investing in further custom model development via coding.

Improved technology

Although cloud computing has alleviated some of the challenges associated with deep learning, it still depends on high bandwidth, low latency and robust privacy safeguards [32]. Further advances in mobile technology, such as 5G, may address these issues through the use of automated deep learning systems via local edge models (i.e., compact, low-power models which do not require a continuous internet connection to run) [33,34]. Combined with telemedicine and wearable sensors, these models may considerably improve the quality of healthcare in under-resourced communities.
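Several platforms can export compact edge models, for example in TensorFlow Lite format, that run entirely on-device. The sketch below shows generic inference with the TensorFlow Lite interpreter; the model file name, and the assumption that the export is a .tflite file with a single image input, are illustrative rather than platform documentation.

```python
import numpy as np
import tensorflow as tf

# Hypothetical exported edge model file; platforms differ in how this file is produced.
interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A stand-in image shaped to the model's expected input (often 1 x H x W x 3).
image = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()                                    # runs entirely on-device
scores = interpreter.get_tensor(output_details[0]["index"])
print("per-class scores:", scores)
```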
Medical education

It is essential that medical students and clinicians alike are adequately equipped with the skills needed to navigate the field of artificial medical intelligence [35].


Automated deep learning may be able to assist with this in a number of different ways (see Fig. 2). First, it provides a practical opportunity to grasp the process of model development, evaluation and implementation. Second, it draws attention to the potential hazards and limitations associated with deep learning models and, in turn, imparts a deeper understanding of the ethical considerations surrounding these systems. As discussed, automated deep learning allows the medical community to develop algorithms that are uniquely appropriate for their specific needs. This could be capitalised upon to develop automated deep learning models that enhance medical education, with applications in surgical training, disease recognition and the monitoring of progress and performance.

FIGURE 2. Overview of the potential applications of automated deep learning in medical education.

CONCLUSION

Automated deep learning is establishing itself as a potential solution to many of the challenges facing healthcare systems today. We believe it will play a central role in the future democratisation and industrialisation of AI in healthcare, ultimately transforming patient care, medical education and clinical research.

Acknowledgements

E.K. is supported by a Springboard Grant from the Moorfields Eye Charity.

P.A.K. is supported by a Moorfields Eye Charity Career Development Award (R190028A) and a UK Research & Innovation Future Leaders Fellowship (MR/T019050/1).

Financial support and sponsorship

P.A.K. has acted as a consultant for DeepMind, Roche, Novartis, Apellis, and BitFount and is an equity owner in Big Picture Medical. He has received speaker fees from Heidelberg Engineering, Topcon, Allergan, and Bayer. E.K. has acted as a consultant for Google Health. He has received consulting fees from Genentech.

Conflicts of interest

There are no conflicts of interest.


REFERENCES AND RECOMMENDED READING
Papers of particular interest, published within the annual period of review, have been highlighted as:
& of special interest
&& of outstanding interest

1. GBD 2019 Viewpoint Collaborators. Five insights from the Global Burden of Disease Study 2019. Lancet 2020; 396:1135–1159.
2. Seniori Costantini A, Gallo F, Pega F, et al. Population health and status of epidemiology in Western European, Balkan and Baltic countries. Int J Epidemiol 2015; 44:300–323.
3. Maresova P, Javanmardi E, Barakovic S, et al. Consequences of chronic diseases and other limitations associated with old age - a scoping review. BMC Public Health 2019; 19:1431. https://doi.org/10.1186/s12889-019-7762-5.
4. The Lancet. Physician burnout: a global crisis. Lancet 2019; 394:93. doi: 10.1016/S0140-6736(19)31573-9. [Epub ahead of print]
5. Artificial Intelligence in Healthcare Market with Covid-19 Impact Analysis by Offering (Hardware, Software, Services), Technology (Machine Learning, NLP, Context-Aware Computing, Computer Vision), End-Use Application, End User and Region - Global Forecast to 2026. Markets and Markets; 2020 Jun. Report No.: 5116503.
6. && Faes L, Wagner SK, Fu DJ, et al. Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study. Lancet Digit Health 2019; 1:e232–e242.
This is a seminal publication in the field of automated deep learning for medical image classification. The authors demonstrate that two clinicians with no coding expertise can develop deep learning models using automated deep learning via Google Cloud AutoML Vision that have comparable performance to bespoke deep learning models.
7. & Borkowski AA, Viswanadhan NA, Thomas LB, et al. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract 2020; 37:398–404.
This paper discusses the development of an automated deep learning system to identify COVID-19 from a public dataset of chest x-ray images. It is one of the few publications in the field of automated deep learning for medical image classification using Microsoft Custom Vision.
8. Koga S, Ghayal NB, Dickson DW. Deep learning-based image classification in differentiating tufted astrocytes, astrocytic plaques, and neuritic plaques. J Neuropathol Exp Neurol 2021; 80:306–312.
9. Metz C. This AI can build AI itself. The New York Times 2017; B:1. https://www.nytimes.com/2017/11/05/technology/machine-learning-artificial-intelligence-ai.html
10. && Korot E, Guan Z, Ferraz D, et al. Code-free deep learning for multi-modality medical image classification. Nat Mach Intell 2021; 3:288–298.
This is the only comprehensive review of six commercially available automated deep learning platforms that we are aware of. This article details the various features of each platform and describes the performance accuracy in comparison to previously published bespoke models.
11. & Kim IK, Lee K, Park JH, et al. Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform. Br J Ophthalmol 2021; 105:856–861.
This is the first application of automated deep learning to indocyanine green angiography images. It highlights the effectiveness of automated deep learning and discusses some limitations.
12. Waring J, Lindvall C, Umeton R. Automated machine learning: review of the state-of-the-art and opportunities for healthcare. Artif Intell Med 2020; 104:101822. doi: 10.1016/j.artmed.2020.101822. [Epub ahead of print]
13. Yao QM, Wang MS, Chen YQ, et al. Taking human out of learning applications: a survey on automated machine learning. arXiv 2019 Dec 16; arXiv:1810.13306v4. Available at: https://arxiv.org/abs/1810.13306
14. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521:436–444.
15. Adebayo J, Gilmer J, Muelly M, et al. Sanity checks for saliency maps. arXiv 2018. Available at: http://arxiv.org/abs/1810.03292. [Preprint]
16. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018; 172:1122–1131.e9.
17. Ghosh T, Tanwar S, Chumber S, et al. Classification of chest radiographs using general purpose cloud-based automated machine learning: pilot study. Egypt J Radiol Nucl Med 2021; 52:120. https://doi.org/10.1186/s43055-021-00499-w.
18. Zeng Y, Zhang J. A machine learning model for detecting invasive ductal carcinoma with Google Cloud AutoML Vision. Comput Biol Med 2020; 122:103861. doi: 10.1016/j.compbiomed.2020.103861. [Epub ahead of print]
19. Livingstone D, Chau J. Otoscopic diagnosis using computer vision: an automated machine learning approach. Laryngoscope 2020; 130:1408–1413.
20. Wang S, Xu J, Tahmasebi A, et al. Incorporation of a machine learning algorithm with object detection within the thyroid imaging reporting and data system improves the diagnosis of genetic risk. Front Oncol 2020; 10:591846. doi: 10.3389/fonc.2020.591846.
21. Borkowski AA, Wilson CP, Borkowski SA, et al. Google Auto ML versus Apple Create ML for Histopathologic Cancer Diagnosis; Which Algorithms Are Better? [Internet]. arXiv [q-bio.QM]. 2019. Available from: http://arxiv.org/abs/1903.08057
22. Korot E, Wagner S, Faes L, et al. AI building AI: deep learning detection of referable diabetic retinopathy sans-coding. Investig Ophthalmol Vis Sci 2020; 61:2025–12025.
23. Beqiri S, Abbas A, Korot E, et al. Investigating the impact of saliency maps on clinician's confidence in model predictions. Association for Research in Vision and Ophthalmology; Virtual Conference 2021.
24. Wagner S, Korot E, Khalid H, et al. Automated machine learning model for fundus photo gradeability and laterality: a public ML Research Toolkit Sans-coding. Investig Ophthalmol Vis Sci 2020; 61:2029–12029.
25. Larkin H. Iris Registry update [Internet]. EuroTimes. 2021 [cited 2021 May 3]. Available from: https://www.eurotimes.org/iris-registry-update/
26. Antaki F, Kahwati G, Sebag J, et al. Predictive modeling of proliferative vitreoretinopathy using automated machine learning by ophthalmologists without coding experience. Sci Rep 2020; 10:19528. https://doi.org/10.1038/s41598-020-76665-3.
27. Abbas A, Beqiri S, Wagner S, et al. Using the What-if Tool to perform nearest counterfactual analysis on an AutoML model that predicts visual acuity outcomes in patients receiving treatment for wet age-related macular degeneration. In.
28. The Lancet Respiratory Medicine. Opening the black box of machine learning. Lancet Respir Med 2018; 6:801. doi: 10.1016/S2213-2600(18)30425-9. [Epub ahead of print]
29. Zihni E, Madai VI, Livne M, et al. Opening the black box of artificial intelligence for clinical decision support: a study predicting stroke outcome. PLoS One 2020; 15:e0231166.
30. London AJ. Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Center Rep 2019; 49:15–21.
31. Bellemo V, Lim ZW, Lim G, et al. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: a clinical validation study. Lancet Digit Health 2019; 1:e35–e44.
32. Merenda M, Porcaro C, Iero D. Edge machine learning for AI-enabled IoT devices: a review. Sensors (Basel) 2020; 20:2533. doi: 10.3390/s20092533.
33. Keane PA, Topol EJ. Medicine and meteorology: cloud, connectivity, and care. Lancet 2020; 395:1334. doi: 10.1016/S0140-6736(20)30813-8.
34. Greco L, Percannella G, Ritrovato P, et al. Trends in IoT based solutions for healthcare: moving AI to the edge. Pattern Recognit Lett 2020; 135:346–353.
35. Keane PA, Topol EJ. AI-facilitated healthcare requires education of clinicians. Lancet 2021; 397:1254. doi: 10.1016/S0140-6736(21)00722-4.
