301
Zheng C, Xie X, Zhou K, Chen B, Chen J, Ye H, Li W, Qiao T, Gao S, Yang J, Liu J. Assessment of Generative Adversarial Networks Model for Synthetic Optical Coherence Tomography Images of Retinal Disorders. Transl Vis Sci Technol 2020; 9:29. PMID: 32832202; PMCID: PMC7410116; DOI: 10.1167/tvst.9.2.29.
Abstract
PURPOSE To assess whether a generative adversarial network (GAN) can synthesize realistic optical coherence tomography (OCT) images that serve satisfactorily both as educational images for retinal specialists and as training datasets for deep learning (DL) classification of retinal disorders. METHODS A GAN architecture was trained to synthesize high-resolution OCT images using a publicly available OCT dataset comprising urgent referrals (37,206 OCT images from eyes with choroidal neovascularization and 11,349 from eyes with diabetic macular edema) and nonurgent referrals (8617 OCT images from eyes with drusen and 51,140 from normal eyes). Four hundred real and synthetic OCT images were evaluated by two retinal specialists (each with over 10 years of clinical retinal experience) to assess image quality. We further trained two DL models, on real and synthetic datasets respectively, and compared their performance in diagnosing urgent versus nonurgent referrals on a local validation dataset (1000 images from the public dataset) and a clinical validation dataset (278 images from Shanghai Shibei Hospital). RESULTS Real and synthetic OCT images were of similar quality as assessed by the two retinal specialists, whose accuracy in discriminating real from synthetic OCT images was 59.50% (retinal specialist 1) and 53.67% (retinal specialist 2). On the local dataset, the DL models trained on real (DL_Model_R) and synthetic OCT images (DL_Model_S) achieved areas under the curve (AUCs) of 0.99 and 0.98, respectively. On the clinical dataset, the AUC was 0.94 for DL_Model_R and 0.90 for DL_Model_S. CONCLUSIONS GAN-synthesized OCT images can be used by clinicians for educational purposes and for developing DL algorithms. TRANSLATIONAL RELEVANCE Medical image synthesis based on GANs shows promise for helping both humans and machines fulfill clinical tasks.
Affiliation(s)
- Ce Zheng: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaolin Xie: Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Kang Zhou: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China; School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Bang Chen: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Jili Chen: Department of Ophthalmology, Shibei Hospital, Shanghai, China
- Haiyun Ye: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wen Li: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Tong Qiao: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shenghua Gao: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jianlong Yang: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Jiang Liu: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
302
Balachandar N, Chang K, Kalpathy-Cramer J, Rubin DL. Accounting for data variability in multi-institutional distributed deep learning for medical imaging. J Am Med Inform Assoc 2020; 27:700-708. PMID: 32196092; PMCID: PMC7309257; DOI: 10.1093/jamia/ocaa017.
Abstract
OBJECTIVES Sharing patient data across institutions to train generalizable deep learning models is challenging due to regulatory and technical hurdles. Distributed learning, in which model weights are shared instead of patient data, is an attractive alternative. Cyclical weight transfer (CWT) has recently been demonstrated to be an effective distributed learning method for medical imaging when data are homogeneous across institutions. In this study, we optimize CWT to overcome the performance losses caused by variability in training sample sizes and label distributions across institutions. MATERIALS AND METHODS The optimizations were proportional local training iterations, cyclical learning rate, locally weighted minibatch sampling, and cyclically weighted loss. We evaluated them on simulated distributed diabetic retinopathy detection and chest radiograph classification. RESULTS Proportional local training iterations mitigated performance losses from sample size variability, achieving 98.6% of the accuracy attained by central hosting on the diabetic retinopathy dataset split with the highest sample size variance across institutions. Locally weighted minibatch sampling and cyclically weighted loss both mitigated performance losses from label distribution variability, achieving 98.6% and 99.1%, respectively, of the accuracy attained by central hosting on the diabetic retinopathy dataset split with the highest label distribution variability across institutions. DISCUSSION Our optimizations improve CWT's ability to handle data variability across institutions: with them, CWT achieved performance significantly closer to that of centrally hosted training than without. CONCLUSION Our work is the first to identify and address the challenges of sample size and label distribution variability in simulated distributed deep learning for medical imaging. Future work is needed to address other sources of real-world data variability.
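The cyclical weight transfer scheme optimized above can be illustrated with a toy sketch: one shared model is passed from institution to institution, and each site runs a number of local gradient steps proportional to its sample size. This is a made-up minimal example (a single scalar parameter, squared-error loss, invented site sizes), not the authors' implementation.

```python
# Toy cyclical weight transfer (CWT) with proportional local
# training iterations: the shared weight w visits each
# "institution" in turn; larger sites get more local steps.

def local_steps(n_samples, steps_per_sample=1):
    """Proportional local training iterations: scale step count by sample size."""
    return n_samples * steps_per_sample

def train_cwt(datasets, lr=0.1, cycles=20):
    w = 0.0  # shared model parameter, transferred between sites
    for _ in range(cycles):
        for data in datasets:  # cyclical transfer across institutions
            for _ in range(local_steps(len(data))):
                # gradient of mean squared error 0.5*(w - x)^2 over local data
                grad = sum(w - x for x in data) / len(data)
                w -= lr * grad
    return w

# Three hypothetical institutions with unequal sample sizes
sites = [[1.0] * 2, [2.0] * 5, [3.0] * 10]
w = train_cwt(sites)
```

Without the proportional step count, each site would pull the weight equally regardless of how much data it holds; with it, the final weight reflects the pooled data more closely.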
Affiliation(s)
- Niranjan Balachandar: Laboratory of Quantitative Imaging and Artificial Intelligence, Department of Radiology and Biomedical Data Science, Stanford University, Stanford, CA, USA
- Ken Chang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; MGH and BWH Center for Clinical Data Science, Massachusetts General Hospital, Boston, MA, USA
- Daniel L Rubin: Laboratory of Quantitative Imaging and Artificial Intelligence, Department of Radiology and Biomedical Data Science, Stanford University, Stanford, CA, USA
303
Lim G, Bellemo V, Xie Y, Lee XQ, Yip MYT, Ting DSW. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review. Eye Vis (Lond) 2020; 7:21. PMID: 32313813; PMCID: PMC7155252; DOI: 10.1186/s40662-020-00182-7.
Abstract
BACKGROUND Effective screening enables early detection and successful treatment of diabetic retinopathy, and fundus photography is currently the dominant medium for retinal imaging owing to its convenience and accessibility. Manual screening of fundus photographs, however, carries considerable costs for patients, clinicians, and national health systems, which has limited its application, particularly in less-developed countries. The advent of artificial intelligence, and in particular deep learning techniques, has raised the possibility of widespread automated screening. MAIN TEXT In this review, we first briefly survey major published advances in retinal analysis using artificial intelligence. We separately describe standard multiple-field fundus photography and the newer modalities of ultra-wide-field photography and smartphone-based photography. Finally, we consider several machine learning concepts that have been particularly relevant to the domain and illustrate their usage with extant works. CONCLUSIONS In ophthalmology, deep learning tools for diabetic retinopathy have demonstrated clinically acceptable diagnostic performance on colour retinal fundus images. Artificial intelligence models are among the most promising solutions for tackling the burden of diabetic retinopathy management in a comprehensive manner. However, future research is crucial to assess potential clinical deployment, evaluate the cost-effectiveness of different deep learning systems in clinical practice, and improve clinical acceptance.
Affiliation(s)
- Gilbert Lim: School of Computing, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Valentina Bellemo: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore 168751
- Yuchen Xie: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Xin Q. Lee: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Michelle Y. T. Yip: Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore 168751
- Daniel S. W. Ting: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, National University of Singapore, 11 Third Hospital Road Avenue, Singapore 168751; Vitreo-Retinal Service, Singapore National Eye Center, 11 Third Hospital Road Avenue, Singapore 168751; Artificial Intelligence in Ophthalmology, Singapore Eye Research Institute, 11 Third Hospital Road Avenue, Singapore 168751
304
Horton MB, Brady CJ, Cavallerano J, Abramoff M, Barker G, Chiang MF, Crockett CH, Garg S, Karth P, Liu Y, Newman CD, Rathi S, Sheth V, Silva P, Stebbins K, Zimmer-Galler I. Practice Guidelines for Ocular Telehealth-Diabetic Retinopathy, Third Edition. Telemed J E Health 2020; 26:495-543. PMID: 32209018; PMCID: PMC7187969; DOI: 10.1089/tmj.2020.0006.
Abstract
Contributors The following document and appendices represent the third edition of the Practice Guidelines for Ocular Telehealth-Diabetic Retinopathy. These guidelines were developed by the Diabetic Retinopathy Telehealth Practice Guidelines Working Group, which comprised a large number of subject matter experts in clinical applications of telehealth in ophthalmology. The editorial committee consisted of Mark B. Horton, OD, MD, who served as working group chair, and Christopher J. Brady, MD, MHS, and Jerry Cavallerano, OD, PhD, who served as cochairs. The writing committees were separated into seven categories:
1. Clinical/operational: Jerry Cavallerano, OD, PhD (Chair), Gail Barker, PhD, MBA, Christopher J. Brady, MD, MHS, Yao Liu, MD, MS, Siddarth Rathi, MD, MBA, Veeral Sheth, MD, MBA, Paolo Silva, MD, and Ingrid Zimmer-Galler, MD.
2. Equipment: Veeral Sheth, MD (Chair), Mark B. Horton, OD, MD, Siddarth Rathi, MD, MBA, Paolo Silva, MD, and Kristen Stebbins, MSPH.
3. Quality assurance: Mark B. Horton, OD, MD (Chair), Seema Garg, MD, PhD, Yao Liu, MD, MS, and Ingrid Zimmer-Galler, MD.
4. Glaucoma: Yao Liu, MD, MS (Chair) and Siddarth Rathi, MD, MBA.
5. Retinopathy of prematurity: Christopher J. Brady, MD, MHS (Chair) and Ingrid Zimmer-Galler, MD.
6. Age-related macular degeneration: Christopher J. Brady, MD, MHS (Chair) and Ingrid Zimmer-Galler, MD.
7. Autonomous and computer-assisted detection, classification and diagnosis of diabetic retinopathy: Michael Abramoff, MD, PhD (Chair), Michael F. Chiang, MD, and Paolo Silva, MD.
Affiliation(s)
- Mark B. Horton: Indian Health Service-Joslin Vision Network (IHS-JVN) Teleophthalmology Program, Phoenix Indian Medical Center, Phoenix, Arizona
- Christopher J. Brady: Division of Ophthalmology, Department of Surgery, Larner College of Medicine, University of Vermont, Burlington, Vermont
- Jerry Cavallerano: Beetham Eye Institute, Joslin Diabetes Center, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Michael Abramoff: Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, Iowa; Department of Biomedical Engineering, The University of Iowa, Iowa City, Iowa; Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, Iowa; Department of Ophthalmology, Stephen A. Wynn Institute for Vision Research, The University of Iowa, Iowa City, Iowa; Iowa City VA Health Care System, Iowa City, Iowa; IDx, Coralville, Iowa
- Gail Barker: Arizona Telemedicine Program, The University of Arizona, Phoenix, Arizona
- Michael F. Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Seema Garg: Department of Ophthalmology, University of North Carolina, Chapel Hill, North Carolina
- Yao Liu: Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, Wisconsin
- Siddarth Rathi: Department of Ophthalmology, NYU Langone Health, New York, New York
- Veeral Sheth: University Retina and Macula Associates, University of Illinois at Chicago, Chicago, Illinois
- Paolo Silva: Beetham Eye Institute, Joslin Diabetes Center, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Kristen Stebbins: Vision Care Department, Hillrom, Skaneateles Falls, New York
305
Weinert MC, Wallace DK, Freedman SF, Riggins JW, Gallaher KJ, Prakalapakorn SG. ROPtool analysis of plus and pre-plus disease in narrow-field images: a multi-image quadrant-level approach. J AAPOS 2020; 24:89.e1-89.e7. PMID: 32224288; PMCID: PMC8036168; DOI: 10.1016/j.jaapos.2020.01.010.
Abstract
BACKGROUND The presence of plus disease is important in determining when to treat retinopathy of prematurity (ROP), but the diagnosis of plus disease is subjective. Semiautomated computer programs (eg, ROPtool) can objectively measure retinal vascular characteristics in retinal images but are limited by image quality. The purpose of this study was to evaluate whether ROPtool can accurately identify pre-plus and plus disease in narrow-field images of varying quality using a new methodology that combines quadrant-level data from multiple images of a single retina. METHODS This was a cross-sectional study of previously collected narrow-field retinal images of infants screened for ROP. Using one imaging session per infant, we evaluated ROPtool's ability to analyze images with our new methodology and the accuracy of ROPtool indices (tortuosity index [TI], maximum tortuosity [Tmax], dilation index [DI], maximum dilation [Dmax], sum of adjusted indices [SAI], and tortuosity-weighted plus [TWP]) in identifying pre-plus and plus disease relative to clinical examination findings. RESULTS Of 198 eyes (from 99 infants) imaged, 769/792 quadrants (98%) were analyzable, and 98% of eyes had 3-4 analyzable quadrants. For plus disease, areas under the receiver operating characteristic curve (AUCs) were: TWP (0.98) > TI (0.97) = Tmax (0.97) > SAI (0.96) > DI (0.88) > Dmax (0.84). For pre-plus or plus disease, AUCs were: TWP (0.95) > TI (0.94) = Tmax (0.94) = SAI (0.94) > DI (0.86) > Dmax (0.83). CONCLUSIONS Using a novel methodology combining quadrant-level data, ROPtool can analyze narrow-field images of varying quality to identify pre-plus and plus disease with high accuracy.
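The AUC figures reported above can be computed directly from the index scores without plotting any curve, since the AUC equals the probability that a randomly chosen diseased eye receives a higher score than a randomly chosen healthy one (the Mann-Whitney U statistic). A minimal sketch with hypothetical tortuosity-like scores (the numbers are invented, not from the study):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical tortuosity indices for plus-disease vs normal eyes
plus_eyes = [1.9, 2.3, 1.7, 2.8]
normal_eyes = [1.1, 1.3, 1.2, 1.8, 1.0]
score = auc(plus_eyes, normal_eyes)  # 0.95: 19 of 20 pairs ordered correctly
```

A perfect index scores every diseased eye above every healthy eye (AUC 1.0); a useless one scores them interchangeably (AUC 0.5).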
Affiliation(s)
- Marguerite C Weinert: Duke University Department of Ophthalmology, Durham, North Carolina; Massachusetts Eye and Ear Infirmary, Boston, Massachusetts
- David K Wallace: Indiana University Department of Ophthalmology, Indianapolis, Indiana
- J Wayne Riggins: Department of Neonatology, Cape Fear Valley Medical Center, Fayetteville, North Carolina; Cape Fear Eye Associates, Fayetteville, North Carolina
- Keith J Gallaher: Department of Neonatology, Cape Fear Valley Medical Center, Fayetteville, North Carolina
306
Li MD, Chang K, Bearce B, Chang CY, Huang AJ, Campbell JP, Brown JM, Singh P, Hoebel KV, Erdoğmuş D, Ioannidis S, Palmer WE, Chiang MF, Kalpathy-Cramer J. Siamese neural networks for continuous disease severity evaluation and change detection in medical imaging. NPJ Digit Med 2020; 3:48. PMID: 32258430; PMCID: PMC7099081; DOI: 10.1038/s41746-020-0255-1.
Abstract
Using medical images to evaluate disease severity and change over time is a routine and important task in clinical decision making. Grading systems are often used, but are unreliable as domain experts disagree on disease severity category thresholds. These discrete categories also do not reflect the underlying continuous spectrum of disease severity. To address these issues, we developed a convolutional Siamese neural network approach to evaluate disease severity at single time points and change between longitudinal patient visits on a continuous spectrum. We demonstrate this in two medical imaging domains: retinopathy of prematurity (ROP) in retinal photographs and osteoarthritis in knee radiographs. Our patient cohorts consist of 4861 images from 870 patients in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) cohort study and 10,012 images from 3021 patients in the Multicenter Osteoarthritis Study (MOST), both of which feature longitudinal imaging data. Multiple expert clinician raters ranked 100 retinal images and 100 knee radiographs from excluded test sets for severity of ROP and osteoarthritis, respectively. The Siamese neural network output for each image in comparison to a pool of normal reference images correlates with disease severity rank (ρ = 0.87 for ROP and ρ = 0.89 for osteoarthritis), both within and between the clinical grading categories. Thus, this output can represent the continuous spectrum of disease severity at any single time point. The difference in these outputs can be used to show change over time. Alternatively, paired images from the same patient at two time points can be directly compared using the Siamese neural network, resulting in an additional continuous measure of change between images. Importantly, our approach does not require manual localization of the pathology of interest and requires only a binary label for training (same versus different). The location of disease and site of change detected by the algorithm can be visualized using an occlusion sensitivity map-based approach. For a longitudinal binary change detection task, our Siamese neural networks achieve test set receiver operating characteristic areas under the curve (AUCs) of up to 0.90 in evaluating ROP or knee osteoarthritis change, depending on the change detection strategy. The overall performance on this binary task is similar to that of a conventional convolutional deep neural network trained for multi-class classification. Our results demonstrate that convolutional Siamese neural networks can be a powerful tool for evaluating the continuous spectrum of disease severity and change in medical imaging.
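The severity scoring idea described above, comparing each image against a pool of normal reference images, can be sketched abstractly. The embeddings and distance here are stand-ins: a real Siamese network would produce learned CNN feature vectors, whereas this toy uses hand-picked 2-D points.

```python
import statistics

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def severity_score(embedding, normal_pool):
    """Continuous severity score: median distance from an image's
    embedding to a pool of normal reference embeddings, mirroring
    the Siamese-output-versus-reference-pool idea described above."""
    return statistics.median(euclidean(embedding, ref) for ref in normal_pool)

# Hypothetical 2-D embeddings (a real system would use CNN features)
normals = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0)]
mild, severe = (0.5, 0.5), (3.0, 2.5)
assert severity_score(mild, normals) < severity_score(severe, normals)
```

The same distance applied to two embeddings of the same eye at different visits gives the paper's second continuous measure, change between time points.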
Affiliation(s)
- Matthew D. Li: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Ken Chang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Ben Bearce: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Connie Y. Chang: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Ambrose J. Huang: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- J. Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- James M. Brown: School of Computer Science, University of Lincoln, Lincoln, UK
- Praveer Singh: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Katharina V. Hoebel: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Deniz Erdoğmuş: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Stratis Ioannidis: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- William E. Palmer: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Michael F. Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; MGH and BWH Center for Clinical Data Science, Massachusetts General Hospital, Boston, MA, USA
307
Brady CJ, D'Amico S, Campbell JP. Telemedicine for Retinopathy of Prematurity. Telemed J E Health 2020; 26:556-564. PMID: 32209016; DOI: 10.1089/tmj.2020.0010.
Abstract
Background: Retinopathy of prematurity (ROP) is a disease of the retinal vasculature that remains a leading cause of childhood blindness worldwide despite improvements in the systemic care of premature newborns. Screening for ROP is effective and cost-effective, but in many areas access to skilled examiners to conduct dilated examinations is poor. Remote screening with retinal photography is an alternative strategy that may allow for improved ROP care. Methods: The current literature was reviewed to find clinical trials and expert consensus documents on the state of the art of telemedicine for ROP. Results: Several studies have confirmed the utility of telemedicine for ROP, and several clinical studies have reported favorable long-term results. Many investigators have reinforced the need for detailed protocols on image acquisition and interpretation. Conclusions: Telemedicine for ROP appears to be a viable alternative to live ophthalmoscopic examinations in many circumstances. The standardization and documentation afforded by telemedicine may provide additional benefits to providers and their patients. With continued improvements in image quality and affordability of imaging systems, as well as improved automated image interpretation tools anticipated in the near future, telemedicine for ROP is expected to play an expanding role for a uniquely vulnerable patient population.
Affiliation(s)
- Christopher J Brady: Division of Ophthalmology, Department of Surgery, Larner College of Medicine, University of Vermont, Burlington, Vermont
- Samantha D'Amico: Division of Ophthalmology, Department of Surgery, University of Vermont Medical Center, Burlington, Vermont
- J Peter Campbell: Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
308
Abstract
Artificial intelligence is advancing rapidly and making its way into all areas of our lives. This review discusses developments and potential practices regarding the use of artificial intelligence in the field of ophthalmology, and the related topic of medical ethics. Various artificial intelligence applications related to the diagnosis of eye diseases were researched in books, journals, search engines, print and social media, and resources were cross-checked to verify the information. Artificial intelligence algorithms, some of which have been approved by the US Food and Drug Administration, have been adopted in the field of ophthalmology, especially in diagnostic studies. Studies have demonstrated that artificial intelligence algorithms can be used in ophthalmology, especially for diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity, and some of these algorithms have reached the approval stage. The current state of artificial intelligence research shows that this technology has advanced considerably and shows promise for future work. It is believed that artificial intelligence applications will be effective in identifying patients with preventable vision loss and directing them to physicians, especially in developing countries where there are fewer trained professionals and physicians are difficult to reach. When we consider the possibility that some future artificial intelligence systems may be candidates for moral/ethical status, certain ethical issues arise. Questions about moral/ethical status are important in some areas of applied ethics. Although it is accepted that current intelligence systems do not have moral/ethical status, it has yet to be determined what the exact characteristics that confer moral/ethical status are or will be.
Affiliation(s)
- Kadircan Keskinbora: Bahçeşehir University Faculty of Medicine, Department of Ophthalmology, Division of Medical Ethics and History of Medicine, İstanbul, Turkey
- Fatih Güven: Health Sciences University Bakırköy Training and Research Hospital, Clinic of Ophthalmology, İstanbul, Turkey
309
Choi RY, Coyner AS, Kalpathy-Cramer J, Chiang MF, Campbell JP. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl Vis Sci Technol 2020; 9:14. PMID: 32704420; PMCID: PMC7347027; DOI: 10.1167/tvst.9.2.14.
Abstract
Purpose To present an overview of current machine learning methods and their use in medical research, focusing on select machine learning techniques, best practices, and deep learning. Methods A systematic literature search in PubMed was performed for articles pertinent to artificial intelligence methods used in medicine, with an emphasis on ophthalmology. Results A review of machine learning and deep learning methodology for an audience without an extensive technical computer programming background. Conclusions Artificial intelligence has a promising future in medicine; however, many challenges remain. Translational Relevance The aim of this review article is to provide nontechnical readers with a layman's explanation of the machine learning methods being used in medicine today, and thereby a better understanding of the potential and challenges of artificial intelligence within the field of medicine.
Affiliation(s)
- Rene Y Choi: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University (OHSU), Portland, Oregon, United States
- Aaron S Coyner: Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, United States
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, United States
- Michael F Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University (OHSU), Portland, Oregon, United States; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, United States
- J Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University (OHSU), Portland, Oregon, United States
310
Yildiz VM, Tian P, Yildiz I, Brown JM, Kalpathy-Cramer J, Dy J, Ioannidis S, Erdogmus D, Ostmo S, Kim SJ, Chan RVP, Campbell JP, Chiang MF. Plus Disease in Retinopathy of Prematurity: Convolutional Neural Network Performance Using a Combined Neural Network and Feature Extraction Approach. Transl Vis Sci Technol 2020; 9:10. PMID: 32704416; PMCID: PMC7346878; DOI: 10.1167/tvst.9.2.10.
Abstract
Purpose Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels, is the most important feature in determining treatment-requiring ROP. We aimed to create a complete, publicly available, feature-extraction-based pipeline, I-ROP ASSIST, that achieves convolutional neural network (CNN)-like performance when diagnosing plus disease from retinal images. Methods We developed two datasets containing 100 and 5512 posterior retinal images, respectively. After segmenting retinal vessels, we detected the vessel centerlines. We then extracted features relevant to ROP, including tortuosity and dilation measures, and used these features in classifiers, including logistic regression, support vector machines, and neural networks, to assign a severity score to the input. We tested our system with fivefold cross-validation and calculated the area under the curve (AUC) metric for each classifier and dataset. Results For predicting plus versus not-plus categories, we achieved 99% and 94% AUC on the first and second datasets, respectively. For predicting pre-plus or worse versus normal categories, we achieved 99% and 88% AUC on the first and second datasets, respectively. The CNN method achieved 98% and 94% AUC for the two classification tasks on the second dataset. Conclusions Our system, combining automatic retinal vessel segmentation, tracing, feature extraction, and classification, is able to diagnose plus disease in ROP with CNN-like performance. Translational Relevance The high performance of I-ROP ASSIST suggests potential applications in automated and objective diagnosis of plus disease.
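Among the tortuosity measures such a feature-extraction pipeline can compute from a traced centerline, the arc-to-chord ratio is a standard one: the path length along the vessel centerline divided by the straight-line distance between its endpoints. The sketch below is illustrative and not necessarily one of the exact I-ROP ASSIST features.

```python
import math

def arc_chord_tortuosity(centerline):
    """Tortuosity of a vessel centerline given as (x, y) points:
    arc length / chord length. A straight segment scores exactly
    1.0; more tortuous vessels score higher."""
    arc = sum(math.dist(p, q) for p, q in zip(centerline, centerline[1:]))
    chord = math.dist(centerline[0], centerline[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
wiggly = [(0, 0), (1, 1), (2, -1), (3, 0)]
assert arc_chord_tortuosity(straight) == 1.0
assert arc_chord_tortuosity(wiggly) > 1.5
```

Per-vessel values like this can then be aggregated (for example, the maximum or sum over vessels) into image-level features for the downstream classifiers.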
Affiliation(s)
- Veysi M Yildiz: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Peng Tian: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Ilkay Yildiz: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- James M Brown: Department of Computer Science, University of Lincoln, Lincoln, UK
- Jayashree Kalpathy-Cramer: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Jennifer Dy: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Stratis Ioannidis: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Deniz Erdogmus: Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Susan Ostmo: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Sang Jin Kim: Sungkyunkwan University School of Medicine, Seoul, South Korea
- R V Paul Chan: Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- J Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Michael F Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
311
Scruggs BA, Chan RVP, Kalpathy-Cramer J, Chiang MF, Campbell JP. Artificial Intelligence in Retinopathy of Prematurity Diagnosis. Transl Vis Sci Technol 2020; 9:5. [PMID: 32704411] [PMCID: PMC7343673] [DOI: 10.1167/tvst.9.2.5] [Citation(s) in RCA: 40]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The diagnosis of ROP is subclassified by zone, stage, and plus disease, with each area demonstrating significant intra- and interexpert subjectivity and disagreement. In addition to improved efficiencies for ROP screening, artificial intelligence may lead to automated, quantifiable, and objective diagnosis in ROP. This review focuses on the development of artificial intelligence for automated diagnosis of plus disease in ROP and highlights the clinical and technical challenges of both the development and implementation of artificial intelligence in the real world.
Affiliation(s)
- Brittni A. Scruggs: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- R. V. Paul Chan: Department of Ophthalmology, University of Illinois, Chicago, IL, USA
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Boston, MA, USA
- Michael F. Chiang: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
- J. Peter Campbell: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
312
Aggressive Posterior Retinopathy of Prematurity: Clinical and Quantitative Imaging Features in a Large North American Cohort. Ophthalmology 2020; 127:1105-1112. [PMID: 32197913] [DOI: 10.1016/j.ophtha.2020.01.052] [Citation(s) in RCA: 25]
Abstract
PURPOSE Aggressive posterior retinopathy of prematurity (AP-ROP) is a vision-threatening disease with a significant rate of progression to retinal detachment. The purpose of this study was to characterize AP-ROP quantitatively by demographics, rate of disease progression, and a deep learning-based vascular severity score. DESIGN Retrospective analysis. PARTICIPANTS The Imaging and Informatics in ROP cohort from 8 North American centers, consisting of 947 patients and 5945 clinical eye examinations with fundus images, was used. Pretreatment eyes were categorized by disease severity: none, mild, type 2 or pre-plus, treatment-requiring (TR) without AP-ROP, and TR with AP-ROP. Analyses compared TR with AP-ROP and TR without AP-ROP to investigate differences between AP-ROP and other TR disease. METHODS A reference standard diagnosis was generated for each eye examination using previously published methods combining 3 independent image-based gradings and 1 ophthalmoscopic grading. All fundus images were analyzed using a previously published deep learning system and were assigned a score from 1 through 9. MAIN OUTCOME MEASURES Birth weight, gestational age, postmenstrual age, and vascular severity score. RESULTS Infants who demonstrated AP-ROP were more premature by birth weight (617 g vs. 679 g; P = 0.01) and gestational age (24.3 weeks vs. 25.0 weeks; P < 0.01) and reached peak severity at an earlier postmenstrual age (34.7 weeks vs. 36.9 weeks; P < 0.001) compared with infants with TR without AP-ROP. The mean vascular severity score was higher in TR with AP-ROP infants than in TR without AP-ROP infants (8.79 vs. 7.19; P < 0.001). Analyzing the severity score over time, the rate of progression was fastest in infants with AP-ROP (P < 0.002 at 30-32 weeks). CONCLUSIONS Premature infants in North America with AP-ROP are born younger and demonstrate disease earlier than infants with less severe ROP. Disease severity is quantifiable with a deep learning-based score, which correlates with clinically identified categories of disease, including AP-ROP. The rate of progression to peak disease is greatest in eyes that demonstrate AP-ROP compared with other treatment-requiring eyes. Analysis of quantitative characteristics of AP-ROP may help improve diagnosis and treatment of an aggressive, vision-threatening form of ROP.
313
Scruggs BA, Chan RVP, Kalpathy-Cramer J, Chiang MF, Campbell JP. Artificial Intelligence in Retinopathy of Prematurity Diagnosis. Transl Vis Sci Technol 2020. [DOI: 10.1167/tvst.210.2.2010] [Citation(s) in RCA: 1]
314
Coyner AS, Campbell JP, Chiang MF. Demystifying the Jargon: The Bridge between Ophthalmology and Artificial Intelligence. Ophthalmol Retina 2019; 3:291-293. [PMID: 31014678] [DOI: 10.1016/j.oret.2018.12.008] [Citation(s) in RCA: 3]
Affiliation(s)
- Aaron S Coyner: Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- J Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Michael F Chiang: Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon; Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
315
Liebman DL, Chiang MF, Chodosh J. Realizing the Promise of Electronic Health Records: Moving Beyond "Paper on a Screen". Ophthalmology 2019; 126:331-334. [PMID: 30803511] [DOI: 10.1016/j.ophtha.2018.09.023] [Citation(s) in RCA: 4]
317
Wintergerst MWM, Petrak M, Li JQ, Larsen PP, Berger M, Holz FG, Finger RP, Krohne TU. Non-contact smartphone-based fundus imaging compared to conventional fundus imaging: a low-cost alternative for retinopathy of prematurity screening and documentation. Sci Rep 2019; 9:19711. [PMID: 31873142] [PMCID: PMC6928229] [DOI: 10.1038/s41598-019-56155-x] [Citation(s) in RCA: 27]
Abstract
Retinopathy of prematurity (ROP) is a frequent cause of treatable childhood blindness. The current dependency of telemedicine-based ROP screening on cost-intensive equipment does not meet the needs of economically disadvantaged regions. Smartphone-based fundus imaging (SBFI) allows for affordable and mobile fundus examination and, therefore, could facilitate cost-effective telemedicine-based ROP screening in low-resource settings. We compared non-contact SBFI and conventional contact fundus imaging (CFI) in terms of feasibility for ROP screening and documentation. Twenty-six eyes were imaged with both SBFI and CFI. Field of view was smaller (ratio of diameters, 1:2.5), level of detail was equal, and examination time was longer for SBFI as compared to CFI (109.0 ± 57.8 vs. 75.9 ± 36.3 seconds, p < 0.01). Good agreement with clinical evaluation by indirect funduscopy was achieved for assessment of plus disease and ROP stage for both SBFI (squared Cohen's kappa, 0.88 and 0.81, respectively) and CFI (0.86 and 0.93). Likewise, sensitivity/specificity for detection of plus disease and ROP was high for both SBFI (90%/100% and 88%/93%, respectively) and CFI (80%/100% and 100%/96%). SBFI is a non-contact and low-cost alternative to CFI for ROP screening and documentation that has the potential to considerably improve ROP care in middle- and low-resource settings.
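The agreement and accuracy statistics used here are standard quantities and easy to compute from raw counts. A small self-contained sketch follows, with illustrative numbers only; note the study reports the squared (quadratic-weighted) kappa for ordinal grades, while the unweighted form is shown for simplicity.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters
    over parallel lists of labels."""
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    p_obs = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical screening result: 9 of 10 diseased eyes flagged,
# all 16 healthy eyes correctly cleared.
sens, spec = sensitivity_specificity(tp=9, fn=1, tn=16, fp=0)
print(sens, spec)  # 0.9 1.0
```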
Affiliation(s)
- Michael Petrak: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Jeany Q Li: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Petra P Larsen: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Moritz Berger: Department of Medical Biometry, Informatics and Epidemiology, University Hospital Bonn, Sigmund-Freud-Str. 25, 53105, Bonn, Germany
- Frank G Holz: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Robert P Finger: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
- Tim U Krohne: Department of Ophthalmology, University of Bonn, Ernst-Abbe-Str. 2, 53127, Bonn, Germany
318
Beyond Performance Metrics: Automatic Deep Learning Retinal OCT Analysis Reproduces Clinical Trial Outcome. Ophthalmology 2020; 127:793-801. [PMID: 32019699] [DOI: 10.1016/j.ophtha.2019.12.015] [Citation(s) in RCA: 26]
Abstract
PURPOSE To validate the efficacy of a fully automatic, deep learning-based segmentation algorithm beyond conventional performance metrics by measuring the primary outcome of a clinical trial for macular telangiectasia type 2 (MacTel2). DESIGN Evaluation of diagnostic test or technology. PARTICIPANTS A total of 92 eyes from 62 participants with MacTel2 from a phase 2 clinical trial (NCT01949324) randomized to 1 of 2 treatment groups. METHODS The ellipsoid zone (EZ) defect areas were measured on spectral domain OCT images of each eye at 2 time points (baseline and month 24) by a fully automatic, deep learning-based segmentation algorithm. The change in EZ defect area from baseline to month 24 was calculated and analyzed according to the clinical trial protocol. MAIN OUTCOME MEASURE Difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups. RESULTS The difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups measured by the fully automatic segmentation algorithm was 0.072±0.035 mm2 (P = 0.021). This was comparable to the outcome of the clinical trial using semiautomatic measurements by expert readers, 0.065±0.033 mm2 (P = 0.025). CONCLUSIONS The fully automatic segmentation algorithm was as accurate as semiautomatic expert segmentation to assess EZ defect areas and was able to reliably reproduce the statistically significant primary outcome measure of the clinical trial. This approach, to validate the performance of an automatic segmentation algorithm on the primary clinical trial end point, provides a robust gauge of its clinical applicability.
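The end point reported here, a between-arm difference in mean change with its uncertainty, follows the usual two-sample pattern. The sketch below is a generic illustration with made-up numbers, not the trial's protocol-specified analysis.

```python
import math

def mean_and_se(xs):
    """Sample mean and its standard error (n-1 variance)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, math.sqrt(var / n)

def arm_difference(changes_a, changes_b):
    """Difference in mean change (e.g. EZ defect area growth, mm^2)
    between two treatment arms, with the SE of that difference."""
    ma, sea = mean_and_se(changes_a)
    mb, seb = mean_and_se(changes_b)
    return ma - mb, math.sqrt(sea ** 2 + seb ** 2)

# Hypothetical per-eye 24-month changes in each arm:
diff, se = arm_difference([1.0, 2.0, 3.0], [0.0, 1.0, 2.0])
print(diff, se)
```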
319
Burlina PM, Joshi N, Pacheco KD, Liu TYA, Bressler NM. Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration. JAMA Ophthalmol 2019; 137:258-264. [PMID: 30629091] [DOI: 10.1001/jamaophthalmol.2018.6156] [Citation(s) in RCA: 78]
Abstract
Importance Deep learning (DL) used for discriminative tasks in ophthalmology, such as diagnosing diabetic retinopathy or age-related macular degeneration (AMD), requires large image data sets graded by human experts to train deep convolutional neural networks (DCNNs). In contrast, generative DL techniques could synthesize large new data sets of artificial retina images with different stages of AMD. Such images could enhance existing data sets of common and rare ophthalmic diseases without concern for personally identifying information to assist medical education of students, residents, and retinal specialists, as well as for training new DL diagnostic models for which extensive data sets from large clinical trials of expertly graded images may not exist. Objective To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines. Design, Setting, and Participants Generative adversarial networks were trained on 133 821 color fundus images from 4613 study participants from the Age-Related Eye Disease Study (AREDS), generating synthetic fundus images with and without AMD. We compared retinal specialists' ability to diagnose AMD on both real and synthetic images, asking them to assess image gradability and testing their ability to discern real from synthetic images. The performance of AMD diagnostic DCNNs (referable vs not referable AMD) trained on either all-real vs all-synthetic data sets was compared. Main Outcomes and Measures Accuracy of 2 retinal specialists (T.Y.A.L. and K.D.P.) for diagnosing and distinguishing AMD on real vs synthetic images and diagnostic performance (area under the curve) of DL algorithms trained on synthetic vs real images. Results The diagnostic accuracy of 2 retinal specialists on real vs synthetic images was similar. The accuracy of diagnosis as referable vs nonreferable AMD compared with certified human graders for retinal specialist 1 was 84.54% (error margin, 4.06%) on real images vs 84.12% (error margin, 4.16%) on synthetic images and for retinal specialist 2 was 89.47% (error margin, 3.45%) on real images vs 89.19% (error margin, 3.54%) on synthetic images. Retinal specialists could not distinguish real from synthetic images, with an accuracy of 59.50% (error margin, 3.93%) for retinal specialist 1 and 53.67% (error margin, 3.99%) for retinal specialist 2. The DCNNs trained on real data showed an area under the curve of 0.9706 (error margin, 0.0029), and those trained on synthetic data showed an area under the curve of 0.9235 (error margin, 0.0045). Conclusions and Relevance Deep learning-synthesized images appeared to be realistic to retinal specialists, and DCNNs achieved diagnostic performance on synthetic data close to that for real images, suggesting that DL generative techniques hold promise for training humans and machines.
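Whether a discrimination accuracy such as 59.50% is meaningfully above the 50% chance level depends on the number of ratings, and can be checked with an exact binomial test. A stdlib-only sketch, where the 400-rating count is purely illustrative (taken from comparable study designs, not stated for this cohort):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of all outcomes
    no more likely than the observed count under the null hypothesis."""
    probs = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    observed = probs[k]
    return min(1.0, sum(q for q in probs if q <= observed * (1 + 1e-9)))

# e.g. 238 correct out of a hypothetical 400 ratings (59.5% accuracy)
print(binom_two_sided_p(238, 400))
```

With enough ratings, even a modest excess over 50% becomes statistically detectable, while an accuracy at exactly chance yields a p-value near 1.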
Affiliation(s)
- Philippe M Burlina: Johns Hopkins University Applied Physics Laboratory, Baltimore, Maryland; Malone Center for Engineering in Healthcare, Baltimore, Maryland; Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Neil Joshi: Johns Hopkins University Applied Physics Laboratory, Baltimore, Maryland
- Katia D Pacheco: Brazilian Center of Vision Eye Hospital, Brasilia, Distrito Federal, Brazil
- T Y Alvin Liu: Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Neil M Bressler: Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
320
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhang L, Xu F, Jin C, Zhang X, Xiao H, Zhang K, Zhao L, Yu S, Zhang G, Wang J, Lin H. A deep learning system for identifying lattice degeneration and retinal breaks using ultra-widefield fundus images. Ann Transl Med 2019; 7:618. [PMID: 31930019] [DOI: 10.21037/atm.2019.11.28] [Citation(s) in RCA: 34]
Abstract
Background Lattice degeneration and/or retinal breaks, defined as notable peripheral retinal lesions (NPRLs), are prone to evolving into rhegmatogenous retinal detachment, which can cause severe visual loss. However, screening for NPRLs is time-consuming and labor-intensive. Therefore, we aimed to develop and evaluate a deep learning (DL) system for automated identification of NPRLs based on ultra-widefield fundus (UWF) images. Methods A total of 5,606 UWF images from 2,566 participants were used to train and verify a DL system. All images were classified by 3 experienced ophthalmologists. The reference standard was determined when an agreement was achieved among all 3 ophthalmologists, or adjudicated by another retinal specialist if disagreements existed. An independent test set of 750 images was applied to verify the performance of 12 DL models trained using 4 different DL algorithms (InceptionResNetV2, InceptionV3, ResNet50, and VGG16) with 3 preprocessing techniques (original, augmented, and histogram-equalized images). Heatmaps were generated to visualize the process of the best DL system in the identification of NPRLs. Results In the test set, the best DL system for identifying NPRLs achieved an area under the curve (AUC) of 0.999 with a sensitivity and specificity of 98.7% and 99.2%, respectively. The best preprocessing method in each algorithm was the application of original image augmentation (average AUC = 0.996). The best algorithm in each preprocessing method was InceptionResNetV2 (average AUC = 0.996). In the test set, 150 of 154 true-positive cases (97.4%) displayed heatmap visualization in the NPRL regions. Conclusions A DL system has high accuracy in identifying NPRLs based on UWF images. This system may help to prevent the development of rhegmatogenous retinal detachment by early detection of NPRLs.
Affiliation(s)
- Zhongwen Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Chong Guo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Danyao Nie: Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Yi Zhu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Li Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Fabao Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Chenjin Jin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Xiayin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Hui Xiao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Kai Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; School of Computer Science and Technology, Xidian University, Xi'an 710071, China
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Shanshan Yu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Guoming Zhang: Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Jiantao Wang: Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
321
Ting DSW, Lee AY, Wong TY. An Ophthalmologist's Guide to Deciphering Studies in Artificial Intelligence. Ophthalmology 2019; 126:1475-1479. [PMID: 31635697] [PMCID: PMC7339681] [DOI: 10.1016/j.ophtha.2019.09.014] [Citation(s) in RCA: 31]
322
Tan Z, Simkin S, Lai C, Dai S. Deep Learning Algorithm for Automated Diagnosis of Retinopathy of Prematurity Plus Disease. Transl Vis Sci Technol 2019; 8:23. [PMID: 31819832] [PMCID: PMC6892443] [DOI: 10.1167/tvst.8.6.23] [Citation(s) in RCA: 45]
Abstract
PURPOSE This study describes the initial development of a deep learning algorithm, ROP.AI, to automatically diagnose retinopathy of prematurity (ROP) plus disease in fundal images. METHODS ROP.AI was trained using 6974 fundal images from Australasian image databases. Each image was given a diagnosis as part of real-world routine ROP screening and classified as normal or plus disease. The algorithm was trained using 80% of the images and validated against the remaining 20% within a hold-out test set. Performance in diagnosing plus disease was evaluated against an external set of 90 images. Performance in detecting pre-plus disease was also tested. As a screening tool, the algorithm's operating point was optimized for sensitivity and negative predictive value, and its performance reevaluated. RESULTS For plus disease diagnosis within the 20% hold-out test set, the algorithm achieved a 96.6% sensitivity, 98.0% specificity, and 97.3% ± 0.7% accuracy. Area under the receiver operating characteristic curve was 0.993. Within the independent test set, the algorithm achieved a 93.9% sensitivity, 80.7% specificity, and 95.8% negative predictive value. For detection of pre-plus and plus disease, the algorithm achieved 81.4% sensitivity, 80.7% specificity, and 80.7% negative predictive value. Following the identification of an optimized operating point, the algorithm diagnosed plus disease with a 97.0% sensitivity and 97.8% negative predictive value. CONCLUSIONS ROP.AI is a deep learning algorithm able to automatically diagnose ROP plus disease with high sensitivity and negative predictive value. TRANSLATIONAL RELEVANCE In the context of increasing global disease burden, future development may improve access to ROP diagnosis and care.
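Optimizing an operating point for screening, as described above, amounts to scanning candidate thresholds on a validation set and keeping the one that best trades specificity against a sensitivity (or negative-predictive-value) target. This is a generic sketch with hypothetical numbers, not ROP.AI's code:

```python
def pick_operating_point(points, min_sensitivity):
    """points: (threshold, sensitivity, specificity) tuples measured on a
    validation set. Return the point with the highest specificity among
    those meeting the screening sensitivity target, or None."""
    eligible = [p for p in points if p[1] >= min_sensitivity]
    return max(eligible, key=lambda p: p[2]) if eligible else None

# Hypothetical validation sweep: lowering the threshold raises sensitivity
# at the cost of specificity.
roc_points = [(0.9, 0.80, 0.99), (0.5, 0.95, 0.90), (0.2, 0.99, 0.70)]
print(pick_operating_point(roc_points, min_sensitivity=0.95))
```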
Affiliation(s)
- Zachary Tan: Save Sight Institute, The University of Sydney, Sydney, New South Wales, Australia; St Vincent's Hospital Sydney, Sydney, New South Wales, Australia; Faculty of Medicine, The University of Queensland, Brisbane, Queensland, Australia
- Samantha Simkin: Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand
- Connie Lai: Queen Mary Hospital, Hong Kong, China; Department of Ophthalmology, The University of Hong Kong, Hong Kong, China
- Shuan Dai: Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, The University of Auckland, Auckland, New Zealand; Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Queensland, Australia
323
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. Informacijos Mokslai 2019. [DOI: 10.15388/im.2019.85.20] [Citation(s) in RCA: 1]
Abstract
This article reviews the problems of eye fundus analysis and the semantic segmentation algorithms used to distinguish the retinal vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy, macular degeneration, etc., can be diagnosed through changes and anomalies of the vessels and the optic disc. Convolutional neural networks, especially the U-Net architecture, are well suited for semantic segmentation. A number of U-Net modifications have been developed recently that deliver excellent performance results.
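Segmentation performance in work of this kind is typically scored with an overlap metric such as the Dice coefficient; the review itself surveys architectures rather than prescribing code, so the following is only a minimal illustration, with masks represented as sets of pixel indices.

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as sets of pixel
    indices: 2|A∩B| / (|A| + |B|), 1.0 for perfect agreement."""
    if not pred and not truth:
        return 1.0  # both masks empty: define as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

predicted = {1, 2, 3}
ground_truth = {2, 3, 4}
print(dice_coefficient(predicted, ground_truth))  # 0.666...
```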
324
Xu J, Xue K, Zhang K. Current status and future trends of clinical diagnoses via image-based deep learning. Theranostics 2019; 9:7556-7565. [PMID: 31695786] [PMCID: PMC6831476] [DOI: 10.7150/thno.38065] [Citation(s) in RCA: 45]
Abstract
With the recent developments in deep learning technologies, artificial intelligence (AI) has gradually been transformed from cutting-edge technology into practical applications. AI plays an important role in disease diagnosis and treatment, health management, drug research and development, and precision medicine. Interdisciplinary collaborations will be crucial to develop new AI algorithms for medical applications. In this paper, we review the basic workflow for building an AI model, identify publicly available databases of ocular fundus images, and summarize over 60 papers contributing to the field of AI development.
325
Yıldız İ, Tian P, Dy J, Erdoğmuş D, Brown J, Kalpathy-Cramer J, Ostmo S, Campbell JP, Chiang MF, Ioannidis S. Classification and comparison via neural networks. Neural Netw 2019; 118:65-80. [PMID: 31254769] [PMCID: PMC6718310] [DOI: 10.1016/j.neunet.2019.06.004] [Citation(s) in RCA: 5]
Abstract
We consider learning from comparison labels generated as follows: given two samples in a dataset, a labeler produces a label indicating their relative order. Such comparison labels scale quadratically with the dataset size; most importantly, in practice, they often exhibit lower variance compared to class labels. We propose a new neural network architecture based on siamese networks to incorporate both class and comparison labels in the same training pipeline, using Bradley-Terry and Thurstone loss functions. Our architecture leads to a significant improvement in predicting both class and comparison labels, increasing classification AUC by as much as 35% and comparison AUC by as much as 6% on several real-life datasets. We further show that, by incorporating comparisons, training from few samples becomes possible: a deep neural network of 5.9 million parameters trained on 80 images attains a 0.92 AUC when incorporating comparisons.
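The Bradley-Terry component of this setup has a compact closed form: each sample receives a latent score, and a comparison label is modeled through the score difference. A minimal stdlib sketch follows; the paper couples this with siamese CNNs that produce the scores, whereas here the scores are plain numbers.

```python
import math

def bradley_terry_prob(score_i, score_j):
    """P(sample i is ranked above sample j) under Bradley-Terry:
    a logistic function of the score difference."""
    return 1.0 / (1.0 + math.exp(-(score_i - score_j)))

def comparison_nll(score_i, score_j, label):
    """Negative log-likelihood of one comparison label
    (label = 1 if i beats j, else 0); the quantity minimized in training."""
    p = bradley_terry_prob(score_i, score_j)
    return -math.log(p if label == 1 else 1.0 - p)

print(bradley_terry_prob(0.0, 0.0))  # 0.5: equal scores, coin-flip comparison
```

Minimizing this loss over many pairwise labels pushes the score of the more severe sample above that of the less severe one, which is how comparison labels supplement scarce class labels.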
Affiliation(s)
- İlkay Yıldız: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA
- Peng Tian: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA
- Jennifer Dy: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA
- Deniz Erdoğmuş: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA
- James Brown: Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114, USA
- Susan Ostmo: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- J Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Michael F Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Stratis Ioannidis: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA
326
Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, Mahendiran T, Moraes G, Shamdas M, Kern C, Ledsam JR, Schmid MK, Balaskas K, Topol EJ, Bachmann LM, Keane PA, Denniston AK. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019; 1:e271-e297. [PMID: 33323251] [DOI: 10.1016/s2589-7500(19)30123-2] [Citation(s) in RCA: 777]
Abstract
BACKGROUND Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. METHODS In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. FINDINGS Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. Comparison of the performance between deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals. INTERPRETATION Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address the specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology. FUNDING None.
Affiliation(s)
- Xiaoxuan Liu
  Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK; Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Health Data Research UK, London, UK
- Livia Faes
  Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland
- Aditya U Kale
  Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Siegfried K Wagner
  NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Dun Jack Fu
  Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Alice Bruynseels
  Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Thushika Mahendiran
  Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Gabriella Moraes
  Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Mohith Shamdas
  Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Christoph Kern
  Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; University Eye Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Martin K Schmid
  Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland
- Konstantinos Balaskas
  Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Eric J Topol
  Scripps Research Translational Institute, La Jolla, California
- Pearse A Keane
  NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Health Data Research UK, London, UK
- Alastair K Denniston
  Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK; Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Health Data Research UK, London, UK
327. Smartphone-based fundus photography for screening of plus-disease retinopathy of prematurity. Graefes Arch Clin Exp Ophthalmol 2019; 257:2579-2585. [PMID: 31501929] [DOI: 10.1007/s00417-019-04470-4]
Abstract
BACKGROUND Inadequate screening of treatment-warranted retinopathy of prematurity (ROP) can lead to devastating visual outcomes. Especially in resource-poor communities, an affordable, portable, and easy-to-use smartphone-based non-contact fundus photography device may prove useful for screening for high-risk ROP. This study evaluates the feasibility of screening for high-risk ROP using a novel smartphone-based fundus photography device, RetinaScope. METHODS Retinal images were obtained using RetinaScope on a cohort of prematurely born infants during routine examinations for ROP. Images were reviewed by two masked graders, who determined the image quality, the presence or absence of plus disease, and whether there was retinopathy that met predefined criteria for referral. The image-based assessments were compared against the gold-standard indirect ophthalmoscopic assessment. RESULTS Fifty-four eyes of 27 infants were included. A wide-field fundus photograph was obtained using RetinaScope. Image quality was acceptable or excellent in 98% and 95% of cases for the two graders, respectively. There was substantial agreement between the gold-standard and photographic assessments of the presence or absence of plus disease (Cohen's κ = 0.85). Intergrader agreement on the presence of any retinopathy in photographs was also high (κ = 0.92). CONCLUSIONS RetinaScope can capture digital retinal photographs of prematurely born infants with image quality sufficient for grading of plus disease.
328. Ting DSJ, Ang M, Mehta JS, Ting DSW. Artificial intelligence-assisted telemedicine platform for cataract screening and management: a potential model of care for global eye health. Br J Ophthalmol 2019; 103:1537-1538. [DOI: 10.1136/bjophthalmol-2019-315025]
330. Chiang MF. Making Progress Toward an Electronic Infrastructure for Ophthalmic Care. JAMA Ophthalmol 2019; 137:975-976. [DOI: 10.1001/jamaophthalmol.2019.1996]
Affiliation(s)
- Michael F. Chiang
  Department of Ophthalmology, Oregon Health & Science University, Portland
  Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland
  Casey Eye Institute, Oregon Health & Science University, Portland
331. Gupta K, Campbell JP, Taylor S, Brown JM, Ostmo S, Chan RVP, Dy J, Erdogmus D, Ioannidis S, Kalpathy-Cramer J, Kim SJ, Chiang MF. A Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning to Monitor Disease Regression After Treatment. JAMA Ophthalmol 2019; 137:1029-1036. [PMID: 31268499] [PMCID: PMC6613298] [DOI: 10.1001/jamaophthalmol.2019.2442]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but treatment failure and disease recurrence are important causes of adverse outcomes in patients with treatment-requiring ROP (TR-ROP). Objectives To apply an automated ROP vascular severity score obtained using a deep learning algorithm and to assess its utility for objectively monitoring ROP regression after treatment. Design, Setting, and Participants This retrospective cohort study used data from the Imaging and Informatics in ROP consortium, which comprises 9 tertiary referral centers in North America that screen high volumes of at-risk infants for ROP. Images of 5255 clinical eye examinations from 871 infants performed between July 2011 and December 2016 were assessed for eligibility in the present study. The disease course was assessed over time across multiple examinations for patients with TR-ROP. Infants born prematurely who met screening criteria for ROP, developed TR-ROP, and had images captured within 4 weeks before and after treatment as well as at the time of treatment were included. Main Outcomes and Measures The primary outcome was mean (SD) ROP vascular severity score before, at the time of, and after treatment. A deep learning classifier was used to assign a continuous ROP vascular severity score, which ranged from 1 (normal) to 9 (most severe), at each examination. A secondary outcome was the difference in ROP vascular severity score between eyes treated with laser and those treated with the vascular endothelial growth factor antagonist bevacizumab. Differences between groups for both outcomes were assessed using unpaired 2-tailed t tests with Bonferroni correction. Results Of 5255 examined eyes, 91 developed TR-ROP, of which 46 eyes met the inclusion criteria based on the available images. The mean (SD) birth weight of those patients was 653 (185) g, with a mean (SD) gestational age of 24.9 (1.3) weeks. The mean (SD) ROP vascular severity score increased significantly from 2 weeks prior to treatment (4.19 [1.75]) to a peak at treatment (7.43 [1.89]) and decreased by 2 weeks after treatment (4.00 [1.88]) (all P < .001). Eyes requiring retreatment with laser had higher ROP vascular severity scores at the time of initial treatment compared with eyes receiving a single treatment (P < .001). Conclusions and Relevance This quantitative ROP vascular severity score appears to consistently reflect clinical disease progression and posttreatment regression in eyes with TR-ROP. These study results may have implications for monitoring patients with ROP for treatment failure and disease recurrence and for determining the appropriate level of disease severity for primary treatment in eyes with aggressive disease.
Affiliation(s)
- Kishan Gupta
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- J. Peter Campbell
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Stanford Taylor
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- James M. Brown
  Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown
- Susan Ostmo
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- R. V. Paul Chan
  Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Jennifer Dy
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Deniz Erdogmus
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Stratis Ioannidis
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Jayashree Kalpathy-Cramer
  Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown
  Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston
- Sang J. Kim
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
  Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Michael F. Chiang
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
  Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland
332. Taylor S, Brown JM, Gupta K, Campbell JP, Ostmo S, Chan RVP, Dy J, Erdogmus D, Ioannidis S, Kim SJ, Kalpathy-Cramer J, Chiang MF. Monitoring Disease Progression With a Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning. JAMA Ophthalmol 2019; 137:1022-1028. [PMID: 31268518] [PMCID: PMC6613341] [DOI: 10.1001/jamaophthalmol.2019.2433]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but clinical diagnosis is subjective and qualitative. Objective To describe a quantitative ROP severity score derived using a deep learning algorithm designed to evaluate plus disease and to assess its utility for objectively monitoring ROP progression. Design, Setting, and Participants This retrospective cohort study included images from 5255 clinical examinations of 871 premature infants who met the ROP screening criteria of the Imaging and Informatics in ROP (i-ROP) Consortium, which comprises 9 tertiary care centers in North America, from July 1, 2011, to December 31, 2016. Data analysis was performed from July 2017 to May 2018. Exposure A deep learning algorithm was used to assign a continuous ROP vascular severity score from 1 (most normal) to 9 (most severe) at each examination based on a single posterior photograph, compared with a reference standard diagnosis (RSD) simplified into 4 categories: no ROP, mild ROP, type 2 ROP or pre-plus disease, or type 1 ROP. Disease course was assessed longitudinally across multiple examinations for all patients. Main Outcomes and Measures Mean ROP vascular severity score progression over time compared with the RSD. Results A total of 5255 clinical examinations from 871 infants (mean [SD] gestational age, 27.0 [2.0] weeks; 493 [56.6%] male; mean [SD] birth weight, 949 [271] g) were analyzed. The median severity scores for each category were as follows: 1.1 (interquartile range [IQR], 1.0-1.5) (no ROP), 1.5 (IQR, 1.1-3.4) (mild ROP), 4.6 (IQR, 2.4-5.3) (type 2 and pre-plus), and 7.5 (IQR, 5.0-8.7) (treatment-requiring ROP) (P < .001). When median severity scores over time were compared between eyes that progressed to treatment and those that never required treatment, the median score was higher in the treatment group by 0.06 at 30 to 32 weeks, 0.75 at 32 to 34 weeks, 3.56 at 34 to 36 weeks, 3.71 at 36 to 38 weeks, and 3.24 at 38 to 40 weeks postmenstrual age (P < .001 for all comparisons). Conclusions and Relevance The findings suggest that the proposed ROP vascular severity score is associated with category of disease at a given point in time and with clinical progression of ROP in premature infants. Automated image analysis may be used to quantify clinical disease progression and identify infants at high risk of eventually developing treatment-requiring ROP. This finding has implications for the quality and delivery of ROP care and for future approaches to disease classification.
Affiliation(s)
- Stanford Taylor
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- James M. Brown
  Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown
- Kishan Gupta
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- J. Peter Campbell
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- Susan Ostmo
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
- R. V. Paul Chan
  Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
- Jennifer Dy
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Deniz Erdogmus
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Stratis Ioannidis
  Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Sang J. Kim
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
  Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jayashree Kalpathy-Cramer
  Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown
  Massachusetts General Hospital and Brigham and Women's Hospital Center for Clinical Data Science, Boston
- Michael F. Chiang
  Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland
  Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland
333. Ting DS, Peng L, Varadarajan AV, Keane PA, Burlina PM, Chiang MF, Schmetterer L, Pasquale LR, Bressler NM, Webster DR, Abramoff M, Wong TY. Deep learning in ophthalmology: The technical and clinical considerations. Prog Retin Eye Res 2019; 72:100759. [DOI: 10.1016/j.preteyeres.2019.04.003]
334. Bellemo V, Lim G, Rim TH, Tan GSW, Cheung CY, Sadda S, He MG, Tufail A, Lee ML, Hsu W, Ting DSW. Artificial Intelligence Screening for Diabetic Retinopathy: the Real-World Emerging Application. Curr Diab Rep 2019; 19:72. [PMID: 31367962] [DOI: 10.1007/s11892-019-1189-3]
Abstract
PURPOSE OF REVIEW This paper systematically reviews recent progress in diabetic retinopathy screening. It provides an integrated overview of the current state of knowledge of emerging techniques using artificial intelligence integration in national screening programs around the world, evaluates existing methodological approaches and research insights, and identifies existing gaps and future directions. RECENT FINDINGS Over the past decades, artificial intelligence has emerged into the scientific consciousness with breakthroughs that are sparking increasing interest among the computer science and medical communities. Specifically, machine learning and deep learning (a subtype of machine learning) applications of artificial intelligence are spreading into areas that were previously thought to be the purview of humans alone, and a number of applications in the ophthalmology field have been explored. Multiple studies around the world have demonstrated that such systems can perform on par with clinical experts, with robust diagnostic performance in diabetic retinopathy diagnosis. However, only a few tools have been evaluated in prospective clinical studies. Given the rapid and impressive progress of artificial intelligence technologies, the implementation of deep learning systems into routinely practiced diabetic retinopathy screening could represent a cost-effective alternative to help reduce the incidence of preventable blindness around the world.
Affiliation(s)
- Valentina Bellemo
  Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
- Gilbert Lim
  Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
  School of Computing, National University of Singapore, Singapore
- Tyler Hyungtaek Rim
  Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
  Duke-NUS Medical School, Singapore
- Gavin S W Tan
  Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
  Duke-NUS Medical School, Singapore
- Carol Y Cheung
  Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- SriniVas Sadda
  Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Ming-Guang He
  Centre for Eye Research Australia, Melbourne, Victoria, Australia
- Adnan Tufail
  Moorfields Eye Hospital & Institute of Ophthalmology, UCL, London, UK
- Mong Li Lee
  School of Computing, National University of Singapore, Singapore
- Wynne Hsu
  School of Computing, National University of Singapore, Singapore
- Daniel Shu Wei Ting
  Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore 168751, Singapore
  Duke-NUS Medical School, Singapore
336. Dzobo K, Adotey S, Thomford NE, Dzobo W. Integrating Artificial and Human Intelligence: A Partnership for Responsible Innovation in Biomedical Engineering and Medicine. OMICS: A Journal of Integrative Biology 2019; 24:247-263. [PMID: 31313972] [DOI: 10.1089/omi.2019.0038]
Abstract
Historically, the term "artificial intelligence" dates to 1956, when it was first used at a conference at Dartmouth College in the US. Since then, the development of artificial intelligence has in part been shaped by the field of neuroscience. By understanding the human brain, scientists have attempted to build new intelligent machines capable of performing complex tasks akin to humans. Indeed, future research into artificial intelligence will continue to benefit from the study of the human brain. While the development of artificial intelligence algorithms has been fast paced, the actual use of most artificial intelligence (AI) algorithms in biomedical engineering and clinical practice is still markedly below its conceivably broader potential. This is partly because, for any algorithm to be incorporated into existing workflows, it has to stand the test of scientific validation, clinical and personal utility, and application context, and be equitable as well. In this context, there is much to be gained by combining AI and human intelligence (HI). Harnessing Big Data, computing power, and storage capacities, and addressing societal issues emergent from algorithm applications, demand deploying HI in tandem with AI. Very few countries, even economically developed states, have adequate and critical governance frames to best understand and steer AI innovation trajectories in health care. Drug discovery and translational pharmaceutical research stand to gain from AI technology provided they are also informed by HI. In this expert review, we analyze the ways in which AI applications are likely to traverse the continuum of life from birth to death, encompassing not only humans but also all animal, plant, and other living organisms that are increasingly touched by AI. Examples of AI applications include digital health, diagnosis of diseases in newborns, remote monitoring of health by smart devices, real-time Big Data analytics for prompt diagnosis of heart attacks, and facial analysis software with consequences for civil liberties. While we underscore the need for integration of AI and HI, we note that AI technology does not have to replace medical specialists or scientists; rather, it is in need of such expert HI. Altogether, AI and HI offer synergy for responsible innovation and veritable prospects for improving health care from prevention to diagnosis to therapeutics, while the unintended consequences of automation emergent from AI and algorithms should be borne in mind for scientific cultures, the workforce, and society at large.
Affiliation(s)
- Kevin Dzobo
  International Centre for Genetic Engineering and Biotechnology (ICGEB), Cape Town Component, Wernher and Beit Building (South), UCT Medical Campus, Anzio Road, Observatory 7925, Cape Town, South Africa
  Division of Medical Biochemistry and Institute of Infectious Disease and Molecular Medicine, Department of Integrative Biomedical Sciences, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa
- Sampson Adotey
  International Development Innovation Network, D-Lab, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Nicholas E Thomford
  Pharmacogenetics Research Group, Division of Human Genetics, Department of Pathology and Institute of Infectious Diseases and Molecular Medicine, Faculty of Health Sciences, University of Cape Town, Observatory 7925, Cape Town, South Africa
- Witness Dzobo
  Pathology and Immunology Department, University Hospital Southampton, Mail Point B, Tremona Road, Southampton, UK
  University of Portsmouth, Faculty of Science, St Michael's Building, White Swan Road, Portsmouth, UK
337. Kessel K, Mattila J, Linder N, Kivelä T, Lundin J. Deep Learning Algorithms for Corneal Amyloid Deposition Quantitation in Familial Amyloidosis. Ocul Oncol Pathol 2019; 6:58-65. [PMID: 32002407] [DOI: 10.1159/000500896]
Abstract
Objectives The aim of this study was to train and validate deep learning algorithms to quantitate relative amyloid deposition (RAD; mean amyloid deposited area per stromal area) in corneal sections from patients with familial amyloidosis, Finnish (FAF), and assess its relationship with visual acuity. Methods Corneal specimens were obtained from 42 patients undergoing penetrating keratoplasty, stained with Congo red, and digitally scanned. Areas of amyloid deposits and areas of stromal tissue were labeled on a pixel level for training and validation. The algorithms were used to quantify RAD in each cornea, and the association of RAD with visual acuity was assessed. Results In the validation of the amyloid area classification, sensitivity was 86%, specificity 92%, and F-score 81. For corneal stromal area classification, sensitivity was 74%, specificity 82%, and F-score 73. There was insufficient evidence to demonstrate correlation (Spearman's rank correlation, -0.264, p = 0.091) between RAD and visual acuity (logMAR). Conclusions Deep learning algorithms can achieve a high sensitivity and specificity in pixel-level classification of amyloid and corneal stromal area. Further modeling and development of algorithms to assess earlier stages of deposition from clinical images is necessary to better assess the correlation between amyloid deposition and visual acuity. The method might be applied to corneal dystrophies as well.
Affiliation(s)
- Klaus Kessel
  Institute for Molecular Medicine Finland (FIMM), HiLIFE, University of Helsinki, Helsinki, Finland
- Jaakko Mattila
  Cornea Service, Department of Ophthalmology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Nina Linder
  Institute for Molecular Medicine Finland (FIMM), HiLIFE, University of Helsinki, Helsinki, Finland
  Department of Women's and Children's Health, International Maternal and Child Health (IMCH), Uppsala University, Uppsala, Sweden
- Tero Kivelä
  Ophthalmic Pathology Laboratory, Department of Ophthalmology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
  Ophthalmic Pathology, Hospital District of Helsinki and Uusimaa Laboratory (HUSLAB), Helsinki, Finland
- Johan Lundin
  Institute for Molecular Medicine Finland (FIMM), HiLIFE, University of Helsinki, Helsinki, Finland
  Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden
338. Plus Disease in Telemedicine Approaches to Evaluating Acute-Phase ROP (e-ROP) Study: Characteristics, Predictors, and Accuracy of Image Grading. Ophthalmology 2019; 126:868-875. [DOI: 10.1016/j.ophtha.2019.01.021]
339. Smits DJ, Elze T, Wang H, Pasquale LR. Machine Learning in the Detection of the Glaucomatous Disc and Visual Field. Semin Ophthalmol 2019; 34:232-242. [PMID: 31132292] [DOI: 10.1080/08820538.2019.1620801]
Abstract
Glaucoma is the leading cause of irreversible blindness worldwide. Early detection is of utmost importance as there is abundant evidence that early treatment prevents disease progression, preserves vision, and improves patients' long-term quality of life. The structure and function thresholds that alert to the diagnosis of glaucoma can be obtained entirely via digital means, and as such, screening is well suited to benefit from artificial intelligence and specifically machine learning. This paper reviews the concepts and current literature on the use of machine learning for detection of the glaucomatous disc and visual field.
Affiliation(s)
- David J Smits
  Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Tobias Elze
  Schepens Eye Research Institute, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Haobing Wang
  Harvard Medical School, Massachusetts Eye and Ear Infirmary, Boston, USA
- Louis R Pasquale
  Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
340
Begley BA, Martin J, Tufty GT, Suh DW. Evaluation of a Remote Telemedicine Screening System for Severe Retinopathy of Prematurity. J Pediatr Ophthalmol Strabismus 2019; 56:157-161. [PMID: 31116862] [DOI: 10.3928/01913913-20190215-01]
Abstract
PURPOSE To evaluate the validity of remote telemedicine screening for retinopathy of prematurity (ROP) in a population of at-risk preterm infants in Iowa and South Dakota. METHODS The medical records for all preterm infants screened for ROP at neonatal intensive care units (NICUs) in Sioux City, Iowa, and Sioux Falls, South Dakota, from September 1, 2017, to July 31, 2018, were retrospectively reviewed. The RetCam Shuttle (Natus Medical Inc., Pleasanton, CA) was used to capture retinal images, which were posted on a secure server for evaluation by a pediatric ophthalmologist. Infants with suspected ROP approaching the criteria for treatment with anti-vascular endothelial growth factor (VEGF) medications were transferred to the Children's Hospital and Medical Center NICU in Omaha, Nebraska, where a comprehensive examination was performed and treatment was administered when indicated. The remaining infants received an outpatient comprehensive examination by one of two pediatric ophthalmologists within 2 weeks of discharge. RESULTS A total of 124 telemedicine examinations were performed on 35 infants during the study period. Remote telemedicine screening for referral-warranted ROP using the RetCam Shuttle had a sensitivity of 100%, specificity of 97%, positive predictive value of 66.7%, and negative predictive value of 100%. Of the three infants transferred for referral-warranted ROP, two required treatment with anti-VEGF medications. Good outcomes were noted in all cases, and no patients progressed beyond stage 3 ROP. CONCLUSIONS Telemedicine screening reliably detected referral-warranted ROP in at-risk premature infants at two remote sites, with no poor outcomes during the 11-month period. These results demonstrate the validity and utility of remote telemedicine screening for ROP. [J Pediatr Ophthalmol Strabismus. 2019;56(3):157-161.].
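The screening statistics reported above follow directly from standard confusion-matrix formulas; a minimal sketch in Python (the counts TP=2, FP=1, TN=32, FN=0 are an assumption chosen to be consistent with the reported percentages, not figures taken from the paper):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Assumed counts consistent with the reported 100% / 97% / 66.7% / 100%:
sens, spec, ppv, npv = screening_metrics(tp=2, fp=1, tn=32, fn=0)
print(f"sens={sens:.0%} spec={spec:.0%} ppv={ppv:.1%} npv={npv:.0%}")
# → sens=100% spec=97% ppv=66.7% npv=100%
```

Note how a perfect sensitivity and NPV coexist with a modest PPV when the condition is rare: even one false positive among three referrals drops PPV to two-thirds.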
341
Vijayalakshmi C, Sakthivel P, Vinekar A. Automated Detection and Classification of Telemedical Retinopathy of Prematurity Images. Telemed J E Health 2019; 26:354-358. [PMID: 31084534] [DOI: 10.1089/tmj.2019.0004]
Abstract
Background: Retinopathy of prematurity (ROP) is a retinal disorder of low birth weight infants and a leading cause of childhood blindness. The capability of wide field digital imaging systems to capture the clinical features of ROP has greatly helped physicians assess the severity of ROP and prevent childhood blindness due to ROP. Currently there is a lack of automated systems that assess the severity of ROP to assist the ROP specialist in making treatment decisions. Objective: To present an automated detection and classification approach to assess the severity of ROP using wide field telemedical images. Materials and Methods: A total of 160 telemedical ROP (tele-ROP) images were collected, of which 36 images were Normal, 79 images were Stage 2, and 45 images were Stage 3. Hessian analysis and a support vector machine (SVM) classifier were used to detect and classify the severity of ROP from tele-ROP images. Results: The Normal, Stage 2, and Stage 3 images were classified using the SVM, achieving an accuracy of 91.8%, sensitivity of 90.37%, specificity of 94.65%, false positive rate of 5.35%, and false negative rate of 9.63%. Conclusions: An automated approach to detecting and classifying ROP would support pediatric ophthalmologists in making early treatment decisions with optimal care.
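The pipeline above ends in a conventional SVM decision stage. A minimal sketch of that stage only, using hinge-loss sub-gradient training in the style of Pegasos on made-up two-dimensional feature vectors (the feature values and labels are hypothetical; the published system extracts Hessian-based vessel features and may use a kernel SVM rather than this linear, no-intercept variant):

```python
def train_linear_svm(X, y, lam=0.1, epochs=200):
    """Linear SVM (no intercept) via Pegasos-style sub-gradient descent."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                           # hinge violated: step toward yi*xi
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    """Sign of the decision function: +1 or -1."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Hypothetical centered feature vectors: +1 for one stage, -1 for the other
X = [(2.0, 2.0), (3.0, 1.0), (1.0, 3.0), (-2.0, -2.0), (-3.0, -1.0), (-1.0, -3.0)]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
```

A multi-class problem like Normal / Stage 2 / Stage 3 would wrap several such binary classifiers in a one-vs-rest or one-vs-one scheme.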
Affiliation(s)
- C Vijayalakshmi
- Department of Electronics and Communication Engineering, College of Engineering, Anna University, Chennai, Tamil Nadu, India
- P Sakthivel
- Department of Electronics and Communication Engineering, College of Engineering, Anna University, Chennai, Tamil Nadu, India
- Anand Vinekar
- Karnataka Internet-Assisted Diagnosis of Retinopathy of Prematurity, Department of Pediatric Retina, Narayana Nethralaya, Bangalore, Karnataka, India
342
Moshfeghi DM, Capone A. Economic Barriers in Retinopathy of Prematurity Management. Ophthalmol Retina 2019; 2:1177-1178. [PMID: 31047186] [DOI: 10.1016/j.oret.2018.10.002]
Affiliation(s)
- Darius M Moshfeghi
- Horngren Family Vitreoretinal Center, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Antonio Capone
- Associated Retinal Consultants, Royal Oak, Michigan; Oakland University William Beaumont School of Medicine, Auburn Hills, Michigan
343
Automated Classification of the Tympanic Membrane Using a Convolutional Neural Network. Appl Sci (Basel) 2019. [DOI: 10.3390/app9091827]
Abstract
Precise evaluation of the tympanic membrane (TM) is required for accurate diagnosis of middle ear diseases. However, making an accurate assessment is sometimes difficult. Artificial intelligence is often employed for image processing, especially for high-level analysis such as image classification, segmentation and matching. In particular, convolutional neural networks (CNNs) are increasingly used in medical image recognition. This study demonstrates the usefulness and reliability of CNNs in recognizing the side and perforation of TMs in medical images. The CNN was constructed with a typical six-layer architecture. After random assignment of the available images to the training, validation and test sets, training was performed. The accuracy of the CNN model was then evaluated using a new dataset. A class activation map (CAM) was used to evaluate feature extraction. The CNN model's accuracy in detecting the TM side in the test dataset was 97.9%, whereas that of detecting the presence of a perforation was 91.0%. Both the side of the TM and the presence of a perforation affected the activation sites. The results show that CNNs can be a useful tool for classifying TM lesions and identifying TM sides. Further research is required to consider real-time analysis and to improve classification accuracy.
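The class activation map mentioned above (Zhou et al.'s CAM) highlights where a CNN "looked" by weighting the final convolutional feature maps with the classifier weights of one class. A sketch of that computation, assuming a global-average-pooling head; the array shapes and names here are illustrative, not taken from the paper:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weight the last conv layer's feature maps by one class's FC weights.

    feature_maps: (C, H, W) activations from the final conv layer.
    fc_weights:   (num_classes, C) weights of the global-average-pooling classifier.
    Returns an (H, W) heat map normalized to [0, 1].
    """
    # Contract the channel axis: cam[h, w] = sum_c w[c] * feature_maps[c, h, w]
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: channel 0 activates at (0, 0); class 0 weights only channel 0
fm = np.zeros((2, 2, 2))
fm[0, 0, 0] = 1.0
fm[1, 1, 1] = 1.0
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
cam = class_activation_map(fm, weights, class_idx=0)
```

The resulting heat map, upsampled to the input resolution, is what reveals whether the side or the perforation drove the activation sites.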
344
Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: a clinical validation study. Lancet Digit Health 2019; 1:e35-e44. [DOI: 10.1016/s2589-7500(19)30004-4]
345
Fielder AR, Wallace DK, Stahl A, Reynolds JD, Chiang MF, Quinn GE. Describing Retinopathy of Prematurity: Current Limitations and New Challenges. Ophthalmology 2019; 126:652-654. [PMID: 31005186] [DOI: 10.1016/j.ophtha.2018.12.034]
346
Nisha KL, Sreelekha G, Sathidevi PS, Mohanachandran P, Vinekar A. A computer-aided diagnosis system for plus disease in retinopathy of prematurity with structure adaptive segmentation and vessel based features. Comput Med Imaging Graph 2019; 74:72-94. [PMID: 31039506] [DOI: 10.1016/j.compmedimag.2019.04.003]
Abstract
Retinopathy of Prematurity (ROP) is a blinding disease affecting the retina of low birth-weight preterm infants. Accurate diagnosis of ROP is essential to identify treatment-requiring ROP, which would help to prevent childhood blindness. Plus disease, characterized by abnormal twisting, widening and branching of the blood vessels, is a significant symptom of treatment-requiring ROP. In this paper, we have developed and evaluated a computer-based analysis system for objective assessment of plus disease in ROP, which best mimics the clinical method of disease diagnosis by identifying unique vessel-based features. The proposed system consists of an initial segmentation stage, which efficiently extracts blood vessels of varying width and length by utilizing structure adaptive filtering, connectivity analysis and image fusion. The paper proposes the use of additional retinal features, namely leaf node count and vessel density, to portray the abnormal growth and branching of the blood vessels and to complement the commonly used features, namely tortuosity and width. The test results show a better classification of plus disease in terms of sensitivity (95%) and specificity (93%), emphasizing the superiority of the proposed segmentation algorithm and vessel-based features. An additional advantage of the proposed system is that the selection of relevant vessels for feature extraction is fully automated, which makes the system highly useful to non-physician graders, owing to the unavailability of a sufficient number of ROP specialists.
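Two of the vessel-based features named above have simple geometric definitions. A minimal sketch: the centerline points and the skeleton-graph edge list are assumed to come from the segmentation stage, and arc-over-chord is only the most common of several tortuosity definitions in the literature:

```python
import math

def tortuosity(centerline):
    """Arc length over chord length of a sampled vessel centerline (>= 1.0)."""
    arc = sum(math.dist(p, q) for p, q in zip(centerline, centerline[1:]))
    return arc / math.dist(centerline[0], centerline[-1])

def leaf_node_count(edges):
    """Number of degree-1 nodes (terminal branch tips) in a vessel skeleton graph."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sum(1 for d in degree.values() if d == 1)

# A straight segment has tortuosity exactly 1.0; a bent one exceeds it
print(tortuosity([(0, 0), (1, 0), (2, 0)]))  # → 1.0
print(tortuosity([(0, 0), (1, 1), (2, 0)]))  # → 1.414... (2*sqrt(2) / 2)
# A Y-shaped skeleton (root 0, fork at 1, tips 2 and 3) has three leaves
print(leaf_node_count([(0, 1), (1, 2), (1, 3)]))  # → 3
```

Vessel density, the other proposed feature, is simply the fraction of image pixels labeled as vessel by the segmentation.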
Affiliation(s)
- K L Nisha
- National Institute of Technology Calicut, Kerala, India
- Sreelekha G
- National Institute of Technology Calicut, Kerala, India
- P S Sathidevi
- National Institute of Technology Calicut, Kerala, India
- Anand Vinekar
- Narayana Nethralaya PG Institute of Ophthalmology, Bangalore, India
347
Peng Y, Dharssi S, Chen Q, Keenan TD, Agrón E, Wong WT, Chew EY, Lu Z. DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs. Ophthalmology 2019; 126:565-575. [PMID: 30471319] [PMCID: PMC6435402] [DOI: 10.1016/j.ophtha.2018.11.015]
Abstract
PURPOSE In assessing the severity of age-related macular degeneration (AMD), the Age-Related Eye Disease Study (AREDS) Simplified Severity Scale predicts the risk of progression to late AMD. However, its manual use requires the time-consuming participation of expert practitioners. Although several automated deep learning systems have been developed for classifying color fundus photographs (CFP) of individual eyes by AREDS severity score, none to date has used a patient-based scoring system that uses images from both eyes to assign a severity score. DESIGN DeepSeeNet, a deep learning model, was developed to classify patients automatically by the AREDS Simplified Severity Scale (score 0-5) using bilateral CFP. PARTICIPANTS DeepSeeNet was trained on 58,402 images and tested on 900 images from the longitudinal follow-up of 4549 participants from AREDS. Gold standard labels were obtained using reading center grades. METHODS DeepSeeNet simulates the human grading process by first detecting individual AMD risk factors (drusen size, pigmentary abnormalities) for each eye and then calculating a patient-based AMD severity score using the AREDS Simplified Severity Scale. MAIN OUTCOME MEASURES Overall accuracy, specificity, sensitivity, Cohen's kappa, and area under the curve (AUC). The performance of DeepSeeNet was compared with that of retinal specialists. RESULTS DeepSeeNet performed better on patient-based classification (accuracy = 0.671; kappa = 0.558) than retinal specialists (accuracy = 0.599; kappa = 0.467) with high AUC in the detection of large drusen (0.94), pigmentary abnormalities (0.93), and late AMD (0.97). DeepSeeNet also outperformed retinal specialists in the detection of large drusen (accuracy 0.742 vs. 0.696; kappa 0.601 vs. 0.517) and pigmentary abnormalities (accuracy 0.890 vs. 0.813; kappa 0.723 vs. 0.535) but showed lower performance in the detection of late AMD (accuracy 0.967 vs. 0.973; kappa 0.663 vs. 0.754).
CONCLUSIONS By simulating the human grading process, DeepSeeNet demonstrated high accuracy with increased transparency in the automated assignment of individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. These results highlight the potential of deep learning to assist and enhance clinical decision-making in patients with AMD, such as early AMD detection and risk prediction for developing late AMD. DeepSeeNet is publicly available on https://github.com/ncbi-nlp/DeepSeeNet.
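DeepSeeNet's two-stage design (per-eye risk-factor detection, then a patient-level score) maps naturally onto the published Simplified Severity Scale. A sketch of that scoring step only; the field names are my own, and the late-AMD and bilateral-intermediate-drusen rules follow Ferris et al.'s published scale rather than anything stated in this abstract:

```python
def simplified_severity_score(right_eye, left_eye):
    """Patient-based AREDS Simplified Severity Scale score (0-5).

    Each eye is a dict of booleans: 'large_drusen', 'pigment_abnormality',
    and optionally 'intermediate_drusen' and 'late_amd'.
    """
    eyes = (right_eye, left_eye)
    if any(e.get("late_amd", False) for e in eyes):
        return 5  # late AMD in either eye maps to the top category
    # One point per eye for large drusen, one per eye for pigmentary abnormality
    score = sum(e["large_drusen"] + e["pigment_abnormality"] for e in eyes)
    # Bilateral intermediate drusen without large drusen count as one risk factor
    if (not any(e["large_drusen"] for e in eyes)
            and all(e.get("intermediate_drusen", False) for e in eyes)):
        score += 1
    return score

no_risk = {"large_drusen": False, "pigment_abnormality": False}
full_risk = {"large_drusen": True, "pigment_abnormality": True}
print(simplified_severity_score(full_risk, full_risk))  # → 4
```

The deep learning contribution is in predicting the per-eye booleans from fundus photographs; the patient-level arithmetic itself is this small.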
Affiliation(s)
- Yifan Peng
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Shazia Dharssi
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland; National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Tiarnan D Keenan
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Elvira Agrón
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Wai T Wong
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Emily Y Chew
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
348
Abstract
Artificial intelligence (AI) is becoming ubiquitous in health care, largely through machine learning and predictive analytics applications. Recent applications of AI to common health care scenarios, such as screening and diagnosing, have fueled optimism about the use of advanced analytics to improve care. Careful and objective considerations need to be made before implementing an advanced analytics solution. Critical evaluation before, during, and after its implementation will ensure safe care, good outcomes, and the elimination of waste. In this commentary we offer basic practical considerations for developing, implementing, and evaluating such solutions based on many years of experience.
349
Valikodath N, Cole E, Chiang MF, Campbell JP, Chan RVP. Imaging in Retinopathy of Prematurity. Asia Pac J Ophthalmol (Phila) 2019; 8:178-186. [PMID: 31037876] [PMCID: PMC7891847] [DOI: 10.22608/apo.201963]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable childhood blindness worldwide. Barriers to ROP screening and difficulties with subsequent evaluation and management include poor access to care, lack of physicians trained in ROP, and issues with objective documentation. Digital retinal imaging can help address these barriers and improve our knowledge of the pathophysiology of the disease. Advancements in technology have led to new, non-mydriatic and mydriatic cameras with wider fields of view as well as devices that can simultaneously incorporate fluorescein angiography, optical coherence tomography (OCT), and OCT angiography. Image analysis in ROP is also being employed through smartphones and computer-based software. Telemedicine programs in the United States and worldwide have utilized imaging to extend ROP screening to infants in remote areas and have shown that digital retinal imaging can be reliable, accurate, and cost-effective. In addition, tele-education programs are also using digital retinal images to increase the number of healthcare providers trained in ROP. Although indirect ophthalmoscopy is still an important skill for screening, digital retinal imaging holds promise for more widespread screening and management of ROP.
Affiliation(s)
- N Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, United States; Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, United States
350
Ting DSW, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, Tan GSW, Schmetterer L, Keane PA, Wong TY. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019; 103:167-175. [PMID: 30361278] [PMCID: PMC6362807] [DOI: 10.1136/bjophthalmol-2018-313173]
Abstract
Artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest in recent years. DL has been widely adopted in image recognition, speech recognition and natural language processing, but is only beginning to impact on healthcare. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography and visual fields, achieving robust classification performance in the detection of diabetic retinopathy and retinopathy of prematurity, the glaucoma-like disc, macular oedema and age-related macular degeneration. DL in ocular imaging may be used in conjunction with telemedicine as a possible solution to screen, diagnose and monitor major eye diseases for patients in primary care and community settings. Nonetheless, there are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of the algorithm results, medicolegal issues, and physician and patient acceptance of the AI 'black-box' algorithms. DL could potentially revolutionise how ophthalmology is practised in the future. This review provides a summary of the state-of-the-art DL systems described for ophthalmic applications, potential challenges in clinical deployment and the path forward.
Affiliation(s)
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Louis R Pasquale
- Department of Ophthalmology, Mt Sinai Hospital, New York City, New York, USA
- Lily Peng
- Google AI Healthcare, Mountain View, California, USA
- John Peter Campbell
- Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, Washington, USA
- Rajiv Raman
- Vitreo-retinal Department, Sankara Nethralaya, Chennai, Tamil Nadu, India
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Pearse A Keane
- Vitreo-retinal Service, Moorfields Eye Hospital, London, UK
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore