1
Arunga S, Morley KE, Kwaga T, Morley MG, Nakayama LF, Mwavu R, Kaggwa F, Ssempiira J, Celi LA, Haberer JE, Obua C. Assessment of Clinical Metadata on the Accuracy of Retinal Fundus Image Labels in Diabetic Retinopathy in Uganda: Case-Crossover Study Using the Multimodal Database of Retinal Images in Africa. JMIR Form Res 2024;8:e59914. PMID: 39293049; PMCID: PMC11451581; DOI: 10.2196/59914.
Abstract
BACKGROUND Labeling color fundus photos (CFP) is an important step in the development of artificial intelligence screening algorithms for the detection of diabetic retinopathy (DR). Most studies use the International Classification of Diabetic Retinopathy (ICDR) to assign labels to CFP, plus the presence or absence of macular edema (ME). Images can be grouped as referable or nonreferable according to these classifications. There is little guidance in the literature about how to collect and use metadata as a part of the CFP labeling process. OBJECTIVE This study aimed to improve the quality of the Multimodal Database of Retinal Images in Africa (MoDRIA) by determining whether the availability of metadata during the image labeling process influences the accuracy, sensitivity, and specificity of image labels. MoDRIA was developed as one of the inaugural research projects of the Mbarara University Data Science Research Hub, part of the Data Science for Health Discovery and Innovation in Africa (DS-I Africa) initiative. METHODS This is a crossover assessment with 2 groups and 2 phases. Each group had 10 randomly assigned labelers who provided an ICDR score and the presence or absence of ME for each of the 50 CFP in a test image set, with and without metadata including blood pressure, visual acuity, glucose, and medical history. Sensitivity and specificity for referable retinopathy (based on ICDR scores) and for ME were calculated using a 2-sided t test. Sensitivity and specificity for ICDR scores and ME with and without metadata were compared for each participant using the Wilcoxon signed rank test. Statistical significance was set at P<.05. RESULTS The sensitivity for identifying referable DR with metadata was 92.8% (95% CI 87.6-98.0) compared with 93.3% (95% CI 87.6-98.9) without metadata, and the specificity was 84.9% (95% CI 75.1-94.6) with metadata compared with 88.2% (95% CI 79.5-96.8) without metadata. The sensitivity for identifying the presence of ME was 64.3% (95% CI 57.6-71.0) with metadata compared with 63.1% (95% CI 53.4-73.0) without metadata, and the specificity was 86.5% (95% CI 81.4-91.5) with metadata compared with 87.7% (95% CI 83.9-91.5) without metadata. The sensitivity and specificity of the ICDR score and the presence or absence of ME were calculated for each labeler with and without metadata. No findings were statistically significant. CONCLUSIONS The sensitivity and specificity scores for the detection of referable DR were slightly better without metadata, but the difference was not statistically significant. We cannot draw definitive conclusions about the impact of metadata on the sensitivity and specificity of image labels in our study. Given the importance of metadata in clinical situations, we believe that metadata may benefit labeling quality. A more rigorous study to determine the sensitivity and specificity of CFP labels with and without metadata is recommended.
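The per-labeler sensitivity and specificity reported above can be sketched in a few lines; the labels below are hypothetical and stand in for the study's actual data, which is not shown here.

```python
# Minimal sketch of the metrics described above, using hypothetical
# reference labels and labeler calls (not the study's data or code).
# Referable DR is treated as a binary flag derived from the ICDR grade.

def confusion_counts(truth, pred):
    # truth/pred: lists of 0/1 flags for referable DR on each image
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    return tp, fn, tn, fp

def sensitivity_specificity(truth, pred):
    tp, fn, tn, fp = confusion_counts(truth, pred)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reference labels and one labeler's calls on 10 images:
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
pred  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, pred)

# The study then compares each labeler's paired with/without-metadata
# values across the 10 labelers, e.g. with scipy.stats.wilcoxon.
```

The Wilcoxon signed-rank comparison operates on the paired per-labeler values, which is why the abstract reports one test per metric rather than per image.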
Affiliation(s)
- Simon Arunga
- Department of Ophthalmology, Mbarara University of Science and Technology, Mbarara, Uganda
- Katharine Elise Morley
- Massachusetts General Hospital Center for Global Health, Department of Medicine, Harvard Medical School, Boston, MA, United States
- Teddy Kwaga
- Department of Ophthalmology, Mbarara University of Science and Technology, Mbarara, Uganda
- Michael Gerard Morley
- Harvard Ophthalmology AI Lab, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States
- Luis Filipe Nakayama
- Ophthalmology Department, Sao Paulo Federal University, Sao Paulo, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Rogers Mwavu
- Faculty of Computing and Informatics, Department of Information Technology, Mbarara University of Science and Technology, Mbarara, Uganda
- Fred Kaggwa
- Faculty of Computing and Informatics, Department of Computer Science, Mbarara University of Science and Technology, Mbarara, Uganda
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Jessica E Haberer
- Massachusetts General Hospital Center for Global Health, Department of Medicine, Harvard Medical School, Boston, MA, United States
- Celestino Obua
- Department of Pharmacology, Mbarara University of Science and Technology, Mbarara, Uganda
2
Rao DP, Savoy FM, Sivaraman A, Dutt S, Shahsuvaryan M, Jrbashyan N, Hambardzumyan N, Yeghiazaryan N, Das T. Evaluation of an AI algorithm trained on an ethnically diverse dataset to screen a previously unseen population for diabetic retinopathy. Indian J Ophthalmol 2024;72:1162-1167. PMID: 39078960; PMCID: PMC11451790; DOI: 10.4103/ijo.ijo_2151_23.
Abstract
PURPOSE This study aimed to determine the generalizability of an artificial intelligence (AI) algorithm trained on an ethnically diverse dataset to screen for referable diabetic retinopathy (RDR) in the Armenian population, which was unseen during AI development. METHODS This study comprised 550 patients with diabetes mellitus who required diabetic retinopathy (DR) screening and visited polyclinics in Armenia over 10 months. The Medios AI-DR algorithm was developed using a robust, diverse, ethnically balanced dataset with no inherent bias and deployed offline on a smartphone-based fundus camera. The algorithm analyzed the retinal images captured with the target device for the presence of RDR (i.e., moderate non-proliferative diabetic retinopathy [NPDR] and/or clinically significant diabetic macular edema [CSDME], or more severe disease) and sight-threatening DR (STDR; i.e., severe NPDR and/or CSDME, or more severe disease). The AI output was compared with a consensus or majority image grading by three expert graders according to the International Clinical Diabetic Retinopathy severity scale. RESULTS In the 478 subjects included in the analysis, the algorithm achieved a high classification sensitivity of 95.30% (95% CI: 91.9%-98.7%) and a specificity of 83.89% (95% CI: 79.9%-87.9%) for the detection of RDR. The sensitivity for STDR detection was 100%. CONCLUSION The study showed that the Medios AI-DR algorithm yields good accuracy in screening for RDR in the Armenian population. In our literature search, this is the only smartphone-based, offline AI model validated in different populations.
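Sensitivity estimates like the one above are usually reported with a 95% CI. As an illustration only, here is the normal-approximation interval; the study does not state which CI method it used, and the counts below are hypothetical.

```python
# Hypothetical sketch: 95% CI for sensitivity via the normal approximation.
# This is an assumption for illustration; the study's CI method is unstated.
import math

def sensitivity_with_ci(tp, fn, z=1.96):
    n = tp + fn                      # number of truly positive cases
    p = tp / n                       # point estimate of sensitivity
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 95 of 100 RDR cases flagged by the algorithm.
sens, lo, hi = sensitivity_with_ci(tp=95, fn=5)
```

For proportions near 0 or 1 (such as the 100% STDR sensitivity above), interval methods like Wilson or Clopper-Pearson behave better than this simple approximation.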
Affiliation(s)
- Divya P Rao
- AI&ML, Remidio Innovative Solutions, Inc, Glen Allen, USA
- Florian M Savoy
- AI&ML, Medios Technologies Pte Ltd, Remidio Innovative Solutions, Singapore
- Anand Sivaraman
- AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
- Sreetama Dutt
- AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
- Marianne Shahsuvaryan
- Ophthalmology, Yerevan State Medical University, Armenia
- Armenian Eyecare Project, Yerevan State University, Armenia
- Taraprasad Das
- Vitreoretinal Services, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, India
3
Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024;38:426-433. PMID: 37667028; PMCID: PMC10858054; DOI: 10.1038/s41433-023-02717-3.
Abstract
This study aimed to evaluate the image quality assessment (IQA) practices and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search strategy was used to identify relevant datasets, and 20 datasets were included in the analysis. Of these, 12 datasets mentioned performing IQA, but only eight specified the quality criteria used. The reported quality criteria varied widely across datasets, and accessing the information was often challenging. Given the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, this information should be reported in a clear, specific, and accessible way whenever possible. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose. At the same time, strict data quality standards must not be allowed to limit data sharing.
Affiliation(s)
- Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Stanford University Byers Eye Institute, Palo Alto, CA, USA
- Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
4
Korot E, Gonçalves MB, Huemer J, Beqiri S, Khalid H, Kelly M, Chia M, Mathijs E, Struyven R, Moussa M, Keane PA. Clinician-Driven AI: Code-Free Self-Training on Public Data for Diabetic Retinopathy Referral. JAMA Ophthalmol 2023;141:1029-1036. PMID: 37856110; PMCID: PMC10587830; DOI: 10.1001/jamaophthalmol.2023.4508.
Abstract
Importance Democratizing artificial intelligence (AI) enables model development by clinicians who lack coding expertise, powerful computing resources, and large, well-labeled data sets. Objective To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models. Design, Setting, and Participants This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021. Exposures Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the teacher model generated predictions, termed pseudolabels, for an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images. Main Outcomes and Measures The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score. The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis. Results For the internal validation data sets, AUROC values ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. On external validation, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively. Conclusions and Relevance These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.
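The teacher/student loop described under Exposures can be sketched generically. The toy one-dimensional threshold "model" below is purely illustrative (the study used code-free automated ML, not this code), but the three self-training steps are the same.

```python
# Generic self-training sketch of the teacher/student scheme described
# above, with a toy 1-D threshold "model" (purely illustrative; the study
# used code-free automated ML, not this code).

def train_threshold_model(xs, ys):
    # Pick the threshold that best separates class 0 from class 1.
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(t, xs):
    return [int(x >= t) for x in xs]

labeled_x, labeled_y = [0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]
unlabeled_x = [0.15, 0.85, 0.95, 0.05]

# 1) Train the teacher on labeled data.
teacher = train_threshold_model(labeled_x, labeled_y)
# 2) Pseudolabel the unlabeled data with the teacher's predictions.
pseudo_y = predict(teacher, unlabeled_x)
# 3) Train the student on labeled + pseudolabeled data.
student = train_threshold_model(labeled_x + unlabeled_x,
                                labeled_y + pseudo_y)
```

The student sees more (pseudolabeled) examples than the teacher did, which is the mechanism the abstract credits for the gain in performance and generalizability.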
Affiliation(s)
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids
- Moorfields Eye Hospital, London, United Kingdom
- Stanford University Byers Eye Institute, Palo Alto, California
- Mariana Batista Gonçalves
- Moorfields Eye Hospital, London, United Kingdom
- Federal University of Sao Paulo, Sao Paulo, Brazil
- Instituto da Visão, Sao Paulo, Brazil
- Sara Beqiri
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- Hagar Khalid
- Moorfields Eye Hospital, London, United Kingdom
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
- Madeline Kelly
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- UCL Centre for Medical Image Computing, London, United Kingdom
- Mark Chia
- Moorfields Eye Hospital, London, United Kingdom
- Emily Mathijs
- Michigan State University College of Osteopathic Medicine, East Lansing
- Magdy Moussa
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
5
Oganov AC, Seddon I, Jabbehdari S, Uner OE, Fonoudi H, Yazdanpanah G, Outani O, Arevalo JF. Artificial intelligence in retinal image analysis: Development, advances, and challenges. Surv Ophthalmol 2023;68:905-919. PMID: 37116544; DOI: 10.1016/j.survophthal.2023.04.001.
Abstract
Modern advances in diagnostic technologies offer the potential for unprecedented insight into ophthalmic conditions relating to the retina. We discuss the current landscape of artificial intelligence in retina with respect to screening, diagnosis, and monitoring of retinal pathologies such as diabetic retinopathy, diabetic macular edema, central serous chorioretinopathy, and age-related macular degeneration. We review the methods used in these models, evaluate their performance in both research and clinical contexts, and discuss potential future directions for investigation, the use of multiple imaging modalities in artificial intelligence algorithms, and challenges in applying artificial intelligence to retinal pathologies.
Affiliation(s)
- Anthony C Oganov
- Department of Ophthalmology, Renaissance School of Medicine, Stony Brook, NY, USA
- Ian Seddon
- College of Osteopathic Medicine, Nova Southeastern University, Fort Lauderdale, FL, USA
- Sayena Jabbehdari
- Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Ogul E Uner
- Casey Eye Institute, Department of Ophthalmology, Oregon Health and Science University, Portland, OR, USA
- Hossein Fonoudi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Iranshahr University of Medical Sciences, Iranshahr, Sistan and Baluchestan, Iran
- Ghasem Yazdanpanah
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Oumaima Outani
- Faculty of Medicine and Pharmacy of Rabat, Mohammed 5 University, Rabat, Morocco
- J Fernando Arevalo
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
6
Wang Z, Li Z, Li K, Mu S, Zhou X, Di Y. Performance of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis of prospective studies. Front Endocrinol (Lausanne) 2023;14:1197783. PMID: 37383397; PMCID: PMC10296189; DOI: 10.3389/fendo.2023.1197783.
Abstract
Aims To systematically evaluate the diagnostic value of artificial intelligence (AI) models for various types of diabetic retinopathy (DR) in prospective studies from the previous five years, and to explore the factors affecting their diagnostic effectiveness. Materials and methods A search was conducted in the Cochrane Library, Embase, Web of Science, PubMed, and IEEE databases to collect prospective studies on AI models for the diagnosis of DR from January 2017 to December 2022. We used QUADAS-2 to evaluate the risk of bias in the included studies. Meta-analysis was performed using MetaDiSc and STATA 14.0 software to calculate the combined sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio for various types of DR. Diagnostic odds ratios, summary receiver operating characteristic (SROC) plots, coupled forest plots, and subgroup analyses were performed according to DR category, patient source, region of study, and quality of literature, image, and algorithm. Results Twenty-one studies were included. Meta-analysis showed that the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, area under the curve, Cochrane Q index, and diagnostic odds ratio of AI models for the diagnosis of DR were 0.880 (0.875-0.884), 0.912 (0.99-0.913), 13.021 (10.738-15.789), 0.083 (0.061-0.112), 0.9798, 0.9388, and 206.80 (124.82-342.63), respectively. DR category, patient source, region of study, sample size, and quality of literature, image, and algorithm may affect the diagnostic efficiency of AI for DR. Conclusion AI models have clear diagnostic value for DR, but this value is influenced by many factors that deserve further study. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023389687.
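For a single study, the likelihood ratios and diagnostic odds ratio reported above are simple functions of sensitivity and specificity; the pooled meta-analytic values are combined across studies, so the identity need not hold exactly for them. A sketch with hypothetical single-study numbers:

```python
# How LR+, LR-, and the diagnostic odds ratio (DOR) relate to sensitivity
# and specificity for one study; the numbers below are hypothetical.

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)      # positive likelihood ratio
    lr_neg = (1 - sens) / spec      # negative likelihood ratio
    return lr_pos, lr_neg

def diagnostic_odds_ratio(sens, spec):
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    return lr_pos / lr_neg          # DOR = LR+ / LR-

sens, spec = 0.90, 0.92             # hypothetical single-study values
lr_pos, lr_neg = likelihood_ratios(sens, spec)
dor = diagnostic_odds_ratio(sens, spec)
```

A DOR in the hundreds, as reported above, therefore implies both a high LR+ and a very low LR-.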
7
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022;4:e899-e905. PMID: 36427951; DOI: 10.1016/s2589-7500(22)00186-8.
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, and Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five affect population, one intervention, two comparator, one reference standard, and one both reference standard and comparator; four are applicable to outcome and one to study design. Principles from the literature were useful for addressing biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh
- King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan
- Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker
- Department of Computing, Imperial College London, London, UK
- Peter Garrett
- Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston
- Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
8
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Adv Ophthalmol Pract Res 2022;2:100078. PMID: 37846285; PMCID: PMC10577833; DOI: 10.1016/j.aopr.2022.100078.
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data have made deep learning (DL) a popular topic. Main text At present, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making, targeting ophthalmic diseases such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract, and other anterior segment diseases. However, most of the AI systems developed to date are still in the experimental stage, with only a few having achieved clinical application. There are a number of reasons for this, including concerns about security, privacy, poor pervasiveness, trust, and explainability. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing the potential challenges and future directions.
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
9
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022;90:101034. PMID: 34902546; DOI: 10.1016/j.preteyeres.2021.101034.
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder. There is a pressing necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands.
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
10
Jeong S, Fischer ML, Breunig H, Marklein AR, Hopkins FM, Biraud SC. Artificial Intelligence Approach for Estimating Dairy Methane Emissions. Environ Sci Technol 2022;56:4849-4858. PMID: 35363471; DOI: 10.1021/acs.est.1c08802.
Abstract
California's dairy sector accounts for ∼50% of anthropogenic CH4 emissions in the state's greenhouse gas (GHG) emission inventory. Although California dairy facilities' location and herd size vary over time, atmospheric inverse modeling studies rely on decade-old facility-scale geospatial information. For the first time, we apply artificial intelligence (AI) to aerial imagery to estimate dairy CH4 emissions from California's San Joaquin Valley (SJV), a region with ∼90% of the state's dairy population. Using an AI method, we process 316,882 images to estimate the facility-scale herd size across the SJV. The AI approach predicts herd size that strongly (>95%) correlates with that made by human visual inspection, providing a low-cost alternative to the labor-intensive inventory development process. We estimate SJV's dairy enteric and manure CH4 emissions for 2018 to be 496-763 Gg/yr (mean = 624; 95% confidence) using the predicted herd size. We also apply our AI approach to estimate CH4 emission reduction from anaerobic digester deployment. We identify 162 large (90th percentile) farms and estimate a CH4 reduction potential of 83 Gg CH4/yr for these large facilities from anaerobic digester adoption. The results indicate that our AI approach can be applied to characterize the manure system (e.g., use of an anaerobic lagoon) and estimate GHG emissions for other sectors.
Affiliation(s)
- Seongeun Jeong
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Marc L Fischer
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Hanna Breunig
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
- Alison R Marklein
- University of California, Riverside, 900 University Avenue, Riverside, California 92521, United States
- Francesca M Hopkins
- University of California, Riverside, 900 University Avenue, Riverside, California 92521, United States
- Sebastien C Biraud
- Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, United States
11
Li J, Wang L, Gao Y, Liang Q, Chen L, Sun X, Yang H, Zhao Z, Meng L, Xue S, Du Q, Zhang Z, Lv C, Xu H, Guo Z, Xie G, Xie L. Automated detection of myopic maculopathy from color fundus photographs using deep convolutional neural networks. EYE AND VISION (LONDON, ENGLAND) 2022; 9:13. [PMID: 35361278 PMCID: PMC8973805 DOI: 10.1186/s40662-022-00285-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 03/09/2022] [Indexed: 02/07/2023]
Abstract
BACKGROUND Myopic maculopathy (MM) has become a major cause of visual impairment and blindness worldwide, especially in East Asian countries. Deep learning approaches such as deep convolutional neural networks (DCNN) have been successfully applied to identify some common retinal diseases and show great potential for the intelligent analysis of MM. This study aimed to build a reliable approach for automated detection of MM from retinal fundus images using DCNN models. METHODS A dual-stream DCNN (DCNN-DS) model that perceives features from both original images and corresponding images processed by a color histogram distribution optimization method was designed for classification of no MM, tessellated fundus (TF), and pathologic myopia (PM). A total of 36,515 gradable images from four hospitals were used for DCNN model development, and 14,986 gradable images from two other hospitals for external testing. We also compared the performance of the DCNN-DS model and four ophthalmologists on 3000 randomly sampled fundus images. RESULTS In the two external testing datasets, the DCNN-DS model achieved sensitivities of 93.3% and 91.0%, specificities of 99.6% and 98.7%, and areas under the receiver operating characteristic curve (AUC) of 0.998 and 0.994 for detecting PM, and sensitivities of 98.8% and 92.8%, specificities of 95.6% and 94.1%, and AUCs of 0.986 and 0.970 for detecting TF. In the sampled testing dataset, the sensitivities of the four ophthalmologists ranged from 88.3% to 95.8% and 81.1% to 89.1%, and their specificities ranged from 95.9% to 99.2% and 77.8% to 97.3%, for detecting PM and TF, respectively. On the same dataset, the DCNN-DS model achieved sensitivities of 90.8% and 97.9% and specificities of 99.1% and 94.0% for detecting PM and TF, respectively. CONCLUSIONS The proposed DCNN-DS approach demonstrated reliable performance, with high sensitivity, specificity, and AUC, in classifying different MM levels on fundus photographs sourced from clinics. It can help identify MM automatically in large myopic populations and shows great potential for real-life applications.
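The sensitivities and specificities reported above are one-vs-rest statistics for a single grade (e.g., PM) against all other images. A small self-contained sketch of that computation, with made-up labels:

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """One-vs-rest sensitivity and specificity for class `positive`."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels over the three grades used in the study: no MM, TF, PM.
truth = ["PM", "PM", "TF", "no MM", "TF", "no MM"]
preds = ["PM", "TF", "TF", "no MM", "TF", "PM"]
sens, spec = sensitivity_specificity(truth, preds, positive="PM")
```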
Affiliation(s)
- Jun Li
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Lilong Wang
- Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Yan Gao
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Qianqian Liang
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Lingzhi Chen
- Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Xiaolei Sun
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China; Shandong Eye Hospital of Shandong First Medical University, Jinan, 250021, China
- Lina Meng
- Qilu Hospital of Shandong University (Qingdao), Qingdao, 266035, China
- Shuyue Xue
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Qing Du
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Zhichun Zhang
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Chuanfeng Lv
- Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Haifeng Xu
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Zhen Guo
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Guotong Xie
- Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China; Ping An Healthcare and Technology Company Limited, Shanghai, 200030, China; Ping An International Smart City Technology Company Limited, Shenzhen, 518000, China
- Lixin Xie
- Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China; State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
12
Li Z, Qiang W, Chen H, Pei M, Yu X, Wang L, Li Z, Xie W, Wu X, Jiang J, Wu G. Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit Med 2022; 5:23. [PMID: 35236921 PMCID: PMC8891262 DOI: 10.1038/s41746-022-00571-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/04/2022] [Indexed: 11/23/2022] Open
Abstract
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China.
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Hongyun Chen
- Zunyi First People's Hospital, Zunyi Medical University, Zunyi, 563000, China
- Mengjie Pei
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Xiaomei Yu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Layi Wang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Zhen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Weiwei Xie
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guizhou, 550002, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Guohai Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
13
Lo JE, Kang EYC, Chen YN, Hsieh YT, Wang NK, Chen TC, Chen KJ, Wu WC, Hwang YS, Lo FS, Lai CC. Data Homogeneity Effect in Deep Learning-Based Prediction of Type 1 Diabetic Retinopathy. J Diabetes Res 2021; 2021:2751695. [PMID: 35071603 PMCID: PMC8776492 DOI: 10.1155/2021/2751695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 10/27/2021] [Accepted: 11/25/2021] [Indexed: 12/05/2022] Open
Abstract
This study aimed to evaluate a deep transfer learning-based model for identifying diabetic retinopathy (DR) that was trained using a dataset with high variability and predominant type 2 diabetes (T2D), and to compare model performance with that in patients with type 1 diabetes (T1D). The publicly available Kaggle dataset was divided into training and testing Kaggle datasets. For the comparison dataset, we collected retinal fundus images of T1D patients at Chang Gung Memorial Hospital in Taiwan from 2013 to 2020, and the images were divided into training and testing T1D datasets. The model was developed using 4 different convolutional neural networks (Inception-V3, DenseNet-121, VGG1, and Xception). Model performance in predicting DR was evaluated using testing images from each dataset, and the area under the curve (AUC), sensitivity, and specificity were calculated. The model trained using the Kaggle dataset had an average (range) AUC of 0.74 (0.03) and 0.87 (0.01) in the testing Kaggle and T1D datasets, respectively. The model trained using the T1D dataset had an AUC of 0.88 (0.03), which decreased to 0.57 (0.02) in the testing Kaggle dataset. Heatmaps showed that the model focused on retinal hemorrhage, vessels, and exudation to predict DR. In incorrectly predicted images, artifacts and low image quality affected model performance. The model developed with the highly variable, T2D-predominant dataset could be applied to T1D patients. Dataset homogeneity could affect the performance, trainability, and generalization of the model.
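The AUC values compared across datasets above have a simple rank interpretation: the probability that a randomly chosen DR image receives a higher model score than a randomly chosen non-DR image. A pure-Python illustration of that Mann-Whitney form, with toy scores (not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg) over all positive/negative pairs.

    Ties count as half a win; equivalent to the trapezoidal ROC AUC.
    """
    wins = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy model scores: higher means "more likely DR".
auc = auc_from_scores([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```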
Affiliation(s)
- Jui-En Lo
- School of Medicine, National Taiwan University College of Medicine, Taipei 106, Taiwan
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan
- Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Graduate Institute of Clinical Medical Sciences, Chang Gung University, Taoyuan 333, Taiwan
- Yun-Nung Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei 100, Taiwan
- Nan-Kai Wang
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University, New York, New York 10032, USA
- Ta-Ching Chen
- Department of Ophthalmology, National Taiwan University Hospital, Taipei 100, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei 106, Taiwan
- Kuan-Jen Chen
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Yih-Shiou Hwang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Department of Ophthalmology, Chang Gung Memorial Hospital, Xiamen 361028, China
- Department of Ophthalmology, Jen-Ai Hospital Dali Branch, Taichung 400, Taiwan
- Fu-Sung Lo
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Division of Pediatric Endocrinology and Genetics, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- Chi-Chun Lai
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Department of Ophthalmology, Chang Gung Memorial Hospital, Keelung 204, Taiwan
14
Yang Y, Guo XM, Wang H, Zheng YN. Deep Learning-Based Heart Sound Analysis for Left Ventricular Diastolic Dysfunction Diagnosis. Diagnostics (Basel) 2021; 11:2349. [PMID: 34943586 PMCID: PMC8699866 DOI: 10.3390/diagnostics11122349] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/06/2021] [Accepted: 12/10/2021] [Indexed: 11/20/2022] Open
Abstract
The aggravation of left ventricular diastolic dysfunction (LVDD) can lead to ventricular remodeling, wall stiffness, reduced compliance, and progression to heart failure with a preserved ejection fraction. In this paper, a non-invasive method based on convolutional neural networks (CNN) and heart sounds (HS) is presented for the early diagnosis of LVDD. A deep convolutional generative adversarial network (DCGAN) model-based data augmentation (DA) method was proposed to expand an HS database of LVDD for model training. Firstly, the HS signals were preprocessed using an improved wavelet denoising method. Secondly, a logistic regression-based hidden semi-Markov model was used to segment the HS signals, which were subsequently converted into spectrograms for DA using the short-time Fourier transform (STFT). Finally, the proposed method was compared with VGG-16, VGG-19, ResNet-18, ResNet-50, DenseNet-121, and AlexNet in terms of performance for LVDD diagnosis. The results show that the proposed method achieves an accuracy of 0.987, a sensitivity of 0.986, and a specificity of 0.988, which demonstrates the effectiveness of HS analysis for the early diagnosis of LVDD and shows that the DCGAN-based DA method can effectively augment HS data.
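The spectrogram step converts each segmented heart-sound frame into per-frequency magnitudes via the STFT. A naive pure-Python sketch of that transform (rectangular window, no windowing refinements; a real pipeline would use numpy/scipy):

```python
import cmath
import math

def stft_magnitude(signal, frame_len, hop):
    """Magnitude spectrogram: per-frame DFT magnitudes for k = 0..frame_len//2."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2 + 1):  # non-negative frequency bins
            acc = sum(x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                      for n, x in enumerate(frame))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# A pure tone at DFT bin 1 produces a single spectral peak in that bin.
demo_signal = [math.sin(2 * math.pi * n / 8) for n in range(8)]
spec = stft_magnitude(demo_signal, frame_len=8, hop=8)
```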
Affiliation(s)
- Yang Yang
- Key Laboratory of Biorheology Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing 400044, China
- Xing-Ming Guo
- Key Laboratory of Biorheology Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing 400044, China
- Hui Wang
- Key Laboratory of Biorheology Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing 400044, China
- Yi-Neng Zheng
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing 400016, China
15
Koyama A, Miyazaki D, Nakagawa Y, Ayatsuka Y, Miyake H, Ehara F, Sasaki SI, Shimizu Y, Inoue Y. Determination of probability of causative pathogen in infectious keratitis using deep learning algorithm of slit-lamp images. Sci Rep 2021; 11:22642. [PMID: 34811468 PMCID: PMC8608802 DOI: 10.1038/s41598-021-02138-w] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 11/02/2021] [Indexed: 11/09/2022] Open
Abstract
Corneal opacities are important causes of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the “face” of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To capture diverse features and mitigate uncertainty, batches of probability scores from 4 serial images, taken from multiple angles or with fluorescence staining, were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images, including 312 images obtained from internet publications, covering keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV), were studied. By group K-fold validation, the created algorithm had a high overall diagnostic accuracy, e.g., accuracy/area under the curve of 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and it was robust even to low-resolution web images. We suggest that our hybrid deep learning-based algorithm be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.
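The fusion idea, pooling per-image probability scores for the same eye before deciding, can be sketched with simple mean fusion. The paper fuses with a gradient boosting decision tree; the averaging below is an illustrative stand-in, and the score vectors are hypothetical:

```python
PATHOGENS = ["bacteria", "fungi", "acanthamoeba", "HSV"]

def fuse_serial_images(per_image_probs):
    """Mean score-level fusion over serial images of one eye.

    per_image_probs: one probability vector per image, in PATHOGENS order.
    Returns the fused label and the averaged score vector.
    """
    n = len(per_image_probs)
    fused = [sum(p[k] for p in per_image_probs) / n
             for k in range(len(PATHOGENS))]
    best = max(range(len(PATHOGENS)), key=fused.__getitem__)
    return PATHOGENS[best], fused

# Four hypothetical per-image model outputs for one eye.
scores = [[0.6, 0.2, 0.1, 0.1],
          [0.5, 0.3, 0.1, 0.1],
          [0.4, 0.4, 0.1, 0.1],
          [0.7, 0.1, 0.1, 0.1]]
label, fused = fuse_serial_images(scores)
```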
Affiliation(s)
- Ayumi Koyama
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Dai Miyazaki
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Hitomi Miyake
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Fumie Ehara
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Shin-Ichi Sasaki
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Yumiko Shimizu
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
- Yoshitsugu Inoue
- Department of Ophthalmology, Tottori University, 36-1 Nishicho, Yonago, Tottori, 683-8504, Japan
16
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658 DOI: 10.1042/cs20210207] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/14/2021] [Accepted: 09/29/2021] [Indexed: 12/13/2022]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage, and only a handful have achieved clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge requires a combination of approaches. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide DLS development.
17
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, which was developed using two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
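A pre-diagnosis module like the one described would typically sit as a gate in front of the disease classifier: reject ungradable photographs, then route the rest by field of view and laterality. A toy sketch of that gating logic; the 0.5 threshold and output layout are assumptions for illustration, not the paper's implementation:

```python
def triage_photo(p_gradable, p_macula_centered, p_right_eye, threshold=0.5):
    """Toy pre-diagnosis gate combining the module's three binary outputs."""
    if p_gradable < threshold:
        # Ungradable images never reach the downstream disease classifier.
        return {"accepted": False, "reason": "ungradable"}
    return {
        "accepted": True,
        "field": ("macula-centered" if p_macula_centered >= threshold
                  else "optic-disc-centered"),
        "eye": "right" if p_right_eye >= threshold else "left",
    }

routed = triage_photo(0.9, 0.8, 0.2)   # gradable, macula-centered, left eye
rejected = triage_photo(0.3, 0.9, 0.9)  # fails the quality gate
```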
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
18
Gunasekeran DV, Tham YC, Ting DSW, Tan GSW, Wong TY. Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. LANCET DIGITAL HEALTH 2021; 3:e124-e134. [PMID: 33509383 DOI: 10.1016/s2589-7500(20)30287-9] [Citation(s) in RCA: 74] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 11/11/2020] [Accepted: 11/18/2020] [Indexed: 12/13/2022]
Abstract
The COVID-19 pandemic has resulted in massive disruptions within health care, both directly as a result of the infectious disease outbreak, and indirectly because of public health measures to mitigate against transmission. This disruption has caused rapid dynamic fluctuations in demand, capacity, and even contextual aspects of health care. Therefore, the traditional face-to-face patient-physician care model has had to be re-examined in many countries, with digital technology and new models of care being rapidly deployed to meet the various challenges of the pandemic. This Viewpoint highlights new models in ophthalmology that have adapted to incorporate digital health solutions such as telehealth, artificial intelligence decision support for triaging and clinical care, and home monitoring. These models can be operationalised for different clinical applications based on the technology, clinical need, demand from patients, and manpower availability, ranging from out-of-hospital models including the hub-and-spoke pre-hospital model, to front-line models such as the inflow funnel model and monitoring models such as the so-called lighthouse model for provider-led monitoring. Lessons learnt from operationalising these models for ophthalmology in the context of COVID-19 are discussed, along with their relevance for other specialty domains.
Affiliation(s)
- Dinesh V Gunasekeran
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Duke-NUS Medical School, Singapore
19
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput 2021; 59:401-415. [PMID: 33492598 PMCID: PMC7829497 DOI: 10.1007/s11517-021-02321-1] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 01/15/2021] [Indexed: 01/16/2023]
Abstract
Deep learning (DL) has been successfully applied to the diagnosis of ophthalmic diseases. However, rare diseases are commonly neglected due to insufficient data. Here, we demonstrate that few-shot learning (FSL) using a generative adversarial network (GAN) can improve the applicability of DL in the optical coherence tomography (OCT) diagnosis of rare diseases. Four major classes with a large number of datasets and five rare disease classes with a few-shot dataset are included in this study. Before training the classifier, we constructed GAN models to generate pathological OCT images of each rare disease from normal OCT images. The Inception-v3 architecture was trained using an augmented training dataset, and the final model was validated using an independent test dataset. The synthetic images helped in the extraction of the characteristic features of each rare disease. The proposed DL model demonstrated a significant improvement in the accuracy of the OCT diagnosis of rare retinal diseases and outperformed the traditional DL models, Siamese network, and prototypical network. By increasing the accuracy of diagnosing rare retinal diseases through FSL, clinicians can avoid neglecting rare diseases with DL assistance, thereby reducing diagnosis delay and patient burden.
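The prototypical network this abstract benchmarks against classifies a query embedding by its nearest class prototype, the mean of that class's few support embeddings. A minimal sketch with toy 2-D embeddings standing in for OCT image features (the labels and vectors are invented for illustration):

```python
def prototype(support_embeddings):
    """Class prototype: the mean of the few-shot support embeddings."""
    dim = len(support_embeddings[0])
    n = len(support_embeddings)
    return [sum(v[i] for v in support_embeddings) / n for i in range(dim)]

def nearest_prototype(query, prototypes):
    """Label of the prototype closest to the query (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist2(query, prototypes[label]))

# Two classes, two support examples each (hypothetical embeddings).
protos = {
    "rare disease": prototype([[1.0, 1.0], [1.0, 0.0]]),
    "normal": prototype([[-1.0, 0.0], [-1.0, 1.0]]),
}
pred = nearest_prototype([0.8, 0.4], protos)
```

Because the prototype is just a mean, adding GAN-synthesized examples to the support set (as the paper proposes) changes only the averaged vectors, not the classification rule.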
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Sangdang-gu, Cheongju, South Korea.
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA.
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea.
20
Artificial intelligence for diabetic retinopathy screening, prediction and management. Curr Opin Ophthalmol 2020; 31:357-365. [PMID: 32740069 DOI: 10.1097/icu.0000000000000693] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Indexed: 12/19/2022]
Abstract
PURPOSE OF REVIEW Diabetic retinopathy is the most common specific complication of diabetes mellitus. Traditional care for patients with diabetes and diabetic retinopathy is fragmented, uncoordinated and delivered in a piecemeal fashion, often in the most expensive, high-resource tertiary settings. Transformative new models incorporating digital technology are needed to address these gaps in clinical care. RECENT FINDINGS Artificial intelligence and telehealth may improve the access, financial sustainability and coverage of diabetic retinopathy screening programs. They enable risk stratification of patients based on individual risk of vision-threatening diabetic retinopathy, including diabetic macular edema (DME), and prediction of which patients with DME respond best to antivascular endothelial growth factor therapy. SUMMARY Progress in artificial intelligence and tele-ophthalmology for diabetic retinopathy screening, including artificial intelligence applications in 'real-world settings' and cost-effectiveness studies, is summarized. Furthermore, initial research on the use of artificial intelligence models for diabetic retinopathy risk stratification and management of DME is outlined, along with potential future directions. Finally, the need for artificial intelligence adoption within ophthalmology in response to coronavirus disease 2019 is discussed. Digital health solutions such as artificial intelligence and telehealth can facilitate the integration of community, primary and specialist eye care services, optimize the flow of patients within healthcare networks, and improve the efficiency of diabetic retinopathy management.