1
Li Z, Xie H, Wang Z, Li D, Chen K, Zong X, Qiang W, Wen F, Deng Z, Chen L, Li H, Dong H, Wu P, Sun T, Cheng Y, Yang Y, Xue J, Zheng Q, Jiang J, Chen W. Deep learning for multi-type infectious keratitis diagnosis: A nationwide, cross-sectional, multicenter study. NPJ Digit Med 2024; 7:181. [PMID: 38971902] [PMCID: PMC11227533] [DOI: 10.1038/s41746-024-01174-w]
Abstract
The leading cause of corneal blindness worldwide is keratitis, especially its infectious forms caused by bacteria, fungi, viruses, and Acanthamoeba. Effective management of infectious keratitis hinges on prompt and precise diagnosis. Nevertheless, the current gold standard, culture of corneal scrapings, remains time-consuming and frequently yields false-negative results. Here, using 23,055 slit-lamp images collected from 12 clinical centers nationwide, this study constructed a clinically feasible deep learning system, DeepIK, that emulates the diagnostic process of a human expert to identify and differentiate bacterial, fungal, viral, amebic, and noninfectious keratitis. DeepIK exhibited remarkable performance on the internal, external, and prospective datasets (all areas under the receiver operating characteristic curve > 0.96) and outperformed three other state-of-the-art algorithms (DenseNet121, InceptionResNetV2, and Swin-Transformer). Our study indicates that DeepIK can assist ophthalmologists in accurately and swiftly identifying the various types of infectious keratitis from slit-lamp images, thereby facilitating timely and targeted treatment.
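The per-class discrimination reported above (one-vs-rest area under the receiver operating characteristic curve for a five-class classifier) can be computed directly from predicted probabilities. The sketch below is illustrative only, not the authors' code: the class names match the abstract, but the labels and probabilities are invented toy data, and the AUROC is computed with the pairwise-ranking (Mann-Whitney) formulation.

```python
# Illustrative sketch: one-vs-rest AUROC for a multi-class classifier
# such as DeepIK, from predicted class probabilities (toy data below).

def auroc(labels, scores):
    """AUROC for binary labels (1 = positive) via pairwise ranking."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auroc(y_true, probs, classes):
    """Per-class AUROC: each class scored against all the others."""
    return {
        c: auroc([1 if y == c else 0 for y in y_true],
                 [p[i] for p in probs])
        for i, c in enumerate(classes)
    }

# Toy example: 5 keratitis classes, hypothetical predicted probabilities.
classes = ["bacterial", "fungal", "viral", "amebic", "noninfectious"]
y_true = ["bacterial", "fungal", "viral", "amebic", "noninfectious", "bacterial"]
probs = [
    [0.70, 0.10, 0.10, 0.05, 0.05],
    [0.10, 0.60, 0.10, 0.10, 0.10],
    [0.05, 0.10, 0.70, 0.10, 0.05],
    [0.10, 0.10, 0.10, 0.60, 0.10],
    [0.10, 0.10, 0.10, 0.10, 0.60],
    [0.50, 0.20, 0.10, 0.10, 0.10],
]
per_class = one_vs_rest_auroc(y_true, probs, classes)
```

In practice such metrics are usually computed with a library (e.g. scikit-learn's `roc_auc_score` with one-vs-rest averaging); the rank formulation above is equivalent for the binary case.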
Affiliation(s)
- Zhongwen Li
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- He Xie
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zhouqian Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Daoyuan Li
- Department of Ophthalmology, The Affiliated Hospital of Guizhou Medical University, Guiyang, 550004, China
- Kuan Chen
- Department of Ophthalmology, Cangnan Hospital, Wenzhou Medical University, Wenzhou, 325000, China
- Xihang Zong
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Wei Qiang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Feng Wen
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Zhihong Deng
- Department of Ophthalmology, The Third Xiangya Hospital, Central South University, Changsha, 410013, China
- Limin Chen
- Department of Ophthalmology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350000, China
- Huiping Li
- Department of Ophthalmology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, Yinchuan, 750001, China
- He Dong
- The Third People's Hospital of Dalian & Dalian Municipal Eye Hospital, Dalian, 116033, China
- Pengcheng Wu
- Department of Ophthalmology, The Second Hospital of Lanzhou University, Lanzhou, 730030, China
- Tao Sun
- The Affiliated Eye Hospital of Nanchang University, Jiangxi Clinical Research Center for Ophthalmic Disease, Jiangxi Research Institute of Ophthalmology and Visual Science, Jiangxi Provincial Key Laboratory for Ophthalmology, Nanchang, 330006, China
- Yan Cheng
- Xi'an No.1 Hospital, Shaanxi Institute of Ophthalmology, Shaanxi Key Laboratory of Ophthalmology, The First Affiliated Hospital of Northwestern University, Xi'an, 710002, China
- Yanning Yang
- Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, 430060, China
- Jinsong Xue
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, 210029, China
- Qinxiang Zheng
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Wei Chen
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
2
Hogg RE, Wickens R, O'Connor S, Gidman E, Ward E, Treanor C, Peto T, Burton B, Knox P, Lotery AJ, Sivaprasad S, Donnelly M, Rogers CA, Reeves BC. Home-monitoring for neovascular age-related macular degeneration in older adults within the UK: the MONARCH diagnostic accuracy study. Health Technol Assess 2024; 28:1-136. [PMID: 39023220] [PMCID: PMC11261425] [DOI: 10.3310/cyra9912]
Abstract
Background: Most neovascular age-related macular degeneration treatments involve long-term follow-up of disease activity. Home monitoring would reduce the burden on patients and those they depend on for transport, and release clinic appointments for other patients. The study aimed to evaluate three home-monitoring tests for detecting active neovascular age-related macular degeneration, compared with diagnosis of active disease at hospital follow-up.
Objectives: (1) Estimate the accuracy of three home-monitoring tests to detect active neovascular age-related macular degeneration. (2) Determine the acceptability of home monitoring to patients and carers, and adherence to home monitoring. (3) Explore whether inequalities exist in recruitment, participants' ability to self-test, and their adherence to weekly testing during follow-up. (4) Provide pilot data on the accuracy of home monitoring for detecting conversion to neovascular age-related macular degeneration in fellow eyes of patients with unilateral disease. (5) Describe challenges experienced when implementing home-monitoring tests.
Design: Diagnostic test accuracy cohort study, stratified by time since starting treatment.
Setting: Six United Kingdom Hospital Eye Service macular clinics (Belfast, Liverpool, Moorfields, James Paget, Southampton, Gloucester).
Participants: Patients with at least one study eye being monitored by hospital follow-up.
Reference standard: Detection of active neovascular age-related macular degeneration by an ophthalmologist at hospital follow-up.
Index tests: KeepSight Journal (paper-based near-vision tests presented as word puzzles); MyVisionTrack® (electronic test viewed on a tablet device); MultiBit (electronic test viewed on a tablet device). Participants provided test scores weekly; raw scores between hospital follow-ups were summarised as averages.
Results: Two hundred and ninety-seven patients (mean age 74.9 years) took part. At least one hospital follow-up was available for 317 study eyes, including 9 second eyes that became eligible during follow-up, in 261 participants (1549 complete visits). Median testing frequency was three times/month. Estimated areas under receiver operating characteristic curves were < 0.6 for all index tests, and only the KeepSight Journal summary score was significantly associated with lesion activity (odds ratio = 3.48, 95% confidence interval 1.09 to 11.13, p = 0.036). Older age and worse deprivation for home address were associated with lower participation (χ² = 50.5 and 24.3, respectively, p < 0.001) but not with ability or adherence to self-testing. Areas under receiver operating characteristic curves appeared higher for conversion of fellow eyes to neovascular age-related macular degeneration (0.85 for KeepSight Journal) but were estimated with less precision. Almost half of participants called a study helpline, most often because of inability to test electronically.
Limitations: Pre-specified sample size not met; participants' difficulties using the devices; electronic tests not always available.
Conclusions: No index test provided adequate accuracy to identify lesions diagnosed as active in follow-up clinics. If used to detect conversion, patients would still need to be monitored at hospital. Associations of older age and worse deprivation with study participation highlight the potential for inequities with such interventions. Provision of reliable electronic testing was challenging.
Future work: Future studies evaluating similar technologies should consider: independent monitoring with clear stopping rules based on test performance; deployment of apps on patients' own devices, since providing devices did not reduce inequalities in participation and complicated home testing; alternative methods of summarising multiple scores over the period preceding a follow-up.
Trial registration: This trial is registered as ISRCTN79058224.
Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme (NIHR award ref: 15/97/02) and is published in full in Health Technology Assessment; Vol. 28, No. 32. See the NIHR Funding and Awards website for further award information.
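The association statistic reported for the KeepSight Journal score above (odds ratio = 3.48, 95% CI 1.09 to 11.13) is the kind of summary obtainable from a 2×2 table. The sketch below is illustrative only: the counts are invented, not MONARCH data, and the interval is a simple Wald confidence interval on the log odds ratio.

```python
# Illustrative sketch (hypothetical counts, not MONARCH data): odds ratio
# with a 95% Wald confidence interval from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = active/inactive lesions among high scorers;
    c,d = active/inactive lesions among low scorers."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 20, 15, 35)
# OR = (30*35)/(20*15) = 3.5
```

In the study itself the odds ratio came from a model of lesion activity on the summary score, so the real computation involves (logistic) regression rather than a raw 2×2 table; the Wald interval above is the textbook special case.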
Affiliation(s)
- Ruth E Hogg
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Robin Wickens
- Bristol Trials Centre, University of Bristol, Bristol, UK
- Sean O'Connor
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Eleanor Gidman
- Bristol Trials Centre, University of Bristol, Bristol, UK
- Elizabeth Ward
- Bristol Trials Centre, University of Bristol, Bristol, UK
- Charlene Treanor
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Tunde Peto
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Ben Burton
- James Paget University Hospitals NHS Trust, Great Yarmouth, UK
- Paul Knox
- University of Liverpool, Liverpool, UK
- Andrew J Lotery
- Department of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Michael Donnelly
- Centre for Public Health, Queen's University Belfast, Belfast, UK
- Chris A Rogers
- Bristol Trials Centre, University of Bristol, Bristol, UK
3
Papaioannou C. Advancements in the treatment of age-related macular degeneration: a comprehensive review. Postgrad Med J 2024; 100:445-450. [PMID: 38330506] [DOI: 10.1093/postmj/qgae016]
Abstract
Age-related macular degeneration (AMD) stands as a leading cause of irreversible blindness, particularly affecting central vision and impeding daily tasks. This paper provides a thorough exploration of AMD, distinguishing between its two main subtypes, wet and dry AMD, while shedding light on prevalence and risk factors, including age, genetics, and smoking. The focus then shifts to the current and future treatment landscape for both forms. For dry AMD, interventions such as antioxidant supplementation and ongoing clinical trials offer hope. Notable among these is pegcetacoplan, the only Food and Drug Administration (FDA)-approved medication, which has displayed promising results in reducing geographic atrophy lesions. For wet AMD, anti-vascular endothelial growth factor (anti-VEGF) therapies such as ranibizumab (Lucentis®) have been instrumental, and newer drugs such as faricimab and OPT-302 show comparable efficacy with extended dosing intervals. Additionally, gene therapies such as RGX-314 present a potential paradigm shift, reducing or eliminating the need for frequent injections, while biosimilars offer cost-effective alternatives. The paper also delves into the integration of technology and artificial intelligence in AMD management, highlighting the role of smartphone apps for patient monitoring and of artificial intelligence algorithms for diagnosis and surveillance. Furthermore, patient perspectives on artificial intelligence demonstrate a positive correlation between understanding and trust. The narrative concludes with a glimpse into ground-breaking technologies, including retinal implants and bionic chips, offering hope for vision restoration. Overall, this paper underscores the multifaceted approach to addressing AMD, combining traditional and innovative strategies and paving the way for a more promising future in AMD treatment.
Affiliation(s)
- Christos Papaioannou
- Department of Surgery, Chelsea and Westminster Hospital NHS Foundation Trust, TW7 6AF, London, United Kingdom
4
Borrelli E, Serafino S, Ricardi F, Coletto A, Neri G, Olivieri C, Ulla L, Foti C, Marolo P, Toro MD, Bandello F, Reibaldi M. Deep learning in neovascular age-related macular degeneration. Medicina (Kaunas) 2024; 60:990. [PMID: 38929607] [PMCID: PMC11205843] [DOI: 10.3390/medicina60060990]
Abstract
Background and Objectives: Age-related macular degeneration (AMD) is a complex and multifactorial condition that can lead to permanent vision loss once it progresses to the neovascular exudative stage. This review aims to summarize the use of deep learning in neovascular AMD. Materials and Methods: PubMed search. Results: Deep learning has demonstrated effectiveness in analyzing structural OCT images in patients with neovascular AMD. This review outlines the role of deep learning in identifying and measuring biomarkers linked to an elevated risk of transitioning to the neovascular form of AMD. Additionally, deep learning techniques can quantify critical OCT features associated with neovascular AMD that have prognostic implications for these patients. Incorporating deep learning into the assessment of eyes with neovascular AMD holds promise for enhancing clinical management strategies for affected individuals. Conclusions: Several studies have demonstrated the effectiveness of deep learning in assessing patients with neovascular AMD, and it holds a promising role in their evaluation and management.
Affiliation(s)
- Enrico Borrelli
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Sonia Serafino
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Federico Ricardi
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Andrea Coletto
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Giovanni Neri
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Chiara Olivieri
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Lorena Ulla
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Claudio Foti
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Paola Marolo
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
- Mario Damiano Toro
- Eye Clinic, Public Health Department, University of Naples Federico II, 80138 Naples, Italy
- Francesco Bandello
- Department of Ophthalmology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- IRCCS San Raffaele Scientific Institute, 20132 Milan, Italy
- Michele Reibaldi
- Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy
- Department of Ophthalmology, “City of Health and Science” Hospital, 10126 Turin, Italy
5
Yu C, Xu J, Heidari G, Jiang H, Shi Y, Wu A, Makvandi P, Neisiany RE, Zare EN, Shao M, Hu L. Injectable hydrogels based on biopolymers for the treatment of ocular diseases. Int J Biol Macromol 2024; 269:132086. [PMID: 38705321] [DOI: 10.1016/j.ijbiomac.2024.132086]
Abstract
Injectable hydrogels based on biopolymers, fabricated using diverse chemical and physical methodologies, exhibit exceptional physical, chemical, and biological properties, with multifaceted applications spanning wound healing, tissue regeneration, and other scientific realms. This review critically evaluates their largely uncharted potential in ophthalmology, elucidating their applications across an array of ocular diseases. These conditions include glaucoma, cataracts, corneal disorders (spanning age-related degeneration, trauma, infections, and underlying chronic illnesses), retina-associated ailments (such as diabetic retinopathy, retinitis pigmentosa, and age-related macular degeneration (AMD)), eyelid abnormalities, and uveal melanoma (UM). This study provides a thorough analysis of the applications of injectable biopolymer-based hydrogels across these ocular disorders. Such hydrogels can be customized to have specific physical, chemical, and biological properties that make them suitable as drug delivery vehicles, tissue scaffolds, and sealants in the eye. For example, they can be engineered with the optimum viscosity for intravitreal injection and with sustained drug release to treat retinal diseases, while their porous structure and biocompatibility promote the cellular infiltration needed to regenerate diseased corneal tissue. By accentuating their indispensable role in ocular disease treatment, this review strives to present innovative and targeted approaches in this domain, thereby advancing ocular therapeutics.
Affiliation(s)
- Caiyu Yu
- Department of Eye, Ear, Nose and Throat, The Dingli Clinical College of Wenzhou Medical University, The Second Affiliated Hospital of Shanghai University, Wenzhou Central Hospital, Wenzhou 325000, China; School of Optometry and Ophthalmology and Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang Province, China
- Jiahao Xu
- School of Optometry and Ophthalmology and Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang Province, China
- Golnaz Heidari
- School of Natural Sciences, Massey University, Private Bag 11 222, Palmerston North 4410, New Zealand
- Huijun Jiang
- School of Optometry and Ophthalmology and Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang Province, China
- Yifeng Shi
- Department of Orthopaedics, Key Laboratory of Orthopaedics of Zhejiang Province, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang 325000, China
- Aimin Wu
- Department of Orthopaedics, Key Laboratory of Orthopaedics of Zhejiang Province, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang 325000, China
- Pooyan Makvandi
- The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang 324000, China; Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh 174103, India; Department of Biomaterials, Saveetha Dental College and Hospitals, SIMATS, Saveetha University, Chennai 600077, India
- Rasoul Esmaeely Neisiany
- Biotechnology Centre, Silesian University of Technology, Krzywoustego 8, 44-100 Gliwice, Poland; Department of Polymer Engineering, Hakim Sabzevari University, Sabzevar 9617976487, Iran
- Ehsan Nazarzadeh Zare
- School of Chemistry, Damghan University, Damghan 36716-45667, Iran; Centre of Research Impact and Outreach, Chitkara University, Rajpura 140417, Punjab, India
- Minmin Shao
- Department of Eye, Ear, Nose and Throat, The Dingli Clinical College of Wenzhou Medical University, The Second Affiliated Hospital of Shanghai University, Wenzhou Central Hospital, Wenzhou 325000, China
- Liang Hu
- Department of Eye, Ear, Nose and Throat, The Dingli Clinical College of Wenzhou Medical University, The Second Affiliated Hospital of Shanghai University, Wenzhou Central Hospital, Wenzhou 325000, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China; State Key Laboratory of Ophthalmology, Optometry and Visual Science, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
6
Goh KL, Abbott CJ, Campbell TG, Cohn AC, Ong DN, Wickremasinghe SS, Hodgson LAB, Guymer RH, Wu Z. Clinical performance of predicting late age-related macular degeneration development using multimodal imaging. Clin Exp Ophthalmol 2024. [PMID: 38812454] [DOI: 10.1111/ceo.14405]
Abstract
BACKGROUND To examine whether the clinical performance of predicting late age-related macular degeneration (AMD) development is improved by using multimodal imaging (MMI) rather than colour fundus photography (CFP) alone, and how this compares with a basic prediction model using well-established AMD risk factors. METHODS Individuals with AMD underwent MMI, including optical coherence tomography (OCT), fundus autofluorescence, near-infrared reflectance and CFP at baseline, and then at 6-monthly intervals for 3 years, to determine MMI-defined late AMD development. Four retinal specialists independently assessed the likelihood that each eye at baseline would progress to MMI-defined late AMD over 3 years, first with CFP and then with MMI. Predictive performance with CFP and MMI was compared, and both were compared with a basic prediction model using age, presence of pigmentary abnormalities, and OCT-based drusen volume. RESULTS The predictive performance of the clinicians using CFP [AUC = 0.75; 95% confidence interval (CI) = 0.68-0.82] improved when using MMI (AUC = 0.79; 95% CI = 0.72-0.85; p = 0.034). However, the basic prediction model outperformed clinicians using either CFP or MMI (AUC = 0.85; 95% CI = 0.78-0.91; p ≤ 0.002). CONCLUSIONS Clinical performance for predicting late AMD development was improved by using MMI rather than CFP alone. However, a basic prediction model using well-established AMD risk factors outperformed retinal specialists, suggesting that such a model could further improve personalised counselling and monitoring of individuals with the early stages of AMD in clinical practice.
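A "basic prediction model" of the kind described above combines age, pigmentary abnormalities, and OCT-based drusen volume into a single risk probability via logistic regression. The sketch below is hypothetical: the coefficients are invented for illustration and are not fitted to the study's data; only the choice of predictors comes from the abstract.

```python
# Hypothetical sketch of a basic late-AMD risk model: logistic combination
# of age, pigmentary abnormalities, and OCT-based drusen volume.
# Coefficients are invented for illustration, not fitted to study data.
import math

def late_amd_risk(age, pigmentary, drusen_volume_mm3,
                  b0=-8.0, b_age=0.08, b_pig=1.2, b_dru=4.0):
    """Return a probability in (0, 1) from a logistic model."""
    logit = (b0 + b_age * age + b_pig * (1 if pigmentary else 0)
             + b_dru * drusen_volume_mm3)
    return 1.0 / (1.0 + math.exp(-logit))

# Lower-risk vs. higher-risk illustrative profiles.
low = late_amd_risk(age=65, pigmentary=False, drusen_volume_mm3=0.05)
high = late_amd_risk(age=82, pigmentary=True, drusen_volume_mm3=0.4)
```

In a real application the coefficients would be estimated from longitudinal cohort data, and the model's AUC would then be compared against clinician predictions, as the study does.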
Affiliation(s)
- Kai Lyn Goh
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Carla J Abbott
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Thomas G Campbell
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Amy C Cohn
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Dai Ni Ong
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Sanjeewa S Wickremasinghe
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Lauren A B Hodgson
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Robyn H Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
7
Miranda M, Santos-Oliveira J, Mendonça AM, Sousa V, Melo T, Carneiro Â. Human versus artificial intelligence: validation of a deep learning model for retinal layer and fluid segmentation in optical coherence tomography images from patients with age-related macular degeneration. Diagnostics (Basel) 2024; 14:975. [PMID: 38786273] [PMCID: PMC11119996] [DOI: 10.3390/diagnostics14100975]
Abstract
Artificial intelligence (AI) models have received considerable attention in recent years for their ability to identify optical coherence tomography (OCT) biomarkers with clinical diagnostic potential and to predict disease progression. This study aims to externally validate a deep learning (DL) algorithm by comparing its segmentation of retinal layers and fluid with a gold-standard method: manual adjustment of the automatic segmentation of the Heidelberg Spectralis HRA + OCT software (version 6.16.8.0). A total of sixty OCT images of healthy subjects and patients with intermediate and exudative age-related macular degeneration (AMD) were included. A quantitative analysis of retinal thickness and fluid area was performed, and the discrepancy between the two methods was investigated. The results showed a moderate-to-strong correlation between the metrics extracted by both software types in all groups, and an overall near-perfect area overlap was observed, except in the inner segment ellipsoid (ISE) layer. The DL system detected a significant difference in outer retinal thickness across disease stages and accurately identified fluid in exudative cases. In more diseased eyes, there was significantly more disagreement between the methods. This DL system appears to be a reliable method for assessing important OCT biomarkers in AMD. However, further accuracy testing should be conducted to confirm its validity in real-world settings, to ultimately aid ophthalmologists in OCT imaging management and guide timely treatment approaches.
Affiliation(s)
- Mariana Miranda
- Department of Surgery and Physiology, Faculty of Medicine of the University of Porto, 4200 Porto, Portugal
- Joana Santos-Oliveira
- Department of Ophthalmology, Centro Hospitalar Universitário of São João, 4200 Porto, Portugal
- Ana Maria Mendonça
- Electrical and Computer Engineering Department, Faculty of Engineering of the University of Porto, 4200 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200 Porto, Portugal
- Vânia Sousa
- Department of Ophthalmology, Centro Hospitalar Universitário of São João, 4200 Porto, Portugal
- Tânia Melo
- Electrical and Computer Engineering Department, Faculty of Engineering of the University of Porto, 4200 Porto, Portugal
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200 Porto, Portugal
- Ângela Carneiro
- Department of Surgery and Physiology, Faculty of Medicine of the University of Porto, 4200 Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar Universitário of São João, 4200 Porto, Portugal
8
Mares V, Nehemy MB, Bogunovic H, Frank S, Reiter GS, Schmidt-Erfurth U. AI-based support for optical coherence tomography in age-related macular degeneration. Int J Retina Vitreous 2024; 10:31. [PMID: 38589936] [PMCID: PMC11000391] [DOI: 10.1186/s40942-024-00549-1]
Abstract
Artificial intelligence (AI) has emerged as a transformative technology across various fields, and its applications in the medical domain, particularly in ophthalmology, have gained significant attention. The vast amount of high-resolution image data, such as optical coherence tomography (OCT) images, has been a driving force behind AI growth in this field. Age-related macular degeneration (AMD) is one of the leading causes of blindness in the world, affecting approximately 196 million people worldwide in 2020. Multimodal imaging has long been the gold standard for diagnosing patients with AMD; currently, however, treatment and follow-up in routine disease management are driven mainly by OCT imaging. By virtue of their precision, reproducibility, and speed, AI-based algorithms have the potential to reliably quantify biomarkers, predict disease progression, and assist treatment decisions in clinical routine as well as in academic studies. This review paper aims to provide a summary of the current state of AI in AMD, focusing on its applications, challenges, and prospects.
Affiliation(s)
- Virginia Mares
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Marcio B Nehemy
- Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Hrvoje Bogunovic
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Sophie Frank
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Gregor S Reiter
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Ursula Schmidt-Erfurth
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria.
9
Veritti D, Rubinato L, Sarao V, De Nardin A, Foresti GL, Lanzetta P. Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024; 262:975-982. [PMID: 37747539 PMCID: PMC10907411 DOI: 10.1007/s00417-023-06245-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/19/2023] [Revised: 07/24/2023] [Accepted: 09/15/2023] [Indexed: 09/26/2023]
Abstract
PURPOSE This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other medical-related fields. METHODS We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of ethical, moral, and legal implications of AI in healthcare. RESULTS Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare. CONCLUSION Solutions to address these issues include collecting high-quality data of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.
Affiliation(s)
- Daniele Veritti
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy.
- Leopoldo Rubinato
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Valentina Sarao
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
- Axel De Nardin
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Gian Luca Foresti
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
10
Wan C, Mao Y, Xi W, Zhang Z, Wang J, Yang W. DBPF-net: dual-branch structural feature extraction reinforcement network for ocular surface disease image classification. Front Med (Lausanne) 2024; 10:1309097. [PMID: 38239621 PMCID: PMC10794599 DOI: 10.3389/fmed.2023.1309097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/07/2023] [Accepted: 12/11/2023] [Indexed: 01/22/2024]
Abstract
Pterygium and subconjunctival hemorrhage are two common ocular surface diseases that can cause distress and anxiety in patients. In this study, 2855 ocular surface images were collected in four categories: normal ocular surface, subconjunctival hemorrhage, pterygium to be observed, and pterygium requiring surgery. We propose a diagnostic classification model for ocular surface diseases, a dual-branch network reinforced by a PFM block (DBPF-Net), which adopts the Conformer model, with its two-branch architecture, as the backbone of a four-way classification model for ocular surface diseases. In addition, we propose a block composed of a patch merging layer and an FReLU layer (PFM block) for extracting spatial structure features to further strengthen the feature extraction capability of the model. In practice, only the ocular surface images need to be input into the model to discriminate automatically among the disease categories. We also trained the VGG16, ResNet50, EfficientNetB7, and Conformer models, and evaluated and analyzed the results of all models on the test set. The main evaluation indicators were sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), kappa coefficient, and accuracy. The accuracy and kappa coefficient of the proposed diagnostic model averaged 0.9789 and 0.9681, respectively, across several experiments. The sensitivity, specificity, F1-score, and AUC were, respectively, 0.9723, 0.9836, 0.9688, and 0.9869 for diagnosing pterygium to be observed, and 0.9210, 0.9905, 0.9292, and 0.9776 for diagnosing pterygium requiring surgery. The proposed method has high clinical reference value for recognizing these four types of ocular surface images.
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yulong Mao
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Wenqun Xi
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Zhe Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
11
Nguyen TD, Le DT, Bum J, Kim S, Song SJ, Choo H. Retinal Disease Diagnosis Using Deep Learning on Ultra-Wide-Field Fundus Images. Diagnostics (Basel) 2024; 14:105. [PMID: 38201414 PMCID: PMC10804390 DOI: 10.3390/diagnostics14010105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/16/2023] [Revised: 12/20/2023] [Accepted: 12/26/2023] [Indexed: 01/12/2024]
Abstract
Ultra-wide-field fundus imaging (UFI) provides comprehensive visualization of crucial eye components, including the optic disk, fovea, and macula. This in-depth view helps doctors accurately diagnose diseases and recommend suitable treatments. This study investigated the application of various deep learning models for detecting eye diseases using UFI. We developed an automated system that processes and enhances a dataset of 4697 images. Our approach involves brightness and contrast enhancement, followed by feature extraction, data augmentation, and image classification, integrated with convolutional neural networks. These networks utilize layer-wise feature extraction and transfer learning from pre-trained models to accurately represent and analyze medical images. Among the five evaluated models (ResNet152, Vision Transformer, InceptionResNetV2, RegNet, and ConvNeXt), ResNet152 is the most effective, achieving a test area under the curve (AUC) score of 96.47% (95% confidence interval (CI): 0.931-0.974). Additionally, the paper presents visualizations of the model's predictions, including confidence scores and heatmaps that highlight the model's focal points, particularly where lesions due to damage are evident. By streamlining the diagnosis process and providing intricate prediction details without human intervention, our system serves as a pivotal tool for ophthalmologists. This research underscores the compatibility and potential of utilizing ultra-wide-field images in conjunction with deep learning.
Affiliation(s)
- Toan Duc Nguyen
- Department of AI Systems Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Duc-Tai Le
- College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Junghyun Bum
- Sungkyun AI Research Institute, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Seongho Kim
- Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Su Jeong Song
- Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Biomedical Institute for Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunseung Choo
- Department of AI Systems Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
12
Zheng B, Zhang M, Zhu S, Wu M, Chen L, Zhang S, Yang W. Research on an artificial intelligence-based myopic maculopathy grading method using EfficientNet. Indian J Ophthalmol 2024; 72:S53-S59. [PMID: 38131543 PMCID: PMC10833160 DOI: 10.4103/ijo.ijo_48_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/06/2023] [Revised: 08/04/2023] [Accepted: 08/15/2023] [Indexed: 12/23/2023]
Abstract
PURPOSE We aimed to develop an artificial intelligence-based myopic maculopathy grading method using EfficientNet to overcome the delayed grading and diagnosis of different degrees of myopic maculopathy. METHODS The cooperating hospital provided 4642 healthy and myopic maculopathy color fundus photographs, comprising the four degrees of myopic maculopathy and healthy fundi. The myopic maculopathy grading models were trained using the EfficientNet-B0 to EfficientNet-B7 models. The diagnostic results were compared with those of the VGG16 and ResNet50 classification models. The leading evaluation indicators were sensitivity, specificity, F1 score, area under the receiver operating characteristic (ROC) curve (AUC) with 95% confidence interval, kappa value, and accuracy. The ROC curves of the ten grading models were also compared. RESULTS We used 1199 color fundus photographs to evaluate the myopic maculopathy grading models. The size of the EfficientNet-B0 myopic maculopathy grading model was 15.6 MB, and it had the highest kappa value (88.32%) and accuracy (83.58%). The model's sensitivities in diagnosing tessellated fundus (TF), diffuse chorioretinal atrophy (DCA), patchy chorioretinal atrophy (PCA), and macular atrophy (MA) were 96.86%, 75.98%, 64.67%, and 88.75%, respectively. The specificity was above 93%, and the AUCs were 0.992, 0.960, 0.964, and 0.989, respectively. CONCLUSION The EfficientNet models were used to design grading diagnostic models for myopic maculopathy. Based on the collected fundus images, the models could diagnose a healthy fundus and four types of myopic maculopathy. The models might help ophthalmologists make preliminary diagnoses of different degrees of myopic maculopathy.
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Maotao Zhang
- School of Information Engineering, Huzhou University, Huzhou, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Lu Chen
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
13
Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, Nishimura H, Khemlani RJ, Sato S, Hanyuda A, Sato Y. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep 2023; 13:22046. [PMID: 38086904 PMCID: PMC10716159 DOI: 10.1038/s41598-023-49563-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/03/2023] [Accepted: 12/09/2023] [Indexed: 12/18/2023]
Abstract
In ophthalmology, the availability of many fundus photographs and optical coherence tomography images has spurred consideration of using artificial intelligence (AI) for diagnosing retinal and optic nerve disorders. However, AI application for diagnosing anterior segment eye conditions remains infeasible due to limited standardized images and analysis models. We addressed this limitation by augmenting the quantity of standardized optical images using a video-recordable slit-lamp device. We then investigated whether our proposed machine learning (ML) AI algorithm could accurately diagnose cataracts from videos recorded with this device. We collected 206,574 cataract frames from 1812 cataract eye videos. Ophthalmologists graded the nuclear cataracts (NUCs) using the cataract grading scale of the World Health Organization. These gradings were used to train and validate an ML algorithm. A validation dataset was used to compare the NUC diagnosis and grading of the AI with those of ophthalmologists. The results of the individual cataract gradings were: NUC 0: area under the curve (AUC) = 0.967; NUC 1: AUC = 0.928; NUC 2: AUC = 0.923; and NUC 3: AUC = 0.949. Our ML-based cataract diagnostic model achieved performance comparable to that of a conventional device, making it a promising and accurate automated diagnostic AI tool.
Affiliation(s)
- Eisuke Shimizu
- OUI Inc., Tokyo, Japan.
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan.
- Yokohama Keiai Eye Clinic, Yokohama, Japan.
- Makoto Tanji
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Shintato Nakayama
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Toshiki Ishikawa
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Hiroki Nishimura
- OUI Inc., Tokyo, Japan
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
- Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yokohama Keiai Eye Clinic, Yokohama, Japan
- Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University, Tokyo, Japan
14
Cheng AMS, Chalam KV, Brar VS, Yang DTY, Bhatt J, Banoub RG, Gupta SK. Recent Advances in Imaging Macular Atrophy for Late-Stage Age-Related Macular Degeneration. Diagnostics (Basel) 2023; 13:3635. [PMID: 38132220 PMCID: PMC10742961 DOI: 10.3390/diagnostics13243635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/30/2023] [Revised: 12/02/2023] [Accepted: 12/06/2023] [Indexed: 12/23/2023]
Abstract
Age-related macular degeneration (AMD) is a leading cause of blindness worldwide. In late-stage AMD, geographic atrophy (GA) in dry AMD or choroidal neovascularization (CNV) in neovascular AMD eventually results in macular atrophy (MA), leading to significant visual loss. Despite the development of innovative therapies, there are currently no established effective treatments for MA. As a result, early detection of MA is critical for identifying central macular involvement over time. Accurate and early diagnosis is achieved through a combination of clinical examination and imaging techniques. Our review of the literature describes advances in retinal imaging to identify biomarkers of progression and risk factors for late AMD. Imaging methods such as fundus photography, dye-based angiography, fundus autofluorescence (FAF), near-infrared reflectance (NIR), optical coherence tomography (OCT), and optical coherence tomography angiography (OCTA) can be used to detect and monitor the progression of retinal atrophy. These diverse and evolving imaging modalities optimize detection of pathologic anatomy and measurement of visual function; they may also contribute to the understanding of underlying mechanistic pathways, particularly the MA changes underlying late AMD.
Affiliation(s)
- Anny M. S. Cheng
- Department of Ophthalmology, Broward Health, Fort Lauderdale, FL 33064, USA
- Specialty Retina Center, Coral Springs, FL 33067, USA
- Department of Ophthalmology, Herbert Wertheim College of Medicine, Florida International University, Miami, FL 33199, USA
- Kakarla V. Chalam
- Department of Ophthalmology, Loma Linda University, Loma Linda, CA 92350, USA
- Vikram S. Brar
- Department of Ophthalmology, Virginia Commonwealth University, Richmond, VA 23298, USA
- David T. Y. Yang
- College of Biological Science, University of California, Davis, Sacramento, CA 95616, USA
- Jineel Bhatt
- Specialty Retina Center, Coral Springs, FL 33067, USA
- Raphael G. Banoub
- Department of Ophthalmology, Broward Health, Fort Lauderdale, FL 33064, USA
- Specialty Retina Center, Coral Springs, FL 33067, USA
- Shailesh K. Gupta
- Department of Ophthalmology, Broward Health, Fort Lauderdale, FL 33064, USA
- Specialty Retina Center, Coral Springs, FL 33067, USA
15
Moterani VC, Abbade JF, Borges VTM, Fonseca CGF, Desiderio N, Moterani Junior NJW, Gonçalves Moterani LBB. [Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension]. Rev Panam Salud Publica 2023; 47:e149. [PMID: 38361499 PMCID: PMC10868409 DOI: 10.26633/rpsp.2023.149] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/24/2020] [Accepted: 07/23/2020] [Indexed: 01/10/2024]
Abstract
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted upon by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders), and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction, and analysis of error cases. SPIRIT-AI will help promote transparency and completeness in clinical trial protocols for AI interventions. Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret, and critically appraise the design and risk of bias of a planned clinical trial.
Affiliation(s)
- Vinicius Cesar Moterani
- Universidade Estadual Paulista “Júlio de Mesquita Filho,” Botucatu, Brazil
- Joelcio Francisco Abbade
- Universidade Estadual Paulista “Júlio de Mesquita Filho,” Botucatu, Brazil
- Vera Therezinha Medeiros Borges
- Universidade Estadual Paulista “Júlio de Mesquita Filho,” Botucatu, Brazil
- Cecilia Guimarães Ferreira Fonseca
- Universidade Estadual Paulista “Júlio de Mesquita Filho,” Botucatu, Brazil
- Nathalia Desiderio
- Marilia Medical School, Marilia, Brazil
16
Tan TF, Thirunavukarasu AJ, Campbell JP, Keane PA, Pasquale LR, Abramoff MD, Kalpathy-Cramer J, Lum F, Kim JE, Baxter SL, Ting DSW. Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology: Clinical Applications and Challenges. Ophthalmol Sci 2023; 3:100394. [PMID: 37885755 PMCID: PMC10598525 DOI: 10.1016/j.xops.2023.100394] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Received: 04/26/2023] [Revised: 08/07/2023] [Accepted: 08/30/2023] [Indexed: 10/28/2023]
Abstract
The rapid progress of large language models (LLMs) driving generative artificial intelligence applications heralds a range of opportunities in health care. We conducted a review up to April 2023 on Google Scholar, Embase, MEDLINE, and Scopus using the following terms: "large language models," "generative artificial intelligence," "ophthalmology," "ChatGPT," and "eye," selecting articles based on relevance to this review. From a clinical viewpoint specific to ophthalmologists, we explore, from the perspectives of different stakeholders (including patients, physicians, and policymakers), potential LLM applications in education, research, and clinical domains specific to ophthalmology. We also highlight the foreseeable challenges of LLM implementation in clinical practice, including concerns about accuracy, interpretability, perpetuation of bias, and data security. As LLMs continue to mature, it is essential for stakeholders to jointly establish standards for best practices to safeguard patient safety. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Corpus Christi College, University of Cambridge, Cambridge, United Kingdom
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Pearse A. Keane
- Moorfields Eye Hospital, University College London, London, United Kingdom
- Louis R. Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York City, New York
- Michael D. Abramoff
- American Medical Association's Digital Medicine Payment Advisory Group (DMPAG) Artificial Intelligence Workgroup, American Medical Association, Chicago, Illinois
- Department of Ophthalmology, University of Iowa, Iowa City, Iowa
- Digital Diagnostics, Inc, Coralville, Iowa
- Flora Lum
- American Academy of Ophthalmology, San Francisco, California
- Judy E. Kim
- Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, La Jolla, California
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Byers Eye Institute, Stanford University, Stanford, California
17
Xu X, Jia Q, Yuan H, Qiu H, Dong Y, Xie W, Yao Z, Zhang J, Nie Z, Li X, Shi Y, Zou JY, Huang M, Zhuang J. A clinically applicable AI system for diagnosis of congenital heart diseases based on computed tomography images. Med Image Anal 2023; 90:102953. [PMID: 37734140 DOI: 10.1016/j.media.2023.102953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/19/2022] [Revised: 08/22/2023] [Accepted: 09/01/2023] [Indexed: 09/23/2023]
Abstract
Congenital heart disease (CHD) is the most common type of birth defect. Without timely detection and treatment, approximately one-third of children with CHD would die in infancy. However, due to the complicated heart structures involved, early diagnosis of CHD and its types is quite challenging, even for experienced radiologists. Here, we present an artificial intelligence (AI) system that achieves performance comparable to that of human experts in the critical task of classifying 17 categories of CHD types. We collected the first large CT dataset of its kind from three different CT machines, covering more than 3750 CHD patients over 14 years. Experimental results demonstrate that the system achieves diagnostic accuracy (86.03%) comparable with that of junior cardiovascular radiologists (86.27%) on most types of CHD at a World Health Organization-appointed research and cooperation center in China, and obtains a higher sensitivity (82.91%) than junior cardiovascular radiologists (76.18%). The combination of our AI system and senior radiologists achieves an accuracy (97.20%) comparable to that of the current clinical routine, in which junior and senior radiologists work together (97.16%). Our AI system can further provide 3D visualization of hearts to senior radiologists for interpretation and flexible review, to surgeons for precise intuition of heart structures, and to clinicians for more precise outcome prediction. We demonstrate the potential of our model to be integrated into current clinical practice to improve the diagnosis of CHD globally, especially in regions where experienced radiologists are scarce.
Affiliation(s)
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Yuhao Dong
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zhiqiang Nie
- Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region
- Yiyu Shi
- Computer Science and Engineering, University of Notre Dame, IN, 46556, USA
- James Y Zou
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA.
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China.
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
18
Dow ER, Khan NC, Chen KM, Mishra K, Perera C, Narala R, Basina M, Dang J, Kim M, Levine M, Phadke A, Tan M, Weng K, Do DV, Moshfeghi DM, Mahajan VB, Mruthyunjaya P, Leng T, Myung D. AI-Human Hybrid Workflow Enhances Teleophthalmology for the Detection of Diabetic Retinopathy. OPHTHALMOLOGY SCIENCE 2023; 3:100330. [PMID: 37449051 PMCID: PMC10336195 DOI: 10.1016/j.xops.2023.100330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 05/04/2023] [Accepted: 05/08/2023] [Indexed: 07/18/2023]
Abstract
Objective Detection of diabetic retinopathy (DR) outside of specialized eye care settings is an important means of access to vision-preserving health maintenance. Remote interpretation of fundus photographs acquired in a primary care or other nonophthalmic setting in a store-and-forward manner is a predominant paradigm of teleophthalmology screening programs. Artificial intelligence (AI)-based image interpretation offers an alternative means of DR detection. IDx-DR (Digital Diagnostics Inc) is a Food and Drug Administration-authorized autonomous testing device for DR. We evaluated the diagnostic performance of IDx-DR compared with human-based teleophthalmology over 2 and a half years. Additionally, we evaluated an AI-human hybrid workflow that combines AI-system evaluation with human expert-based assessment for referable cases. Design Prospective cohort study and retrospective analysis. Participants Diabetic patients ≥ 18 years old without a prior DR diagnosis or DR examination in the past year presenting for routine DR screening in a primary care clinic. Methods Macula-centered and optic nerve-centered fundus photographs were evaluated by an AI algorithm followed by consensus-based overreading by retina specialists at the Stanford Ophthalmic Reading Center. Detection of more-than-mild diabetic retinopathy (MTMDR) was compared with in-person examination by a retina specialist. Main Outcome Measures Sensitivity, specificity, accuracy, positive predictive value, and gradability achieved by the AI algorithm and retina specialists. Results The AI algorithm had higher sensitivity (95.5% sensitivity; 95% confidence interval [CI], 86.7%-100%) but lower specificity (60.3% specificity; 95% CI, 47.7%-72.9%) for detection of MTMDR compared with remote image interpretation by retina specialists (69.5% sensitivity; 95% CI, 50.7%-88.3%; 96.9% specificity; 95% CI, 93.5%-100%). 
Gradability of encounters was also lower for the AI algorithm (62.5%) compared with retina specialists (93.1%). A 2-step AI-human hybrid workflow in which the AI algorithm initially rendered an assessment followed by overread by a retina specialist of MTMDR-positive encounters resulted in a sensitivity of 95.5% (95% CI, 86.7%-100%) and a specificity of 98.2% (95% CI, 94.6%-100%). Similarly, a 2-step overread by retina specialists of AI-ungradable encounters improved gradability from 63.5% to 95.6% of encounters. Conclusions Implementation of an AI-human hybrid teleophthalmology workflow may both decrease reliance on human specialist effort and improve diagnostic accuracy. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
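The two-step hybrid workflow evaluated above reduces to a simple triage rule: AI negatives exit without human review, while AI-positive and AI-ungradable encounters go to a retina specialist overread. A minimal sketch of that rule, with function and label names of our own choosing (not the study's code):

```python
def hybrid_grade(ai_result, specialist_grade):
    """Two-step AI-human hybrid workflow (illustrative sketch).

    ai_result: "negative", "positive", or "ungradable" from the AI step.
    specialist_grade: callable returning the retina specialist's overread;
    it is invoked only when the AI flags the encounter.
    """
    if ai_result == "negative":
        return "no referral"  # AI negatives are not overread
    # AI-positive and AI-ungradable encounters go to human overread
    return "refer" if specialist_grade() == "MTMDR" else "no referral"

# Example: an AI-positive encounter rejected on specialist overread
print(hybrid_grade("positive", lambda: "no MTMDR"))  # no referral
```

This structure is what drives the reported gains: the AI step preserves its high sensitivity, while the specialist overread of flagged encounters restores specificity and gradability.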
Affiliation(s)
- Eliot R. Dow
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Nergis C. Khan
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Karen M. Chen
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Kapil Mishra
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Chandrashan Perera
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Ramsudha Narala
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Marina Basina
- Stanford Healthcare, Stanford University, Palo Alto, California
- Jimmy Dang
- Stanford Healthcare, Stanford University, Palo Alto, California
- Michael Kim
- Stanford Healthcare, Stanford University, Palo Alto, California
- Marcie Levine
- Stanford Healthcare, Stanford University, Palo Alto, California
- Anuradha Phadke
- Stanford Healthcare, Stanford University, Palo Alto, California
- Marilyn Tan
- Stanford Healthcare, Stanford University, Palo Alto, California
- Kirsti Weng
- Stanford Healthcare, Stanford University, Palo Alto, California
- Diana V. Do
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Darius M. Moshfeghi
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- David Myung
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California

19
Tejero JG, Neila PM, Kurmann T, Gallardo M, Zinkernagel M, Wolf S, Sznitman R. Predicting OCT biological marker localization from weak annotations. Sci Rep 2023; 13:19667. [PMID: 37952011 PMCID: PMC10640596 DOI: 10.1038/s41598-023-47019-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 11/08/2023] [Indexed: 11/14/2023] Open
Abstract
Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically assigns biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, requiring only B-scan-level presence annotations. We trained a neural network using 22,723 OCT B-scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output to biologically plausible solutions. The method was tested on a set of OCT volumes from 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information during training. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.
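The ETDRS grid that the markers are mapped onto is a set of concentric circles of 1, 3, and 6 mm diameter centered on the fovea, so assigning a retinal location to a ring is a simple radial binning. A generic sketch of that binning (an illustration, not the paper's code; names are ours):

```python
import math

def etdrs_ring(dx_mm, dy_mm):
    """Assign a retinal location, given as an offset from the foveal
    center in millimetres, to an ETDRS ring: central subfield 1 mm,
    inner ring 3 mm, and outer ring 6 mm in diameter."""
    r = math.hypot(dx_mm, dy_mm)  # radial distance from the fovea
    if r <= 0.5:
        return "central"
    if r <= 1.5:
        return "inner"
    if r <= 3.0:
        return "outer"
    return "outside grid"

print(etdrs_ring(0.4, 0.0))  # central
```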
Affiliation(s)
- Javier Gamazo Tejero
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland.
- Pablo Márquez Neila
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Thomas Kurmann
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Mathias Gallardo
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Martin Zinkernagel
- Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland

20
Li D, Ran AR, Cheung CY, Prince JL. Deep learning in optical coherence tomography: Where are the gaps? Clin Exp Ophthalmol 2023; 51:853-863. [PMID: 37245525 PMCID: PMC10825778 DOI: 10.1111/ceo.14258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 04/24/2023] [Accepted: 05/03/2023] [Indexed: 05/30/2023]
Abstract
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides rapid, high-resolution, cross-sectional morphology of the macular area and optic nerve head for the diagnosis and management of different eye diseases. However, interpreting OCT images requires expertise in both OCT imaging and eye diseases, since many factors, such as artefacts and concomitant diseases, can affect the accuracy of quantitative measurements made by post-processing algorithms. Currently, there is growing interest in applying deep learning (DL) methods to analyse OCT images automatically. This review summarises the trends in DL-based OCT image analysis in ophthalmology, discusses the current gaps, and provides potential research directions. DL in OCT analysis shows promising performance in several tasks: (1) segmentation and quantification of layers and features; (2) disease classification; (3) disease progression and prognosis; and (4) referral triage level prediction. Different studies and trends in the development of DL-based OCT image analysis are described, and the following challenges are identified: (1) public OCT data are scarce and scattered; (2) models show performance discrepancies in real-world settings; (3) models lack transparency; (4) societal acceptance and regulatory standards are lacking; and (5) OCT is still not widely available in underprivileged areas. More work is needed to address these challenges and gaps before DL is further applied to OCT image analysis for clinical use.
Affiliation(s)
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA

21
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK

22
Gholami S, Lim JI, Leng T, Ong SSY, Thompson AC, Alam MN. Federated learning for diagnosis of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1259017. [PMID: 37901412 PMCID: PMC10613107 DOI: 10.3389/fmed.2023.1259017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 09/25/2023] [Indexed: 10/31/2023] Open
Abstract
This paper presents a federated learning (FL) approach to training deep learning models that classify age-related macular degeneration (AMD) from optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four distinct domain adaptation techniques to address the domain shift caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model has access to only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our evaluations, consistently delivering high performance across all tests thanks to its additional local model. Furthermore, the study provides, for both encoders, valuable insights into the efficacy of simpler architectures in image classification tasks, particularly in scenarios where data privacy and decentralization are critical. It suggests future exploration of deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
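Federated strategies such as those compared here build on the FedAvg aggregation step, in which a server averages locally trained parameters weighted by each client's sample count. A generic sketch of that step (not the authors' implementation, which is available in the linked repository):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average per-parameter values across clients,
    weighting each client by the number of local training samples.

    client_weights: list of per-client parameter lists (same length each).
    client_sizes: list of local sample counts, one per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients holding one scalar parameter each, with 1 and 3 samples:
# the larger client dominates the average.
print(fed_avg([[0.0], [4.0]], [1, 3]))  # [3.0]
```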
Affiliation(s)
- Sina Gholami
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States
- Jennifer I. Lim
- Department of Ophthalmology and Visual Science, University of Illinois at Chicago, Chicago, IL, United States
- Theodore Leng
- Department of Ophthalmology, School of Medicine, Stanford University, Stanford, CA, United States
- Sally Shin Yee Ong
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Atalie Carina Thompson
- Department of Surgical Ophthalmology, Atrium-Health Wake Forest Baptist, Winston-Salem, NC, United States
- Minhaj Nur Alam
- Department of Electrical Engineering, University of North Carolina at Charlotte, Charlotte, NC, United States

23
Vaughan N. Review of smartphone funduscopy for diabetic retinopathy screening. Surv Ophthalmol 2023:S0039-6257(23)00132-7. [PMID: 37806567 DOI: 10.1016/j.survophthal.2023.10.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 09/23/2023] [Accepted: 10/03/2023] [Indexed: 10/10/2023]
Abstract
I detail advances in funduscopy diagnostic systems that integrate smartphones. Smartphone funduscopy devices comprise lens attachments that connect to smartphones, together with software applications for mobile retinal image capture and diagnosis of diabetic retinopathy. This is particularly beneficial for automating and mobilizing retinopathy screening in remote and rural areas, where patients with diabetes often do not receive the recommended regular screening for diabetic retinopathy. Smartphone retinal image grading systems enable retinopathy to be screened remotely, as teleophthalmology or as a stand-alone point-of-care testing system. Smartphone funduscopy aims to avoid the need for patients to be seen by expert ophthalmologists, which can reduce patient travel, image processing time, appointment backlogs, health service overhead costs, and the workload burden on expert ophthalmologists.
Affiliation(s)
- Neil Vaughan
- Exeter Centre of Excellence for Diabetes (ExCEeD), University of Exeter, Exeter, UK; Faculty of Health and Life Sciences (HLS), University of Exeter, Exeter, UK; Royal Academy of Engineering (RAEng), London, UK; NIHR Exeter Biomedical Research Centre, Exeter, UK.
24
Williamson RC, Selvam A, Sant V, Patel M, Bollepalli SC, Vupparaboina KK, Sahel JA, Chhablani J. Radiomics-Based Prediction of Anti-VEGF Treatment Response in Neovascular Age-Related Macular Degeneration With Pigment Epithelial Detachment. Transl Vis Sci Technol 2023; 12:3. [PMID: 37792693 PMCID: PMC10565708 DOI: 10.1167/tvst.12.10.3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Accepted: 09/01/2023] [Indexed: 10/06/2023] Open
Abstract
Purpose Machine learning models based on radiomic feature extraction from clinical imaging data provide effective and interpretable means for clinical decision making. This pilot study evaluated whether radiomics features in baseline optical coherence tomography (OCT) images of eyes with pigment epithelial detachment (PED) associated with neovascular age-related macular degeneration (nAMD) can predict treatment response to as-needed anti-vascular endothelial growth factor (VEGF) therapy. Methods Thirty-nine eyes of patients with PED undergoing anti-VEGF therapy were included. All eyes underwent a loading dose followed by as-needed therapy. OCT images at baseline, month 3, and month 6 were analyzed. Images were manually separated into non-responding, recurring, and responding eyes based on the presence or absence of subretinal fluid at month 6. PED radiomics features were then extracted from each image, and images were classified as responding or recurring using a machine learning classifier applied to the radiomics features. Results Linear discriminant analysis classification of baseline features as responsive versus recurring achieved an accuracy of 64.0% (95% confidence interval [CI] = 0.63-0.65), an area under the curve of 0.78 (95% CI = 0.72-0.82), a sensitivity of 0.79 (95% CI = 0.63-0.87), and a specificity of 0.58 (95% CI = 0.50-0.67).
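The linear discriminant analysis behind the responsive-versus-recurring classification can be sketched as a plain two-class Fisher discriminant. Everything below (the toy feature values, the label convention, and the function names) is an invented illustration under that assumption, not the study's pipeline:

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Fit a two-class Fisher linear discriminant: returns the projection
    vector w and the decision threshold c at the projected class midpoint."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)  # direction maximizing class separation
    c = w @ (m0 + m1) / 2.0           # threshold halfway between projected means
    return w, c

def fisher_lda_predict(X, w, c):
    """1 = class of X1 (say, 'recurring'), 0 = class of X0 ('responding')."""
    return (X @ w > c).astype(int)

# Toy radiomics-like feature vectors: two well-separated clusters
X0 = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
X1 = X0 + 5.0
w, c = fisher_lda_fit(X0, X1)
print(fisher_lda_predict(np.vstack([X0, X1]), w, c))  # [0 0 0 0 1 1 1 1]
```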
Affiliation(s)
- Ryan Chace Williamson
- Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Amrish Selvam
- School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Manan Patel
- BJ Medical College, Ahmedabad, Gujarat, India
- Jose-Alain Sahel
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA

25
Koseoglu ND, Grzybowski A, Liu TYA. Deep Learning Applications to Classification and Detection of Age-Related Macular Degeneration on Optical Coherence Tomography Imaging: A Review. Ophthalmol Ther 2023; 12:2347-2359. [PMID: 37493854 PMCID: PMC10441995 DOI: 10.1007/s40123-023-00775-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 07/14/2023] [Indexed: 07/27/2023] Open
Abstract
Age-related macular degeneration (AMD) is one of the leading causes of blindness in the elderly, particularly in developed countries. Optical coherence tomography (OCT) is a non-invasive imaging modality widely used for the diagnosis and management of AMD. Deep learning (DL) uses multilayered artificial neural networks (NN) for feature extraction and is the cutting-edge technique for medical image analysis for diagnostic and prognostic purposes. The application of DL models to OCT image analysis has garnered significant interest in recent years. In this review, we summarize studies focusing on DL models used in the classification and detection of AMD. Additionally, we provide a brief introduction to other DL applications in AMD, such as segmentation, prediction/prognostication, and models trained on multimodal imaging.
Affiliation(s)
- Neslihan Dilruba Koseoglu
- Wilmer Eye Institute, Johns Hopkins University, 600 N. Wolfe St., Maumenee 726, Baltimore, MD, 21287, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- T Y Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, 600 N. Wolfe St., Maumenee 726, Baltimore, MD, 21287, USA

26
Zhang T, Wei Q, Li Z, Meng W, Zhang M, Zhang Z. Segmentation of paracentral acute middle maculopathy lesions in spectral-domain optical coherence tomography images through weakly supervised deep convolutional networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107632. [PMID: 37329802 DOI: 10.1016/j.cmpb.2023.107632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 05/23/2023] [Accepted: 05/28/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND AND OBJECTIVES Spectral-domain optical coherence tomography (SD-OCT) is a valuable tool for non-invasive imaging of the retina, allowing the discovery and visualization of localized lesions whose presence is associated with eye diseases. The present study introduces X-Net, a weakly supervised deep-learning framework for automated segmentation of paracentral acute middle maculopathy (PAMM) lesions in retinal SD-OCT images. Despite recent advances in the development of automatic methods for clinical analysis of OCT scans, there remains a scarcity of studies focusing on the automated detection of small retinal focal lesions. Additionally, most existing solutions depend on supervised learning, which can be time-consuming and require extensive image labeling; X-Net offers a solution to these challenges. As far as we can determine, no prior study has addressed the segmentation of PAMM lesions in SD-OCT images. METHODS This study leverages 133 SD-OCT retinal images, each containing instances of paracentral acute middle maculopathy lesions. A team of eye experts annotated the PAMM lesions in these images using bounding boxes. The labeled data were then used to train a U-Net that performs pre-segmentation, producing region labels of pixel-level accuracy. To attain a highly accurate final segmentation, we introduced X-Net, a novel neural network made up of a master and a slave U-Net. During training, it takes the expert-annotated images together with the pixel-level pre-segmentation labels and employs sophisticated strategies to ensure the highest segmentation accuracy. RESULTS The proposed method was rigorously evaluated on clinical retinal images excluded from training and achieved an accuracy of 99%, with a high level of similarity between the automatic segmentation and the expert annotation, as demonstrated by a mean Intersection-over-Union of 0.8. Alternative methods were tested on the same data.
Single-stage neural networks proved insufficient for achieving satisfactory results, confirming that more advanced solutions, such as the proposed method, are necessary. We also found that X-Net using Attention U-net for both the pre-segmentation and X-Net arms for the final segmentation shows comparable performance to the proposed method, suggesting that the proposed approach remains a viable solution even when implemented with variants of the classic U-Net. CONCLUSIONS The proposed method exhibits reasonably high performance, validated through quantitative and qualitative evaluations. Medical eye specialists have also verified its validity and accuracy. Thus, it could be a viable tool in the clinical assessment of the retina. Additionally, the demonstrated approach for annotating the training set has proven to be effective in reducing the expert workload.
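The mean Intersection-over-Union used above to quantify agreement is, per image, |prediction ∩ annotation| / |prediction ∪ annotation| computed over the two binary masks. A minimal sketch on nested-list masks (names ours):

```python
def iou(pred, truth):
    """Intersection-over-Union of two equally sized binary masks,
    given as nested lists of 0/1 values."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += p & t
            union += p | t
    return inter / union if union else 1.0  # both masks empty: agreement

# Prediction covers two pixels, annotation one of them: IoU = 1/2
print(iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # 0.5
```

The mean IoU reported in the study would be this quantity averaged over the evaluation images.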
Affiliation(s)
- Tianqiao Zhang
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Qiaoqian Wei
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Zhenzhen Li
- School of Information Engineering, Nanchang Institute of Technology, Nanchang, China
- Wenjing Meng
- Department of Library Services, Guilin University of Electronic Technology, Guilin, China
- Mengjiao Zhang
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Zhengwei Zhang
- Department of Ophthalmology, Jiangnan University Medical Center, Wuxi, China; Department of Ophthalmology, Wuxi No.2 People's Hospital, Affiliated Wuxi Clinical College of Nantong University, Wuxi, China

27
Link A, Pardo IL, Porr B, Franke T. AI based image analysis of red blood cells in oscillating microchannels. RSC Adv 2023; 13:28576-28582. [PMID: 37780736 PMCID: PMC10537593 DOI: 10.1039/d3ra04644c] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 08/29/2023] [Indexed: 10/03/2023] Open
Abstract
The flow dynamics of red blood cells in vivo in blood capillaries and in vitro in microfluidic channels is complex. Cells can adopt different shapes, such as discoid, parachute, and slipper-like shapes, and various intermediate states, depending on flow conditions and their viscoelastic properties. We use artificial-intelligence-based analysis of red blood cells (RBCs) in an oscillating microchannel to distinguish healthy red blood cells from red blood cells treated with formaldehyde to chemically modify their viscoelastic behavior. We used TensorFlow to train and validate a deep learning model and achieved a testing accuracy of over 97%. This method is a first step toward non-invasive, label-free characterization of diseased red blood cells and will be useful for diagnostic purposes in haematology labs. It provides quantitative data on the number of affected cells based on single-cell classification.
Affiliation(s)
- Andreas Link
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, G12 8LT Glasgow, UK
- Irene Luna Pardo
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, G12 8LT Glasgow, UK
- Bernd Porr
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, G12 8LT Glasgow, UK
- Thomas Franke
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, G12 8LT Glasgow, UK

28
Tzaridis S, Friedlander M. Optical coherence tomography: when a picture is worth a million words. J Clin Invest 2023:e174951. [PMID: 37731358 DOI: 10.1172/jci174951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/22/2023] Open
Affiliation(s)
- Simone Tzaridis
- Department of Molecular Medicine, The Scripps Research Institute, La Jolla, California, USA
- The Lowy Medical Research Institute, La Jolla, California, USA
- Department of Ophthalmology, University Hospital of Bonn, Bonn, Germany
- Martin Friedlander
- Department of Molecular Medicine, The Scripps Research Institute, La Jolla, California, USA
- The Lowy Medical Research Institute, La Jolla, California, USA
- Division of Ophthalmology, Scripps Clinic, La Jolla, California, USA

29
Ruamviboonsuk P, Ruamviboonsuk V, Tiwari R. Recent evidence of economic evaluation of artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:449-458. [PMID: 37459289 DOI: 10.1097/icu.0000000000000987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/12/2023]
Abstract
PURPOSE OF REVIEW Health economic evaluation (HEE) is essential for assessing the value of health interventions, including artificial intelligence. Recent approaches, current challenges, and future directions of HEE of artificial intelligence in ophthalmology are reviewed. RECENT FINDINGS The majority of recent HEEs of artificial intelligence in ophthalmology were for diabetic retinopathy screening. Two models, one conducted in the rural USA (5-year period) and another in China (35-year period), found artificial intelligence screening to be more cost-effective than no screening for diabetic retinopathy. Two additional models, which compared artificial intelligence with human screeners in Brazil and Thailand over the lifetime of patients, found artificial intelligence to be more expensive from a healthcare system perspective. In the Thailand analysis, however, artificial intelligence was less expensive when opportunity loss from blindness was included. An artificial intelligence model for screening retinopathy of prematurity was cost-effective in the USA. A model for screening age-related macular degeneration in Japan and another for primary angle closure in China did not find artificial intelligence to be cost-effective compared with no screening. The costs of artificial intelligence varied widely across these models. SUMMARY As in other medical fields, there is limited evidence for assessing the value of artificial intelligence in ophthalmology, and more appropriate HEE models are needed.
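The cost-effectiveness comparisons summarized above conventionally hinge on the incremental cost-effectiveness ratio (ICER). A minimal sketch of that calculation (the function and the numbers are illustrative, not taken from any of the reviewed models):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g. per QALY gained) of the new strategy
    relative to its comparator."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: AI screening costs 120 per patient versus 40 for no
# screening, and yields 1.50 versus 1.25 QALYs.
ratio = icer(120, 40, 1.50, 1.25)  # cost per QALY gained
```

A strategy is typically judged cost-effective when this ratio falls below the decision-maker's willingness-to-pay threshold.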
Affiliation(s)
- Paisan Ruamviboonsuk
- Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University

30
Nath S, Rahimy E, Kras A, Korot E. Toward safer ophthalmic artificial intelligence via distributed validation on real-world data. Curr Opin Ophthalmol 2023; 34:459-463. [PMID: 37459329 DOI: 10.1097/icu.0000000000000986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/12/2023]
Abstract
PURPOSE OF REVIEW The current article provides an overview of present approaches to algorithm validation, which are variable and largely self-determined, as well as solutions to address their inadequacies. RECENT FINDINGS In the last decade alone, numerous machine learning applications have been proposed for ophthalmic diagnosis or disease monitoring. Remarkably, fewer than 15 of these have received regulatory approval for implementation into clinical practice. Although there exists a vast pool of structured and relatively clean datasets from which to develop and test algorithms in the computational 'laboratory', real-world validation remains key to safe, equitable, and clinically reliable implementation. Bottlenecks in the validation process stem from a striking paucity of regulatory guidance on safety and performance thresholds, a lack of oversight of critical postdeployment monitoring and context-specific recalibration, and the inherent complexities of heterogeneous disease states and clinical environments. Implementation of secure, third-party, unbiased, pre- and postdeployment validation offers the potential to address existing shortfalls in the validation process. SUMMARY Given the criticality of validation to the algorithm pipeline, there is an urgent need for developers, machine learning researchers, and end-user clinicians to devise a consensus approach, allowing for the rapid introduction of safe, equitable, and clinically valid machine learning implementations.
Affiliation(s)
- Siddharth Nath
- Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
- Ehsan Rahimy
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Ashley Kras
- Save Sight Institute, Sydney University, Sydney, Australia
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Retina Specialists of Michigan, Grand Rapids, Michigan, USA

31
Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, Scales N, Tanwani A, Cole-Lewis H, Pfohl S, Payne P, Seneviratne M, Gamble P, Kelly C, Babiker A, Schärli N, Chowdhery A, Mansfield P, Demner-Fushman D, Agüera Y Arcas B, Webster D, Corrado GS, Matias Y, Chou K, Gottweis J, Tomasev N, Liu Y, Rajkomar A, Barral J, Semturs C, Karthikesalingam A, Natarajan V. Large language models encode clinical knowledge. Nature 2023; 620:172-180. [PMID: 37438534 PMCID: PMC10396962 DOI: 10.1038/s41586-023-06291-2] [Citation(s) in RCA: 298] [Impact Index Per Article: 298.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 06/05/2023] [Indexed: 07/14/2023]
Abstract
Large language models (LLMs) have demonstrated impressive capabilities, but the bar for clinical applications is high. Attempts to assess the clinical knowledge of models typically rely on automated evaluations based on limited benchmarks. Here, to address these limitations, we present MultiMedQA, a benchmark combining six existing medical question answering datasets spanning professional medicine, research and consumer queries and a new dataset of medical questions searched online, HealthSearchQA. We propose a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible harm and bias. In addition, we evaluate Pathways Language Model (PaLM, a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA and Measuring Massive Multitask Language Understanding (MMLU) clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam-style questions), surpassing the prior state of the art by more than 17%. However, human evaluation reveals key gaps. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, knowledge recall and reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
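The multiple-choice results quoted above (e.g. 67.6% on MedQA) are exact-match accuracies over benchmark items. A minimal sketch of that scoring rule (illustrative data, not the MultiMedQA evaluation harness):

```python
def multiple_choice_accuracy(predictions, answers):
    """Fraction of questions where the model's chosen option letter
    matches the keyed answer."""
    assert len(predictions) == len(answers)
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical model choices versus answer key:
score = multiple_choice_accuracy(["A", "C", "B", "D"], ["A", "C", "D", "D"])
```

The paper's central point is that such automated scores miss the failure modes that only axis-by-axis human evaluation reveals.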
Affiliation(s)
- Tao Tu
- Google Research, Mountain View, CA, USA
- Jason Wei
- Google Research, Mountain View, CA, USA
- Yun Liu
- Google Research, Mountain View, CA, USA

32
Hanson RLW, Airody A, Sivaprasad S, Gale RP. Optical coherence tomography imaging biomarkers associated with neovascular age-related macular degeneration: a systematic review. Eye (Lond) 2023; 37:2438-2453. [PMID: 36526863 PMCID: PMC9871156 DOI: 10.1038/s41433-022-02360-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 10/13/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022] Open
Abstract
The aim of this systematic literature review is twofold: (1) to detail the impact of retinal biomarkers identifiable via optical coherence tomography (OCT) on disease progression and response to treatment in neovascular age-related macular degeneration (nAMD) and (2) to establish which biomarkers are currently identifiable by artificial intelligence (AI) models and how this technology is being utilised. Following the PRISMA guidelines, PubMed was searched for peer-reviewed publications dated between January 2016 and January 2022. POPULATION Patients diagnosed with nAMD with OCT imaging. SETTINGS Comparable settings to NHS hospitals. STUDY DESIGNS Randomised controlled trials, prospective/retrospective cohort studies and review articles. Of 228 articles, 130 were full-text reviewed; 50 were removed for falling outside the scope of this review and 10 were added from the authors' inventory, resulting in the inclusion of 90 articles. Of the 9 biomarkers identified (intraretinal fluid (IRF), subretinal fluid, pigment epithelial detachment, subretinal hyperreflective material (SHRM), retinal pigment epithelium (RPE) atrophy, drusen, outer retinal tubulation (ORT), hyperreflective foci (HF) and retinal thickness), 5 are considered pertinent to nAMD disease progression: IRF, SHRM, drusen, ORT and HF. A number of these biomarkers can be classified using current AI models. Significant retinal biomarkers pertinent to disease activity and progression in nAMD are identifiable via OCT, with IRF the most important in terms of its impact on visual outcome. Incorporating AI into ophthalmology practice is a promising advancement towards automated and reproducible analyses of OCT data, with the ability to diagnose disease and predict future disease conversion. SYSTEMATIC REVIEW REGISTRATION This review has been registered with PROSPERO (registration ID: CRD42021233200).
Affiliation(s)
- Rachel L W Hanson
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Archana Airody
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Sobha Sivaprasad
- Moorfields National Institute of Health Research, Biomedical Research Centre, London, UK
- Richard P Gale
- Academic Unit of Ophthalmology, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Hull York Medical School, University of York, York, UK
- York Biomedical Research Institute, University of York, York, UK

33
Muntean GA, Marginean A, Groza A, Damian I, Roman SA, Hapca MC, Muntean MV, Nicoară SD. The Predictive Capabilities of Artificial Intelligence-Based OCT Analysis for Age-Related Macular Degeneration Progression-A Systematic Review. Diagnostics (Basel) 2023; 13:2464. [PMID: 37510207 PMCID: PMC10378064 DOI: 10.3390/diagnostics13142464] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/16/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
The era of artificial intelligence (AI) has revolutionized our daily lives, and AI has become a powerful force that is gradually transforming the field of medicine. Ophthalmology sits at the forefront of this transformation thanks to the effortless acquisition of an abundance of imaging modalities. There has been tremendous work in the field of AI for retinal diseases, with age-related macular degeneration being at the top of the most studied conditions. The purpose of the current systematic review was to identify and evaluate, in terms of strengths and limitations, the articles that apply AI to optical coherence tomography (OCT) images in order to predict the future evolution of age-related macular degeneration (AMD) during its natural history and after treatment in terms of OCT morphological structure and visual function. After a thorough search through seven databases up to 1 January 2022 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 1800 records were identified. After screening, 48 articles were selected for full-text retrieval and 19 articles were finally included. Of these 19 articles, 4 concentrated on predicting the anti-VEGF requirement in neovascular AMD (nAMD), 4 focused on predicting anti-VEGF efficacy in nAMD patients, 3 predicted the conversion from early or intermediate AMD (iAMD) to nAMD, 1 predicted the conversion from iAMD to geographic atrophy (GA), 1 predicted the conversion from iAMD to both nAMD and GA, 3 predicted the future growth of GA, and 3 predicted the future outcome for visual acuity (VA) after anti-VEGF treatment in nAMD patients. Since the use of AI methods to predict future changes in AMD is only in its initial phase, a systematic review provides the opportunity to set the context of previous work in this area and can serve as a starting point for future research.
Affiliation(s)
- George Adrian Muntean
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Anca Marginean
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adrian Groza
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Ioana Damian
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Sara Alexia Roman
- Faculty of Medicine, "Iuliu Hatieganu" University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Mădălina Claudia Hapca
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Maximilian Vlad Muntean
- Plastic Surgery Department, "Prof. Dr. I. Chiricuta" Institute of Oncology, 400015 Cluj-Napoca, Romania
- Simona Delia Nicoară
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania

34
Khavandi S, Lim E, Higham A, de Pennington N, Bindra M, Maling S, Adams M, Mole G. User-acceptability of an automated telephone call for post-operative follow-up after uncomplicated cataract surgery. Eye (Lond) 2023; 37:2069-2076. [PMID: 36274084 PMCID: PMC10333311 DOI: 10.1038/s41433-022-02289-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Revised: 09/18/2022] [Accepted: 10/10/2022] [Indexed: 11/08/2022] Open
Abstract
BACKGROUND Innovative technology is recommended to address the current capacity challenges facing the NHS. This study evaluates the patient acceptability of automated telephone follow-up after routine cataract surgery using Dora (Ufonia Limited, Oxford, United Kingdom), which to our knowledge is the first AI-powered clinical assistant to be used in the NHS. Dora has a natural-language phone conversation with patients about their symptoms after cataract surgery. METHODS This is a prospective mixed-methods cohort study conducted at Buckinghamshire Healthcare NHS Foundation Trust. All patients who were followed up using Dora were asked to give a Net Promoter Score (NPS), and 24 patients were randomly selected to complete the validated Telephone Usability Questionnaire (TUQ) as well as extended semi-structured interviews that underwent thematic analysis. RESULTS A total of 170 autonomous calls were completed. The median NPS was 9 out of 10. The TUQ (scored out of 5) showed high rates of acceptability, with an overall mean score of 4.0. Simplicity, time saving, and ease of use scored the highest, with a median of 5, whilst 'speaking to Dora feels the same as speaking to a clinician' scored a median of 3. The main themes extracted from the qualitative data were 'I can see why you're doing it', 'It went quite well actually', and 'I just trust human beings I suppose'. CONCLUSION We found high levels of patient acceptability when using Dora across three acceptability measures. Dora provides a potential solution to reduce pressure on hospital capacity whilst also providing a convenient service for patients.
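The abstract reports the median raw score patients gave; the conventional Net Promoter Score aggregate is instead the percentage of promoters minus the percentage of detractors. A minimal sketch of that aggregate (the responses shown are hypothetical, not the study's data):

```python
def net_promoter_score(scores):
    """Conventional NPS aggregate on 0-10 responses: percentage of
    promoters (9-10) minus percentage of detractors (0-6),
    giving a value on a -100..100 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

nps = net_promoter_score([10, 9, 8, 6, 10])  # hypothetical responses
```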
Affiliation(s)
- Sarah Khavandi
- Imperial College School of Medicine, Imperial College London, London, UK
- Ufonia Limited, 3-5 Hythe Bridge Street, Oxford, UK
- Ernest Lim
- Ufonia Limited, 3-5 Hythe Bridge Street, Oxford, UK
- Imperial College Healthcare NHS Trust, London, UK
- Aisling Higham
- Oxford University Hospital NHS Foundation Trust, Oxford, UK
- Mandeep Bindra
- Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK
- Sarah Maling
- Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK
- Mike Adams
- Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK
- Royal College of Ophthalmology, London, UK
- United Kingdom & Ireland Society of Cataract & Refractive Surgeons, Wirral, UK
- Guy Mole
- Ufonia Limited, 3-5 Hythe Bridge Street, Oxford, UK
- Oxford University Hospital NHS Foundation Trust, Oxford, UK

35
Ren X, Feng W, Ran R, Gao Y, Lin Y, Fu X, Tao Y, Wang T, Wang B, Ju L, Chen Y, He L, Xi W, Liu X, Ge Z, Zhang M. Artificial intelligence to distinguish retinal vein occlusion patients using color fundus photographs. Eye (Lond) 2023; 37:2026-2032. [PMID: 36302974 PMCID: PMC10333217 DOI: 10.1038/s41433-022-02239-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 08/04/2022] [Accepted: 09/02/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Our aim is to establish an AI model for distinguishing color fundus photographs (CFP) of RVO patients from those of normal individuals. METHODS The training dataset included 2013 CFP from fellow eyes of RVO patients and 8536 age- and gender-matched normal CFP. Model performance was assessed in two independent testing datasets. We evaluated the performance of the AI model using the area under the receiver operating characteristic curve (AUC), accuracy, precision, specificity, sensitivity, and confusion matrices. We further explained the probable clinical relevance of the AI by extracting and comparing features of the retinal images. RESULTS In the training dataset, our model achieved an average AUC of 0.9866 (95% CI: 0.9805-0.9918), accuracy of 0.9534 (95% CI: 0.9421-0.9639), precision of 0.9123 (95% CI: 0.8784-0.9453), specificity of 0.9810 (95% CI: 0.9729-0.9884), and sensitivity of 0.8367 (95% CI: 0.7953-0.8756) for identifying fundus images of RVO patients. In independent external dataset 1, the model achieved an AUC of 0.8102 (95% CI: 0.7979-0.8226), accuracy of 0.7752 (95% CI: 0.7633-0.7875), precision of 0.7041 (95% CI: 0.6873-0.7211), specificity of 0.6499 (95% CI: 0.6305-0.6679), and sensitivity of 0.9124 (95% CI: 0.9004-0.9241) for the RVO group. There were significant differences in retinal arteriovenous ratio, optic cup to optic disc ratio, and optic disc tilt angle (p = 0.001, p = 0.0001, and p = 0.0001, respectively) between the two groups in the training dataset. CONCLUSION We trained an AI model to classify color fundus photographs of RVO patients with stable performance in both internal and external datasets. This may be of great importance for risk prediction in patients with retinal vein occlusion.
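All of the headline metrics reported above other than AUC derive directly from confusion-matrix counts. A minimal sketch of those formulas (the counts are illustrative, not the study's):

```python
def classification_metrics(tp, fp, tn, fn):
    """The headline metrics the study reports, computed from
    confusion-matrix counts (true/false positives and negatives)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

m = classification_metrics(tp=60, fp=20, tn=100, fn=20)  # illustrative counts
```

The external-dataset pattern in the abstract (high sensitivity, lower specificity) corresponds to a shift in these counts towards more false positives.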
Affiliation(s)
- Xiang Ren
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Wei Feng
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Ruijin Ran
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Minda Hospital of Hubei Minzu University, Enshi, China
- Yunxia Gao
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yu Lin
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Xiangyu Fu
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yunhan Tao
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Ting Wang
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Bin Wang
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Lie Ju
- Beijing Airdoc Technology Co Ltd, Beijing, China
- ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Yuzhong Chen
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Lanqing He
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Wu Xi
- Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Xiaorong Liu
- Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Zongyuan Ge
- ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- eResearch Centre, Monash University, Melbourne, VIC, Australia
- Ming Zhang
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China

36
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. Despite these promising results, however, very few AI systems have been deployed in real-world clinical settings, calling into question the true value of these systems. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China

37
Zhao PY, Bommakanti N, Yu G, Aaberg MT, Patel TP, Paulus YM. Deep learning for automated detection of neovascular leakage on ultra-widefield fluorescein angiography in diabetic retinopathy. Sci Rep 2023; 13:9165. [PMID: 37280345 DOI: 10.1038/s41598-023-36327-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 06/01/2023] [Indexed: 06/08/2023] Open
Abstract
Diabetic retinopathy is a leading cause of blindness in working-age adults worldwide. Neovascular leakage on fluorescein angiography indicates progression to the proliferative stage of diabetic retinopathy, which is an important distinction that requires timely ophthalmic intervention with laser or intravitreal injection treatment to reduce the risk of severe, permanent vision loss. In this study, we developed a deep learning algorithm to detect neovascular leakage on ultra-widefield fluorescein angiography images obtained from patients with diabetic retinopathy. The algorithm, an ensemble of three convolutional neural networks, was able to accurately classify neovascular leakage and distinguish this disease marker from other angiographic disease features. With additional real-world validation and testing, our algorithm could facilitate identification of neovascular leakage in the clinical setting, allowing timely intervention to reduce the burden of blinding diabetic eye disease.
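The abstract describes an ensemble of three convolutional neural networks. A common way to combine such members, assumed here purely for illustration (the combination rule is not specified in the abstract), is soft voting: average the class probabilities and take the argmax. A minimal sketch:

```python
def soft_vote(probability_sets):
    """Average the class-probability outputs of several models and
    pick the class with the highest mean probability (soft voting)."""
    n_models = len(probability_sets)
    mean = [sum(col) / n_models for col in zip(*probability_sets)]
    best_class = max(range(len(mean)), key=mean.__getitem__)
    return best_class, mean

# Hypothetical outputs of three CNNs for classes [no leakage, leakage]:
label, mean_probs = soft_vote([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]])
```

Averaging tends to cancel out the uncorrelated errors of individual members, which is the usual motivation for ensembling classifiers.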
Affiliation(s)
- Peter Y Zhao
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Nikhil Bommakanti
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Gina Yu
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Michael T Aaberg
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Tapan P Patel
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Yannis M Paulus
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA

38
Rivail A, Vogl WD, Riedl S, Grechenig C, Coulibaly LM, Reiter GS, Guymer RH, Wu Z, Schmidt-Erfurth U, Bogunović H. Deep survival modeling of longitudinal retinal OCT volumes for predicting the onset of atrophy in patients with intermediate AMD. BIOMEDICAL OPTICS EXPRESS 2023; 14:2449-2464. [PMID: 37342683 PMCID: PMC10278641 DOI: 10.1364/boe.487206] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 03/30/2023] [Accepted: 04/10/2023] [Indexed: 06/23/2023]
Abstract
In patients with age-related macular degeneration (AMD), the risk of progression to late stages is highly heterogeneous, and the prognostic imaging biomarkers remain unclear. We propose a deep survival model to predict progression towards the late atrophic stage of AMD. The model combines the advantages of survival modelling, accounting for time-to-event and censoring, with the advantages of deep learning, generating predictions from raw 3D OCT scans without the need to extract a predefined set of quantitative biomarkers. In an extensive set of evaluations based on two large longitudinal datasets, with 231 eyes from 121 patients for internal evaluation and 280 eyes from 140 patients for external evaluation, we demonstrate that this model improves risk estimation performance over standard deep learning classification models.
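Survival models that handle censoring, as described above, are typically evaluated with Harrell's concordance index, which only compares pairs where the earlier time corresponds to an observed event rather than a censored follow-up. A minimal O(n²) sketch of that metric on illustrative data (not the authors' evaluation code):

```python
def concordance_index(times, events, risks):
    """Harrell's C: fraction of comparable pairs where the model assigns
    the higher risk score to the patient who progresses earlier.
    A pair (i, j) is comparable when times[i] < times[j] and the
    earlier observation is an event (events[i] == 1), not censored."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Three eyes: atrophy onset at t=1 and t=2, one censored at t=3.
c = concordance_index([1, 2, 3], [1, 1, 0], [0.9, 0.5, 0.1])
```

A value of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which is why the index is a natural counterpart to AUC for time-to-event predictions.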
Affiliation(s)
- Antoine Rivail
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Wolf-Dieter Vogl
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Christoph Grechenig
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Leonard M. Coulibaly
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S. Reiter
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Robyn H. Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunović
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria

39
Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, Yokoiwa R, Sato S, Hanyuda A, Ogawa Y, Hirayama M, Tsubota K, Sato Y, Shimazaki J, Negishi K. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Sci Rep 2023; 13:5822. [PMID: 37037877 PMCID: PMC10085985 DOI: 10.1038/s41598-023-33021-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 04/06/2023] [Indexed: 04/12/2023] Open
Abstract
The use of artificial intelligence (AI) in the diagnosis of dry eye disease (DED) remains limited due to the lack of standardized image formats and analysis models. To overcome these issues, we used the Smart Eye Camera (SEC), a video-recordable slit-lamp device, and collected videos of the anterior segment of the eye. This study aimed to evaluate the accuracy of the AI algorithm in estimating the tear film breakup time and to apply this model to the diagnosis of DED according to the Asia Dry Eye Society (ADES) DED diagnostic criteria. Using retrospectively collected DED videos of 158 eyes from 79 patients, 22,172 frames were annotated by a DED specialist to label whether or not the frame showed breakup. The AI algorithm was developed using the training dataset and machine learning. The DED criteria of the ADES were used to determine the diagnostic performance. The accuracy of tear film breakup time estimation was 0.789 (95% confidence interval (CI) 0.769-0.809), and the area under the receiver operating characteristic curve of this AI model was 0.877 (95% CI 0.861-0.893). The sensitivity and specificity of this AI model for the diagnosis of DED were 0.778 (95% CI 0.572-0.912) and 0.857 (95% CI 0.564-0.866), respectively. We successfully developed a novel AI-based diagnostic model for DED. Our diagnostic model has the potential to enable ophthalmology examination outside hospitals and clinics.
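Once every frame carries a breakup/no-breakup label, as with the 22,172 annotated frames described above, estimating the tear film breakup time reduces to locating the first positive frame after eye opening. A minimal sketch (the function name and the 30 fps frame rate are illustrative assumptions, not details from the paper):

```python
def tear_breakup_time(frame_labels, fps=30.0):
    """Estimate the tear film breakup time in seconds as the time of
    the first frame labelled as showing breakup; None if no breakup
    occurs within the recording."""
    for index, has_breakup in enumerate(frame_labels):
        if has_breakup:
            return index / fps
    return None

# Hypothetical per-frame labels: breakup first appears at frame 60.
tbut = tear_breakup_time([0] * 60 + [1], fps=30.0)
```

In practice the per-frame labels would come from the trained classifier rather than manual annotation, which is what couples the frame-level model to the clinical TBUT criterion.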
Collapse
Affiliation(s)
- Eisuke Shimizu
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan.
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan.
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan.
| | - Toshiki Ishikawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Makoto Tanji
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Naomichi Agata
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Shintaro Nakayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Yo Nakahara
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Ryota Yokoiwa
- OUI Inc., DF Building 510, 2-2-8 Minami-Aoyama, Minato-ku, Tokyo, 107-0062, Japan
| | - Shinri Sato
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Yokohama Keiai Eye Clinic, Courtley House 2F, 1-11-17 Wada, Hodogaya-ku, Kanagawa, 240-0065, Japan
| | - Akiko Hanyuda
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Yoko Ogawa
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Masatoshi Hirayama
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Kazuo Tsubota
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Yasunori Sato
- Department of Preventive Medicine and Public Health, School of Medicine, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Jun Shimazaki
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13 Sugano, Ichikawa-shi, Chiba, 272-8513, Japan
| | - Kazuno Negishi
- Department of Ophthalmology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| |
Collapse
|
40
|
Ayhan MS, Faber H, Kühlewein L, Inhoffen W, Aliyeva G, Ziemssen F, Berens P. Multitask Learning for Activity Detection in Neovascular Age-Related Macular Degeneration. Transl Vis Sci Technol 2023; 12:12. [PMID: 37052912 PMCID: PMC10103736 DOI: 10.1167/tvst.12.4.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/14/2023] Open
Abstract
Purpose The purpose of this study was to compare the performance and explainability of a multitask convolutional deep neural network with single-task networks for activity detection in neovascular age-related macular degeneration (nAMD). Methods From 70 patients (46 women and 24 men) who attended the University Eye Hospital Tübingen, 3762 optical coherence tomography B-scans (right eye = 2011 and left eye = 1751) were acquired with Heidelberg Spectralis (Heidelberg, Germany). B-scans were graded by a retina specialist and an ophthalmology resident, and then used to develop a multitask deep learning model to predict disease activity in nAMD along with the presence of sub- and intraretinal fluid. We used performance metrics for comparison to single-task networks and visualized the deep neural network (DNN)-based decisions with t-distributed stochastic neighbor embedding and clinically validated saliency mapping techniques. Results The multitask model surpassed single-task networks in accuracy for activity detection (94.2% vs. 91.2%). The area under the receiver operating characteristic curve was 0.984 for the multitask model versus 0.974 for the single-task model. Furthermore, compared to single-task networks, visualizations via t-distributed stochastic neighbor embedding and saliency maps highlighted that the multitask network's decisions for activity detection in nAMD were highly consistent with the presence of both sub- and intraretinal fluid. Conclusions Multitask learning increases the performance of neural networks for predicting disease activity, while providing clinicians with easily accessible decision control that resembles human reasoning.
Translational Relevance By improving nAMD activity detection performance and transparency of automated decisions, multitask DNNs can support the translation of machine learning research into clinical decision support systems for nAMD activity detection.
Collapse
Affiliation(s)
- Murat Seçkin Ayhan
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
| | - Hanna Faber
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- University Eye Clinic, University of Tübingen, Tübingen, Germany
| | - Laura Kühlewein
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- University Eye Clinic, University of Tübingen, Tübingen, Germany
| | - Werner Inhoffen
- University Eye Clinic, University of Tübingen, Tübingen, Germany
| | - Gulnar Aliyeva
- University Eye Clinic, University of Tübingen, Tübingen, Germany
| | - Focke Ziemssen
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- University Eye Clinic, University of Leipzig, Leipzig, Germany
| | - Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, Tübingen, Germany
- Hertie Institute for AI in Brain Health, University of Tübingen, Tübingen, Germany
| |
Collapse
|
41
|
Zhang Y, Huang K, Li M, Yuan S, Chen Q. Learn Single-horizon Disease Evolution for Predictive Generation of Post-therapeutic Neovascular Age-related Macular Degeneration. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107364. [PMID: 36716636 DOI: 10.1016/j.cmpb.2023.107364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 01/16/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Most existing disease prediction methods in the field of medical image processing fall into two classes, namely image-to-category predictions and image-to-parameter predictions. Few works have focused on image-to-image predictions. Unlike multi-horizon predictions in other fields, ophthalmologists prefer single-horizon predictions because of the low tolerance for predictive risk. METHODS We propose a single-horizon disease evolution network (SHENet) to predictively generate post-therapeutic SD-OCT images from pre-therapeutic SD-OCT images of eyes with neovascular age-related macular degeneration (nAMD). In SHENet, a feature encoder converts the input SD-OCT images to deep features; a graph evolution module then predicts the process of disease evolution in a high-dimensional latent space and outputs the predicted deep features; and lastly, a feature decoder recovers the predicted deep features into SD-OCT images. We further propose an evolution reinforcement module to ensure effective disease evolution learning and obtain realistic SD-OCT images through adversarial training. RESULTS SHENet is validated on 383 SD-OCT cubes from 22 nAMD patients under three well-designed schemes (P-0, P-1 and P-M) with quantitative and qualitative evaluations. Three metrics (PSNR, SSIM, 1-LPIPS) are used for the quantitative evaluations. Compared with other generative methods, the SD-OCT images generated by SHENet have the highest image quality by PSNR (P-0: 23.659, P-1: 23.875, P-M: 24.198). Moreover, SHENet achieves the best structure protection by SSIM (P-0: 0.326, P-1: 0.337, P-M: 0.349) and the best content prediction by 1-LPIPS (P-0: 0.609, P-1: 0.626, P-M: 0.642). Qualitative evaluations also demonstrate that SHENet has a better visual effect than the other methods.
CONCLUSIONS SHENet can generate post-therapeutic SD-OCT images with both high prediction performance and good image quality, which has great potential to help ophthalmologists forecast the therapeutic effect of nAMD.
Collapse
Affiliation(s)
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Kun Huang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| | - Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, 210094, China.
| | - Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China.
| |
Collapse
|
42
|
Moradi M, Chen Y, Du X, Seddon JM. Deep ensemble learning for automated non-advanced AMD classification using optimized retinal layer segmentation and SD-OCT scans. Comput Biol Med 2023; 154:106512. [PMID: 36701964 DOI: 10.1016/j.compbiomed.2022.106512] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 11/30/2022] [Accepted: 12/31/2022] [Indexed: 01/11/2023]
Abstract
BACKGROUND Accurate retinal layer segmentation in optical coherence tomography (OCT) images is crucial for quantitatively analyzing age-related macular degeneration (AMD) and monitoring its progression. However, previous retinal segmentation models depend on experienced experts, and manually annotating retinal layers is time-consuming. At the same time, the accuracy of AMD diagnosis is directly related to the segmentation model's performance. To address these issues, we aimed to improve AMD detection using optimized retinal layer segmentation and deep ensemble learning. METHOD We integrated a graph-cut algorithm with a cubic spline to automatically annotate 11 retinal boundaries. The refined images were fed into a deep ensemble mechanism that combined a Bagged Tree and end-to-end deep learning classifiers. We tested the developed deep ensemble model on internal and external datasets. RESULTS The total error rate of our segmentation model using the boundary refinement approach was significantly lower than that of OCT Explorer segmentations (1.7% vs. 7.8%, p-value = 0.03). We used the refinement approach to quantify 169 imaging features from Zeiss SD-OCT volume scans. The presence of drusen and the thicknesses of the total retina, neurosensory retina, and ellipsoid zone to inner-outer segment (EZ-ISOS) contributed more to AMD classification than other features. The developed ensemble learning model achieved higher diagnostic accuracy in a shorter time than two human graders. The area under the curve (AUC) for normal vs. early AMD was 99.4%. CONCLUSION Testing results showed that the developed framework is repeatable and effective as a potentially valuable tool in retinal imaging research.
Collapse
Affiliation(s)
- Mousa Moradi
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
| | - Yu Chen
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States.
| | - Xian Du
- Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA, United States.
| | - Johanna M Seddon
- Department of Ophthalmology & Visual Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States.
| |
Collapse
|
43
|
Grote T, Berens P. Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice. THE JOURNAL OF MEDICINE AND PHILOSOPHY 2023; 48:84-97. [PMID: 36630292 DOI: 10.1093/jmp/jhac034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. In doing so, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) and highlight their relevance for medical diagnostics. Among the problems we inspect are the theoretical foundations of deep learning (which are not yet adequately understood), the opacity of algorithmic decisions, and the vulnerabilities of machine learning models, as well as concerns regarding the quality of medical data used to train the models. Building on this, we discuss different desiderata for an uncertainty amelioration strategy that ensures that the integration of machine learning into clinical settings proves to be medically beneficial in a meaningful way.
Collapse
|
44
|
Automated large-scale prediction of exudative AMD progression using machine-read OCT biomarkers. PLOS DIGITAL HEALTH 2023; 2:e0000106. [PMID: 36812608 PMCID: PMC9931262 DOI: 10.1371/journal.pdig.0000106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 01/14/2023] [Indexed: 02/17/2023]
Abstract
Age-related macular degeneration (AMD) is a major cause of irreversible vision loss in individuals over 55 years old in the United States. One of the late-stage manifestations of AMD, and a major cause of vision loss, is the development of exudative macular neovascularization (MNV). Optical coherence tomography (OCT) is the gold standard for identifying fluid at different levels within the retina. The presence of fluid is considered the hallmark of disease activity. Anti-vascular endothelial growth factor (anti-VEGF) injections can be used to treat exudative MNV. However, given the limitations of anti-VEGF treatment, such as the burdensome need for frequent visits and repeated injections to sustain efficacy, the limited durability of the treatment, and poor or no response, there is great interest in detecting early biomarkers associated with a higher risk of AMD progression to exudative forms in order to optimize the design of early intervention clinical trials. The annotation of structural biomarkers on OCT B-scans is a laborious, complex, and time-consuming process, and discrepancies between human graders can introduce variability into this assessment. To address this issue, a deep-learning model (SLIVER-net) was proposed, which could identify AMD biomarkers on structural OCT volumes with high precision and without human supervision. However, that validation was performed on a small dataset, and the true predictive power of these detected biomarkers in the context of a large cohort has not been evaluated. In this retrospective cohort study, we perform the largest-scale validation of these biomarkers to date. We also assess how these features, combined with other EHR data (demographics, comorbidities, etc.), affect and/or improve prediction performance relative to known factors.
Our hypothesis is that these biomarkers can be identified by a machine learning algorithm without human supervision in a way that preserves their predictive nature. We test this hypothesis by building several machine learning models that utilize these machine-read biomarkers and assessing their added predictive power. We found not only that the machine-read OCT B-scan biomarkers are predictive of AMD progression, but also that our proposed combined OCT and EHR data-based algorithm outperforms the state-of-the-art solution on clinically relevant metrics and provides actionable information with the potential to improve patient care. In addition, it provides a framework for automated large-scale processing of OCT volumes, making it possible to analyze vast archives without human supervision.
Collapse
|
45
|
Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK. [Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension]. Rev Panam Salud Publica 2023; 48:e13. [PMID: 38352035 PMCID: PMC10863743 DOI: 10.26633/rpsp.2024.13] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 07/23/2020] [Indexed: 02/16/2024] Open
Abstract
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
Collapse
Affiliation(s)
- Xiaoxuan Liu
- Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- Health Data Research UK, London, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
| | - Samantha Cruz Rivera
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
| | - David Moher
- Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada.
- School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada.
| | - Melanie J. Calvert
- Health Data Research UK, London, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- National Institute of Health Research Birmingham Biomedical Research Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- National Institute of Health Research Applied Research Collaborative West Midlands, Coventry, UK.
- National Institute of Health Research Surgical Reconstruction and Microbiology Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
| | - Alastair K. Denniston
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- Health Data Research UK, London, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK.
| | | |
Collapse
|
46
|
Taribagil P, Hogg HDJ, Balaskas K, Keane PA. Integrating artificial intelligence into an ophthalmologist’s workflow: obstacles and opportunities. EXPERT REVIEW OF OPHTHALMOLOGY 2023. [DOI: 10.1080/17469899.2023.2175672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Affiliation(s)
- Priyal Taribagil
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - HD Jeffry Hogg
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Population Health Science, Population Health Science Institute, Newcastle University, Newcastle upon Tyne, UK
- Department of Ophthalmology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Freeman Road, Newcastle upon Tyne, UK
| | - Konstantinos Balaskas
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
| | - Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
| |
Collapse
|
47
|
Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ. [Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension]. Rev Panam Salud Publica 2023; 48:e12. [PMID: 38304411 PMCID: PMC10832304 DOI: 10.26633/rpsp.2024.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 07/23/2020] [Indexed: 02/03/2024] Open
Abstract
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions.
Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias for a planned clinical trial.
Collapse
Affiliation(s)
- Samantha Cruz Rivera
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
| | - Xiaoxuan Liu
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- Health Data Research UK, London, UK.
- Moorfields Eye Hospital NHS Foundation Trust, London, UK.
| | - An-Wen Chan
- Department of Medicine, Women’s College Research Institute, Women’s College Hospital, University of Toronto, Ontario, Canada.
| | - Alastair K. Denniston
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- Health Data Research UK, London, UK.
- National Institute of Health Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and University College London, Institute of Ophthalmology, London, UK.
| | - Melanie J. Calvert
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK.
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK.
- Health Data Research UK, London, UK.
- National Institute of Health Research Birmingham Biomedical Research Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- National Institute of Health Research Applied Research Collaborative West Midlands, Coventry, UK.
- National Institute of Health Research Surgical Reconstruction and Microbiology Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
| | | |
Collapse
|
48
|
Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions. Diagnostics (Basel) 2023; 13:diagnostics13020326. [PMID: 36673135 PMCID: PMC9857993 DOI: 10.3390/diagnostics13020326] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 01/11/2023] [Accepted: 01/12/2023] [Indexed: 01/18/2023] Open
Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma by assessment of qualitative and quantitative microvascular changes in the different retinal layers and radial peripapillary layer non-invasively, individually, and efficiently. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied in OCT-A image analysis in recent years and achieved good performance for different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical values for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still in the "proof-of-concept" stage due to several limitations, such as small training sample size, lack of standardized data preprocessing, insufficient testing in external datasets, and absence of standardized results interpretation. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of the clinical deployment, and discuss future research directions.
Collapse
|
49
|
iERM: An Interpretable Deep Learning System to Classify Epiretinal Membrane for Different Optical Coherence Tomography Devices: A Multi-Center Analysis. J Clin Med 2023; 12:jcm12020400. [PMID: 36675327 PMCID: PMC9862104 DOI: 10.3390/jcm12020400] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/29/2022] [Accepted: 01/03/2023] [Indexed: 01/06/2023] Open
Abstract
Background: Epiretinal membranes (ERM) have been found to be common among individuals >50 years old. However, severity grading of ERM based on optical coherence tomography (OCT) images has remained a challenge due to the lack of reliable and interpretable analysis methods. Thus, this study aimed to develop a two-stage deep learning (DL) system named iERM to provide accurate automatic grading of ERM for clinical practice. Methods: iERM was trained on human segmentations of key features to improve classification performance and simultaneously provide interpretability for the classification results. We developed and tested iERM using a total of 4547 OCT B-scans from four different commercial OCT devices, collected from nine international medical centers. Results: The integrated network improved grading performance by 1-5.9% compared with a traditional classification DL model and achieved high accuracy scores of 82.9%, 87.0%, and 79.4% in the internal test dataset and the two external test datasets, respectively. This is comparable to retinal specialists, whose average accuracy scores are 87.8% and 79.4% in the two external test datasets. Conclusion: This study provides a benchmark method for improving the performance and enhancing the interpretability of a traditional DL model by incorporating segmentation based on prior human knowledge. It may have the potential to provide precise guidance for ERM diagnosis and treatment.
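The two-stage design described above (segment key features first, then classify from them) can be illustrated with a toy, numpy-only sketch. The thresholding "segmentation" and the coverage-based grading rule below are illustrative stand-ins for the paper's trained networks, not iERM itself:

```python
import numpy as np

def segment(bscan, thresh=0.6):
    """Stage 1: crude 'segmentation' of bright membrane-like pixels."""
    return (bscan > thresh).astype(np.uint8)

def grade(mask):
    """Stage 2: map mask coverage to a severity grade 0-3 (cut-points illustrative)."""
    coverage = mask.mean()
    return int(np.digitize(coverage, [0.01, 0.05, 0.15]))

# Synthetic B-scan: dark background with one bright membrane-like band
rng = np.random.default_rng(1)
bscan = rng.random((64, 64)) * 0.5
bscan[20:24, 10:50] = 0.9

mask = segment(bscan)
print("severity grade:", grade(mask))
```

The point of the design is that stage 2 only sees the segmented features, so a misclassification can be traced back to a visible mask rather than to an opaque end-to-end network.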
Collapse
|
50
|
Ivanics T, So D, Claasen MPAW, Wallace D, Patel MS, Gravely A, Choi WJ, Shwaartz C, Walker K, Erdman L, Sapisochin G. Machine learning-based mortality prediction models using national liver transplantation registries are feasible but have limited utility across countries. Am J Transplant 2023; 23:64-71. [PMID: 36695623 DOI: 10.1016/j.ajt.2022.12.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 10/04/2022] [Accepted: 10/14/2022] [Indexed: 01/13/2023]
Abstract
Many countries curate national registries of liver transplant (LT) data. These registries are often used to generate predictive models; however, potential performance and transferability of these models remain unclear. We used data from 3 national registries and developed machine learning algorithm (MLA)-based models to predict 90-day post-LT mortality within and across countries. Predictive performance and external validity of each model were assessed. Prospectively collected data of adult patients (aged ≥18 years) who underwent primary LTs between January 2008 and December 2018 from the Canadian Organ Replacement Registry (Canada), National Health Service Blood and Transplantation (United Kingdom), and United Network for Organ Sharing (United States) were used to develop MLA models to predict 90-day post-LT mortality. Models were developed using each registry individually (based on variables inherent to the individual databases) and using all 3 registries combined (variables in common between the registries [harmonized]). The model performance was evaluated using area under the receiver operating characteristic (AUROC) curve. The number of patients included was as follows: Canada, n = 1214; the United Kingdom, n = 5287; and the United States, n = 59,558. The best performing MLA-based model was ridge regression across both individual registries and harmonized data sets. Model performance diminished from individualized to the harmonized registries, especially in Canada (individualized ridge: AUROC, 0.74; range, 0.73-0.74; harmonized: AUROC, 0.68; range, 0.50-0.73) and US (individualized ridge: AUROC, 0.71; range, 0.70-0.71; harmonized: AUROC, 0.66; range, 0.66-0.66) data sets. External model performance across countries was poor overall. MLA-based models yield a fair discriminatory potential when used within individual databases. However, the external validity of these models is poor when applied across countries. 
Standardization of registry-based variables could facilitate the added value of MLA-based models in informing decision making in future LTs.
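The abstract reports ridge regression as the best-performing model, evaluated by AUROC on held-out data. A minimal numpy-only sketch of that evaluation pattern follows; the synthetic covariates and closed-form linear ridge are illustrative stand-ins for the registry variables and the study's actual pipeline:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def auroc(scores, labels):
    """AUROC via the Mann-Whitney rank formulation (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Synthetic stand-in for registry data: three illustrative covariates,
# a binary 90-day outcome generated from a noisy logistic model
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 1.5
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(int)

w = ridge_fit(X[:1400], y[:1400])    # "internal" training split
score = X[1400:] @ w                 # held-out risk scores
a = auroc(score, y[1400:])
print(f"held-out AUROC: {a:.2f}")
```

External validation in the study amounts to repeating the scoring step on another country's registry; the performance drop reported there reflects differences in case mix and variable definitions, not the ridge estimator itself.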
Collapse
Affiliation(s)
- Tommy Ivanics
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada; Department of Surgery, Henry Ford Hospital, Detroit, Michigan, USA; Department of Surgical Sciences, Akademiska Sjukhuset, Uppsala University, Uppsala, Sweden
| | - Delvin So
- The Centre of Computational Medicine, The Hospital for Sick Children, Toronto, Ontario, Canada
| | - Marco P A W Claasen
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada; Department of Surgery, division of HPB & Transplant Surgery, Erasmus MC Transplant Institute, University Medical Centre Rotterdam, Rotterdam, Netherlands
| | - David Wallace
- Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine and Institute of Liver Studies, King's College Hospital NHS Foundation Trust, London, UK
| | - Madhukar S Patel
- Division of Surgical Transplantation, Department of Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
| | - Annabel Gravely
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada
| | - Woo Jin Choi
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada
| | - Chaya Shwaartz
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada
| | - Kate Walker
- Department of Nephrology and Transplantation, Guy's and St Thomas' NHS Foundation Trust, London, UK
| | - Lauren Erdman
- The Centre of Computational Medicine, The Hospital for Sick Children, Toronto, Ontario, Canada
| | - Gonzalo Sapisochin
- Multi-Organ Transplant Program, University Health Network Toronto, Ontario, Canada.
| |
Collapse
|