1. Larsen TJ, Pettersen MB, Nygaard Jensen H, Lynge Pedersen M, Lund-Andersen H, Jørgensen ME, Byberg S. The use of artificial intelligence to assess diabetic eye disease among the Greenlandic population. Int J Circumpolar Health 2024; 83:2314802. PMID: 38359160; PMCID: PMC10877649; DOI: 10.1080/22423982.2024.2314802.
Abstract
Background: Retinal fundus images captured in Greenland are assessed telemedically for diabetic retinopathy (DR) by ophthalmological nurses in Denmark. Applying an AI grading solution in a Greenlandic setting could potentially improve the efficiency and cost-effectiveness of DR screening. Method: We developed an AI model using retinal fundus photographs of persons registered with diabetes in Greenland and Denmark, acquired with an Optos® ultra wide-field scanning laser ophthalmoscope and graded according to the International Clinical Diabetic Retinopathy (ICDR) scale. Using the ResNet50 network, we compared the model's ability to distinguish between images of different ICDR severity levels in a confusion matrix. Results: Comparing images with ICDR level 0 to images with ICDR level 4 resulted in an accuracy of 0.9655, an AUC of 0.9905, and a sensitivity and specificity of 96.6%. Comparing ICDR levels 0, 1 and 2 with ICDR levels 3 and 4, we achieved an accuracy of 0.8077, an AUC of 0.8728, a sensitivity of 84.6% and a specificity of 78.8%. For the other comparisons, performance was modest. Conclusion: We developed an AI model using Greenlandic data to automatically detect DR on Optos retinal fundus images. The sensitivity and specificity were too low for the model to be applied directly in a clinical setting, so optimising the model should be prioritised.
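Editorial note: to make the method above concrete, the sketch below shows how a binary ICDR-level comparison (e.g. level 0 vs. level 4) could be set up with a ResNet50 backbone. It is an illustrative example assuming PyTorch and torchvision, not the authors' code; the tensor shapes, hyperparameters and dummy batch are hypothetical.

```python
# Illustrative sketch only (not the study's code): ResNet50 fine-tuned as a
# binary classifier for one ICDR-level comparison, e.g. level 0 vs. level 4.
import torch
import torch.nn as nn
from torchvision import models

def build_binary_dr_model(pretrained: bool = True) -> nn.Module:
    """ResNet50 with its classification head replaced by a single-logit layer."""
    weights = models.ResNet50_Weights.DEFAULT if pretrained else None
    model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 1)  # one logit for the binary decision
    return model

model = build_binary_dr_model()
criterion = nn.BCEWithLogitsLoss()                        # binary cross-entropy on the logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch standing in for preprocessed fundus images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()              # 0 = ICDR level 0, 1 = ICDR level 4
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```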
Affiliation(s)
- Trine Jul Larsen
- Greenland Center of Health Research, Institute of Nursing and Health Science, University of Greenland, Nuuk, Greenland
- Michael Lynge Pedersen
- Greenland Center of Health Research, Institute of Nursing and Health Science, University of Greenland, Nuuk, Greenland
- Rigshospitalet-Glostrup University Hospital, Glostrup, Denmark
- Henrik Lund-Andersen
- Clinical Epidemiology, Steno Diabetes Center Copenhagen, Copenhagen, Denmark
- Rigshospitalet-Glostrup University Hospital, Glostrup, Denmark
- Stine Byberg
- Clinical Epidemiology, Steno Diabetes Center Copenhagen, Copenhagen, Denmark
2. Zhu A, Tailor P, Verma R, Zhang I, Schott B, Ye C, Szirth B, Habiel M, Khouri AS. Implementation of deep learning artificial intelligence in vision-threatening disease screenings for an underserved community during COVID-19. J Telemed Telecare 2024; 30:1590-1597. PMID: 36908254; PMCID: PMC10014445; DOI: 10.1177/1357633x231158832.
Abstract
INTRODUCTION Age-related macular degeneration, diabetic retinopathy, and glaucoma are vision-threatening diseases that are leading causes of vision loss. Many studies have validated deep learning artificial intelligence for image-based diagnosis of vision-threatening diseases. Our study prospectively investigated deep learning artificial intelligence applications in student-run non-mydriatic screenings for an underserved, primarily Hispanic community during COVID-19. METHODS Five supervised student-run community screenings were held in West New York, New Jersey. Participants underwent non-mydriatic 45-degree retinal imaging by medical students. Images were uploaded to a cloud-based deep learning artificial intelligence for vision-threatening disease referral. An on-site tele-ophthalmology grader and remote clinical ophthalmologist graded images, with adjudication by a senior ophthalmologist to establish the gold standard diagnosis, which was used to assess the performance of deep learning artificial intelligence. RESULTS A total of 385 eyes from 195 screening participants were included (mean age 52.43 ± 14.5 years, 40.0% female). A total of 48 participants were referred for at least one vision-threatening disease. Deep learning artificial intelligence marked 150/385 (38.9%) eyes as ungradable, compared to 10/385 (2.6%) ungradable as per the human gold standard (p < 0.001). Deep learning artificial intelligence had 63.2% sensitivity, 94.5% specificity, 32.0% positive predictive value, and 98.4% negative predictive value in vision-threatening disease referrals. Deep learning artificial intelligence successfully referred all 4 eyes with multiple vision-threatening diseases. Deep learning artificial intelligence graded images (35.6 ± 13.3 s) faster than the tele-ophthalmology grader (129 ± 41.0 s) and clinical ophthalmologist (68 ± 21.9 s; p < 0.001). DISCUSSION Deep learning artificial intelligence can increase the efficiency and accessibility of vision-threatening disease screenings, particularly in underserved communities. Deep learning artificial intelligence should be adaptable to different environments. Consideration should be given to how deep learning artificial intelligence can best be utilized in a real-world application, whether in computer-aided or autonomous diagnosis.
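For context, the four screening metrics quoted above all follow from one 2x2 table of referral decisions. The snippet below is a hedged illustration with fabricated labels, assuming scikit-learn is available; it is not derived from the study's data.

```python
# Illustrative only: sensitivity, specificity, PPV and NPV from referral labels.
# Both label lists are fabricated; the gold standard is the adjudicated human grade.
from sklearn.metrics import confusion_matrix

gold = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 1 = refer for vision-threatening disease
ai   = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # deep learning AI referral decision

tn, fp, fn, tp = confusion_matrix(gold, ai, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # share of true referrals the AI catches
specificity = tn / (tn + fp)   # share of non-referrals correctly cleared
ppv = tp / (tp + fp)           # probability an AI referral is a true referral
npv = tn / (tn + fn)           # probability an AI "no referral" is correct
print(f"Se {sensitivity:.1%}, Sp {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```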
Affiliation(s)
- Aretha Zhu, Priya Tailor, Rashika Verma, Isis Zhang, Brian Schott, Catherine Ye, Bernard Szirth, Miriam Habiel, Albert S Khouri
- Institute of Ophthalmology & Visual Science, Rutgers New Jersey Medical School, Newark, NJ, USA
3. Sükei E, Rumetshofer E, Schmidinger N, Mayr A, Schmidt-Erfurth U, Klambauer G, Bogunović H. Multi-modal representation learning in retinal imaging using self-supervised learning for enhanced clinical predictions. Sci Rep 2024; 14:26802. PMID: 39500979; PMCID: PMC11538269; DOI: 10.1038/s41598-024-78515-y.
Abstract
Self-supervised learning has become the cornerstone of building generalizable and transferable artificial intelligence systems in medical imaging. In particular, contrastive representation learning techniques trained on large multi-modal datasets have demonstrated impressive capabilities of producing highly transferable representations for different downstream tasks. In ophthalmology, large multi-modal datasets are abundantly available and conveniently accessible as modern retinal imaging scanners acquire both 2D fundus images and 3D optical coherence tomography (OCT) scans to assess the eye. In this context, we introduce a novel multi-modal contrastive learning-based pipeline to facilitate learning joint representations for the two retinal imaging modalities. After self-supervised pre-training on 153,306 scan pairs, we show that such a pre-training framework can provide both a retrieval system and encoders that produce comprehensive OCT and fundus image representations that generalize well for various downstream tasks on three independent external datasets, explicitly focusing on clinically pertinent prediction tasks. In addition, we show that interchanging OCT with lower-cost fundus imaging can preserve the predictive power of the trained models.
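As an editorial illustration of the contrastive pre-training idea described above, the sketch below implements a symmetric CLIP-style (InfoNCE) objective over paired fundus and OCT embeddings. It assumes PyTorch; the projection layers, dimensions and batch are stand-ins, not the authors' architecture.

```python
# Minimal CLIP-style contrastive objective for paired fundus/OCT features.
# Editorial sketch: the encoders are placeholder linear projections.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairedContrastive(nn.Module):
    def __init__(self, fundus_dim=512, oct_dim=768, embed_dim=128):
        super().__init__()
        self.fundus_proj = nn.Linear(fundus_dim, embed_dim)  # stand-in for a fundus encoder
        self.oct_proj = nn.Linear(oct_dim, embed_dim)        # stand-in for an OCT encoder
        self.log_temp = nn.Parameter(torch.tensor(0.0))      # learnable temperature

    def forward(self, fundus_feats, oct_feats):
        f = F.normalize(self.fundus_proj(fundus_feats), dim=-1)
        o = F.normalize(self.oct_proj(oct_feats), dim=-1)
        logits = f @ o.t() * self.log_temp.exp()   # similarity of every fundus-OCT pairing
        targets = torch.arange(f.size(0))          # true pairs sit on the diagonal
        # symmetric InfoNCE loss: fundus-to-OCT and OCT-to-fundus retrieval
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = PairedContrastive()(torch.randn(16, 512), torch.randn(16, 768))  # one dummy batch of pairs
loss.backward()
```

Trained this way, matching fundus and OCT scans are pulled together in a shared embedding space, which is what enables both cross-modal retrieval and transferable encoders of the kind the abstract evaluates.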
Affiliation(s)
- Emese Sükei
- OPTIMA Lab, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Elisabeth Rumetshofer
- LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria
- Niklas Schmidinger
- LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria
- Andreas Mayr
- LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria
- Ursula Schmidt-Erfurth
- OPTIMA Lab, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Günter Klambauer
- LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria
- Hrvoje Bogunović
- OPTIMA Lab, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Institute of Artificial Intelligence, Center for Medical Data Science, Medical University of Vienna, Vienna, Austria
4. Niestrata M, Radia M, Jackson J, Allan B. Global review of publicly available image datasets for the anterior segment of the eye. J Cataract Refract Surg 2024; 50:1184-1190. PMID: 39150312; DOI: 10.1097/j.jcrs.0000000000001538.
Abstract
This study comprehensively reviewed publicly available image datasets for the anterior segment, with a focus on cataract, refractive, and corneal surgeries. The goal was to assess the characteristics of existing datasets and identify areas for improvement. PubMed and Google searches were performed using the search terms "refractive surgery," "anterior segment," "cornea," "corneal," and "cataract" AND "database," together with the related term "imaging." The results of these searches were collated, identifying 26 publicly available anterior segment image datasets. Imaging modalities included optical coherence tomography, photography, and confocal microscopy. Most datasets were small, and 80% originated in the U.S., China, or Europe. Over 50% of images were from normal eyes. Disease states represented included keratoconus, corneal ulcers, and Fuchs dystrophy. Most of the datasets were incompletely described. To promote accessibility going forward to 2030, the ESCRS Digital Health Special Interest Group will annually update a list of available image datasets for the anterior segment at www.escrs.org.
Affiliation(s)
- Magdalena Niestrata
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom (Niestrata, Allan); Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom (Radia, Allan); Data and Statistics Department, University of East London, London, United Kingdom (Jackson)
5. Vujosevic S, Limoli C, Nucci P. Novel artificial intelligence for diabetic retinopathy and diabetic macular edema: what is new in 2024? Curr Opin Ophthalmol 2024; 35:472-479. PMID: 39259647; PMCID: PMC11426980; DOI: 10.1097/icu.0000000000001084.
Abstract
PURPOSE OF REVIEW Given the increasing global burden of diabetic retinopathy and the rapid advancements in artificial intelligence, this review aims to summarize the current state of artificial intelligence technology in diabetic retinopathy detection and management, assessing its potential to improve care and visual outcomes in real-world settings. RECENT FINDINGS Most recent studies have examined the integration of artificial intelligence into diabetic retinopathy screening, focusing on the real-world efficacy and clinical implementation of such artificial intelligence models. Additionally, artificial intelligence holds the potential to predict diabetic retinopathy progression, enhance personalized treatment strategies, and identify systemic disease biomarkers from ocular images through 'oculomics', moving towards more precise, efficient, and accessible care. The emergence of foundation model architectures and generative artificial intelligence, which more clearly reflect the clinical care process, may enable rapid advances in diabetic retinopathy care, research and medical education. SUMMARY This review explores the emerging technology of artificial intelligence to assess the potential to improve patient outcomes and optimize personalized management in healthcare delivery and medical research. While artificial intelligence is expected to play an increasingly important role in diabetic retinopathy care, ongoing research and clinical trials are essential to address implementation issues and focus on long-term patient outcomes for successful real-world adoption of artificial intelligence in diabetic retinopathy.
Affiliation(s)
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan
- Eye Clinic, IRCCS MultiMedica
- Celeste Limoli
- Department of Ophthalmology, University of Milan, Milan, Italy
- Paolo Nucci
- Department of Biomedical, Surgical and Dental Sciences, University of Milan
6. Reiter GS, Mai J, Riedl S, Birner K, Frank S, Bogunovic H, Schmidt-Erfurth U. AI in the clinical management of GA: A novel therapeutic universe requires novel tools. Prog Retin Eye Res 2024; 103:101305. PMID: 39343193; DOI: 10.1016/j.preteyeres.2024.101305.
Abstract
Regulatory approval of the first two therapeutic substances for the management of geographic atrophy (GA) secondary to age-related macular degeneration (AMD) is a major breakthrough following failure of numerous previous trials. However, in the absence of therapeutic standards, diagnostic tools are a key challenge as functional parameters in GA are hard to provide. The majority of anatomical biomarkers are subclinical, necessitating advanced and sensitive image analyses. In contrast to fundus autofluorescence (FAF), optical coherence tomography (OCT) provides high-resolution visualization of neurosensory layers, including photoreceptors, and other features that are beyond the scope of human expert assessment. Artificial intelligence (AI)-based methodology strongly enhances identification and quantification of clinically relevant GA-related sub-phenotypes. Introduction of OCT-based biomarker analysis provides novel insight into the pathomechanisms of disease progression and therapeutic response, moving beyond the limitations of conventional descriptive assessment. Accordingly, the Food and Drug Administration (FDA) has marked a paradigm shift by recognizing ellipsoid zone (EZ) attenuation as a primary outcome measure in GA clinical trials. In this review, the transition from previous to future GA classification and management is described. With the advent of AI tools, diagnostic and therapeutic concepts have changed substantially in monitoring and screening of GA disease. Novel technology, combined with pathophysiological knowledge and an understanding of the therapeutic response to GA treatments, is currently opening the path for automated, efficient and individualized patient care with great potential to improve access to timely treatment and reduce health disparities.
Affiliation(s)
- Gregor S Reiter, Julia Mai, Sophie Riedl, Klaudia Birner, Sophie Frank, Hrvoje Bogunovic, Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
7. Parravano M, Cennamo G, Di Antonio L, Grassi MO, Lupidi M, Rispoli M, Savastano MC, Veritti D, Vujosevic S. Multimodal imaging in diabetic retinopathy and macular edema: An update about biomarkers. Surv Ophthalmol 2024; 69:893-904. PMID: 38942124; DOI: 10.1016/j.survophthal.2024.06.006.
Abstract
Diabetic macular edema (DME), defined as retinal thickening near, or involving the fovea caused by fluid accumulation in the retina, can lead to vision impairment and blindness in patients with diabetes. Current knowledge of retina anatomy and function and DME pathophysiology has taken great advantage of the availability of several techniques for visualizing the retina. Combining these techniques in a multimodal imaging approach to DME is recommended to improve diagnosis and to guide treatment decisions. We review the recent literature about the following retinal imaging technologies: optical coherence tomography (OCT), OCT angiography (OCTA), wide-field and ultrawide-field techniques applied to fundus photography, fluorescein angiography, and OCTA. The emphasis will be on characteristic DME features identified by these imaging technologies and their potential or established role as diagnostic, prognostic, or predictive biomarkers. The role of artificial intelligence in the assessment and interpretation of retina images is also discussed.
Affiliation(s)
- Gilda Cennamo
- Eye Clinic, Public Health Department, University of Naples Federico II, Naples, Italy
- Luca Di Antonio
- UOC Ophthalmology and Surgery Department, ASL-1 Avezzano-Sulmona, L'Aquila, Italy
- Maria Oliva Grassi
- Eye Clinic, Azienda Ospedaliero-Universitaria Policlinico, University of Bari, Bari, Italy
- Marco Lupidi
- Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Maria Cristina Savastano
- Ophthalmology Unit, Fondazione Policlinico Universitario A. Gemelli, IRCCS, Rome, Italy; Catholic University "Sacro Cuore", Rome, Italy
- Daniele Veritti
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy; Eye Clinic, IRCCS MultiMedica, Milan, Italy
8. Melo GB, Nakayama LF, Cardoso VS, Dos Santos LA, Malerbi FK. Synchronous Diagnosis of Diabetic Retinopathy by a Handheld Retinal Camera, Artificial Intelligence, and Simultaneous Specialist Confirmation. Ophthalmol Retina 2024; 8:1083-1092. PMID: 38750937; DOI: 10.1016/j.oret.2024.05.009.
Abstract
PURPOSE Diabetic retinopathy (DR) is a leading cause of preventable blindness, particularly in underserved regions where access to ophthalmic care is limited. This study presents a proof of concept for utilizing a portable handheld retinal camera with an embedded artificial intelligence (AI) platform, complemented by synchronous remote confirmation by retina specialists, for DR screening in an underserved rural area. DESIGN Retrospective cohort study. SUBJECTS A total of 1115 individuals with diabetes. METHODS A retrospective analysis of a screening initiative conducted in 4 municipalities in Northeastern Brazil, targeting the diabetic population. A portable handheld retinal camera captured macula-centered and disc-centered images, which were analyzed by the AI system. Immediate push notifications were sent to retina specialists upon the detection of significant abnormalities, enabling synchronous verification and confirmation, with on-site patient feedback within minutes. Referral criteria were established, and all referred patients underwent a complete ophthalmic work-up and subsequent treatment. MAIN OUTCOME MEASURES Proof-of-concept implementation success. RESULTS Out of 2052 invited individuals, 1115 participated, with a mean age of 60.93 years and a mean diabetes duration of 7.52 years; 66.03% were women. The screening covered 2222 eyes, revealing various retinal conditions. Referable eyes for DR were 11.84%, with an additional 13% referable for other conditions (diagnoses included various stages of DR, media opacity, nevus, drusen, enlarged cup-to-disc ratio, pigmentary changes, and others). Artificial intelligence performance for overall detection of referable cases (both DR and other conditions) was as follows: sensitivity 84.23% (95% confidence interval [CI], 82.63-85.84), specificity 80.79% (95% CI, 79.05-82.53). When we assessed whether AI matched any clinical diagnosis, be it referable or not, sensitivity was 85.67% (95% CI, 84.12-87.22), specificity was 98.86% (95% CI, 98.39-99.33), and the area under the curve was 0.92 (95% CI, 0.91-0.94). CONCLUSIONS The integration of a portable device, AI analysis, and synchronous medical validation has the potential to play a crucial role in preventing blindness from DR, especially in socially unequal settings. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
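As a hedged aside (not taken from the study's analysis), confidence intervals of the kind quoted above can be reproduced from raw counts with a standard binomial interval; the sketch assumes statsmodels, and the counts are invented.

```python
# Illustrative only: 95% confidence intervals for sensitivity and specificity
# from raw counts, using Wilson score intervals. All counts are hypothetical.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 842, 158    # hypothetical true positives / false negatives
tn, fp = 1616, 384   # hypothetical true negatives / false positives

sens, spec = tp / (tp + fn), tn / (tn + fp)
sens_lo, sens_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_lo, spec_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"Sensitivity {sens:.2%} (95% CI {sens_lo:.2%}-{sens_hi:.2%})")
print(f"Specificity {spec:.2%} (95% CI {spec_lo:.2%}-{spec_hi:.2%})")
```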
Affiliation(s)
- Gustavo Barreto Melo
- Department of Ophthalmology, Federal University of São Paulo, São Paulo-SP, Brazil; Hospital de Olhos de Sergipe, Aracaju-SE, Brazil; Retina Clinic, São Paulo-SP, Brazil
- Luis Filipe Nakayama
- Department of Ophthalmology, Federal University of São Paulo, São Paulo-SP, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts
9. Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. PMID: 39186968; DOI: 10.1016/j.preteyeres.2024.101291.
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both developing algorithms and putting them into practice. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may cause doctors to be wary or skeptical. When it comes to using these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to work well across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for using AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigm shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li, Deming Wang, Zefeng Yang, Yinhang Zhang, Jiaxuan Jiang, Xiaoyi Liu, Kangjie Kong, Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
10. Ding X, Romano F, Garg I, Gan J, Vingopoulos F, Garcia MD, Overbey KM, Cui Y, Zhu Y, Bennett CF, Stettler I, Shan M, Finn MJ, Vavvas DG, Husain D, Patel NA, Kim LA, Miller JB. Expanded Field OCT Angiography Biomarkers for Predicting Clinically Significant Outcomes in Non-Proliferative Diabetic Retinopathy. Am J Ophthalmol 2024:S0002-9394(24)00485-9. PMID: 39490720; DOI: 10.1016/j.ajo.2024.10.016.
Abstract
PURPOSE To evaluate the utility of extended field swept-source optical coherence tomography angiography (SS-OCTA) imaging biomarkers in predicting the occurrence of clinically significant outcomes in eyes with non-proliferative diabetic retinopathy (NPDR). DESIGN Retrospective clinical case-control study. METHODS Single-center clinical study. 88 eyes with NPDR from 57 participants (median age: 64.0 years; mean duration of diabetes: 15.8 years) with at least two consecutive SS-OCTA scans over a follow-up period of at least six months were included. The presence of intraretinal microvascular abnormalities (IRMAs) at baseline and the stability of IRMAs during the follow-up period on 12 × 12-mm angiograms were evaluated. The baseline nonperfusion ischemia index (ISI) and other SS-OCTA metrics were calculated on FIJI and the ARI Network. Significant clinical outcomes were defined as the occurrence of one or more of the following events at the last available clinical visit: (1) significant DR progression (2-step DR progression or progression to proliferative DR (PDR)); (2) development of new center-involving diabetic macular edema (CI-DME); and (3) initiation of treatment with PRP or anti-VEGF injections during the follow-up period. Mixed-effects Cox regression models were used to explore these outcomes. RESULTS Following a clinical follow-up period lasting 25.1 ± 10.8 months, we observed significant clinical outcomes in 17 eyes (19.3%). Among these, 7 eyes (8.0%) experienced significant progression and 4 eyes (4.5%) developed CI-DME. Anti-VEGF injections were initiated in 15 eyes (17.0%), while PRP was initiated in 2 eyes (2.3%). After adjusting for age, duration of DM, and prior anti-VEGF treatments, our analysis revealed that non-stable IRMAs during the follow-up period and a higher ischemia index at baseline were significantly associated with the occurrence of significant clinical outcomes, with HRs of 3.88 (95% CI: 1.56-9.64; p=0.004) and 1.05 (95% CI: 1.02-1.09; p=0.004), respectively. CONCLUSIONS NPDR eyes with non-stable IRMAs over time and more ischemia at baseline are at higher risk of developing significant clinical outcomes. Our findings suggest that expanded field SS-OCTA may offer additional prognostic benefit for clinical DR staging and predicting high-risk patients.
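To illustrate the kind of time-to-event analysis summarized above, the sketch below fits a Cox proportional-hazards model with cluster-robust standard errors as a simpler stand-in for the mixed-effects Cox models the authors report. It assumes the lifelines and pandas packages; every value in the data frame is fabricated.

```python
# Editorial illustration only: Cox proportional-hazards fit with cluster-robust
# errors (a stand-in for mixed-effects Cox regression on eyes nested in patients).
import pandas as pd
from lifelines import CoxPHFitter

eyes = pd.DataFrame({
    "months_followed": [24, 30, 18, 36, 27, 22, 33, 25, 29, 20],
    "outcome_event":   [1, 0, 1, 0, 0, 1, 0, 0, 1, 0],   # 1 = significant clinical outcome
    "nonstable_irma":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],   # non-stable IRMAs during follow-up
    "ischemia_index":  [12.5, 4.0, 15.2, 9.8, 6.1, 7.4, 10.3, 3.5, 13.9, 8.2],
    "patient_id":      [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],   # both eyes of a patient share an id
})

cph = CoxPHFitter()
cph.fit(eyes, duration_col="months_followed", event_col="outcome_event",
        cluster_col="patient_id")   # robust errors for correlated eyes of one patient
print(cph.hazard_ratios_)           # HR per covariate, analogous to the HRs quoted above
```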
Affiliation(s)
- Xinyi Ding, Francesco Romano, John B Miller
- Harvard Retinal Imaging Lab, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA; Retina Service, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Itika Garg, Jenny Gan, Filippos Vingopoulos, Mauricio D Garcia, Katherine M Overbey, Ying Cui, Ying Zhu, Cade F Bennett, Isabella Stettler, Mridula Shan, Matthew J Finn
- Harvard Retinal Imaging Lab, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Demetrios G Vavvas, Deeba Husain, Nimesh A Patel, Leo A Kim
- Retina Service, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
11. Similié DE, Andersen JKH, Dinesen S, Savarimuthu TR, Grauslund J. Grading of diabetic retinopathy using a pre-segmenting deep learning classification model: Validation of an automated algorithm. Acta Ophthalmol 2024. PMID: 39425597; DOI: 10.1111/aos.16781.
Abstract
PURPOSE To validate the performance of autonomous diabetic retinopathy (DR) grading by comparing a human grader and a self-developed deep-learning (DL) algorithm with gold-standard evaluation. METHODS We included 500 six-field retinal images graded by an expert ophthalmologist (gold standard) according to the International Clinical Diabetic Retinopathy Disease Severity Scale, represented by DR levels 0-4 (97, 100, 100, 103 and 100 images, respectively). Weighted kappa was calculated to measure the DR classification agreement for (1) a certified human grader without and (2) with assistance from a DL algorithm, and (3) the DL algorithm operating autonomously. Using any DR (level 0 vs. 1-4) as a cutoff, we calculated sensitivity and specificity, as well as positive and negative predictive values (PPV and NPV). Finally, we assessed lesion discrepancies between Model 3 and the gold standard. RESULTS As compared to the gold standard, weighted kappa for Models 1-3 was 0.88, 0.89 and 0.72, sensitivities were 95%, 94% and 78%, and specificities were 82%, 84% and 81%. Extrapolating to a real-world DR prevalence of 23.8%, the PPVs were 63%, 64% and 57% and the NPVs were 98%, 98% and 92%. Discrepancies between the gold standard and Model 3 were mainly incorrect detection of artefacts (n = 49), missed microaneurysms (n = 26) and inconsistencies between the segmentation and classification (n = 51). CONCLUSION While the autonomous DL algorithm for DR classification performed on par with a human grader only for some measures in a high-risk population, extrapolation to a real-world population demonstrated an excellent 92% NPV, which could make it clinically feasible to use the algorithm autonomously to identify patients without DR.
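Two quantities in the abstract above, the weighted kappa agreement and the prevalence-adjusted predictive values, are easy to illustrate. The sketch below assumes scikit-learn, uses a quadratic weighting and made-up grades, and plugs a hypothetical sensitivity/specificity pair into the standard prevalence formulas; it is not the study's analysis.

```python
# Editorial sketch: quadratic-weighted kappa for grade agreement, plus PPV/NPV
# extrapolated to a chosen prevalence from a fixed sensitivity/specificity pair.
from sklearn.metrics import cohen_kappa_score

gold_grades  = [0, 1, 2, 3, 4, 0, 2, 1, 3, 4]   # hypothetical gold-standard ICDR levels
model_grades = [0, 1, 2, 2, 4, 0, 3, 1, 3, 4]   # hypothetical algorithm grades
kappa = cohen_kappa_score(gold_grades, model_grades, weights="quadratic")

def extrapolate(sensitivity, specificity, prevalence):
    """PPV and NPV implied by a sensitivity/specificity pair at a given prevalence."""
    ppv = sensitivity * prevalence / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = specificity * (1 - prevalence) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

ppv, npv = extrapolate(sensitivity=0.78, specificity=0.81, prevalence=0.238)
print(f"weighted kappa = {kappa:.2f}, extrapolated PPV = {ppv:.0%}, NPV = {npv:.0%}")
```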
Affiliation(s)
- Jakob K H Andersen
- The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Sebastian Dinesen
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Thiusius R Savarimuthu
- The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Department of Ophthalmology, Vestfold Hospital Trust, Tønsberg, Norway
12. Zhang Q, Zhang P, Chen N, Zhu Z, Li W, Wang Q. Trends and hotspots in the field of diabetic retinopathy imaging research from 2000-2023. Front Med (Lausanne) 2024; 11:1481088. PMID: 39444814; PMCID: PMC11496202; DOI: 10.3389/fmed.2024.1481088.
Abstract
Background Diabetic retinopathy (DR) poses a major threat to diabetic patients' vision and is a critical public health issue. Imaging applications for DR have grown since the start of the 21st century, aiding diagnosis, grading, and screening. This study uses bibliometric analysis to assess the field's advancements and key areas of interest. Methods This study performed a bibliometric analysis of DR imaging articles collected from the Web of Science Core Collection database between January 1st, 2000, and December 31st, 2023. The retrieved literature was then analyzed with CiteSpace. Results The United States and China led in the number of publications, with 719 and 609, respectively. The University of London topped the institution list with 139 papers. Tien Yin Wong was the most prolific researcher. Invest. Ophthalmol. Vis. Sci. published the most articles (105). Notable burst keywords included "deep learning" and "artificial intelligence." Conclusion The United States is at the forefront of DR research, with the University of London as the top institution and Invest. Ophthalmol. Vis. Sci. as the journal with the most publications. Tien Yin Wong is the most influential researcher. Hotspots such as "deep learning" and "artificial intelligence" have seen a significant rise, indicating artificial intelligence's growing role in DR imaging.
Affiliation(s)
- Qing Zhang
- The Third Affiliated Hospital of Xinxiang Medical University, Xinxiang Medical University, Xinxiang, China
- Ping Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Naimei Chen
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Zhentao Zhu
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Wangting Li
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Qiang Wang
- Department of Ophthalmology, Third Affiliated Hospital, Wenzhou Medical University, Zhejiang, China
13. Jabara M, Kose O, Perlman G, Corcos S, Pelletier MA, Possik E, Tsoukas M, Sharma A. Artificial Intelligence-Based Digital Biomarkers for Type 2 Diabetes: A Review. Can J Cardiol 2024; 40:1922-1933. PMID: 39111729; DOI: 10.1016/j.cjca.2024.07.028.
Abstract
Type 2 diabetes mellitus (T2DM), a complex metabolic disorder that burdens the health care system, requires early detection and treatment. Recent strides in digital health technologies, coupled with artificial intelligence (AI), may have the potential to revolutionize T2DM screening, diagnosis of complications, and management through the development of digital biomarkers. This review provides an overview of the potential applications of AI-driven biomarkers in the context of screening, diagnosing complications, and managing patients with T2DM. The benefits of using multisensor devices to develop digital biomarkers are discussed. A summary of these findings, together with patterns relating model architecture to sensor type, is presented. In addition, we highlight the pivotal role of AI techniques in clinical intervention and implementation, encompassing clinical decision support systems, telemedicine interventions, and population health initiatives. Challenges such as data privacy, algorithm interpretability, and regulatory considerations are also highlighted, alongside future research directions to explore the use of AI-driven digital biomarkers in T2DM screening and management.
Affiliation(s)
- Mariam Jabara
- Centre for Outcome Research & Evaluation, McGill University Health Centre, Montréal, Québec, Canada; Division of Experimental Medicine, Faculty of Medicine and Health Science, McGill University, Montréal, Québec, Canada
- Orhun Kose
- Division of Experimental Medicine, Faculty of Medicine and Health Science, McGill University, Montréal, Québec, Canada; DREAM-CV Lab, Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- George Perlman
- Division of Experimental Medicine, Faculty of Medicine and Health Science, McGill University, Montréal, Québec, Canada; DREAM-CV Lab, Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Simon Corcos
- HOP-Child Technologies, Sherbrooke, Québec, Canada
- Elite Possik
- DREAM-CV Lab, Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Michael Tsoukas
- Centre for Outcome Research & Evaluation, McGill University Health Centre, Montréal, Québec, Canada; Department of Endocrinology, McGill University Health Centre, Montréal, Québec, Canada
- Abhinav Sharma
- Centre for Outcome Research & Evaluation, McGill University Health Centre, Montréal, Québec, Canada; Division of Experimental Medicine, Faculty of Medicine and Health Science, McGill University, Montréal, Québec, Canada; DREAM-CV Lab, Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
14. de Lacy N, Lam WY, Ramshaw M. RiskPath: Explainable deep learning for multistep biomedical prediction in longitudinal data. medRxiv [Preprint] 2024:2024.09.19.24313909. PMID: 39371168; PMCID: PMC11451668; DOI: 10.1101/2024.09.19.24313909.
Abstract
Predicting individual and population risk for disease outcomes and identifying persons at elevated risk is a key prerequisite for targeting interventions to improve health. However, current risk stratification tools for the common, chronic diseases that develop over the lifecourse and represent the majority of disease morbidity, mortality and healthcare costs are aging and achieve only moderate predictive performance. In some common, highly morbid conditions, such as mental illness, no risk stratification tools are yet available. There is an urgent need to improve predictive performance for chronic diseases and understand how cumulative, multifactorial risks aggregate over time so that intervention programs can be targeted earlier and more effectively in the disease course. Chronic diseases are the end outcomes of multifactorial risks that increment over years and represent cumulative, temporally-sensitive risk pathways. However, tools in current clinical use were constructed from older data and utilize inputs from a single data collection step. Here, we present RiskPath, a multistep deep learning method for temporally-sensitive biomedical risk prediction tailored to the constraints and demands of biomedical practice that achieves very strong performance and full translational explainability. RiskPath delineates and quantifies cumulative multifactorial risk pathways and allows the user to explore performance-complexity tradeoffs and constrain models as required by clinical use cases. Our results highlight the potential for developing a new generation of risk stratification tools and risk pathway mapping in time-dependent diseases and health outcomes by leveraging powerful time-series deep learning methods on the wealth of biomedical data now appearing in large, longitudinal open science datasets.
Affiliation(s)
- Nina de Lacy
- Department of Psychiatry, University of Utah, Salt Lake City, Utah
- Wai Yin Lam
- Scientific Computing Institute, University of Utah, Salt Lake City, Utah
- Michael Ramshaw
- Department of Psychiatry, University of Utah, Salt Lake City, Utah
15. Chia MA, Antaki F, Zhou Y, Turner AW, Lee AY, Keane PA. Foundation models in ophthalmology. Br J Ophthalmol 2024; 108:1341-1348. PMID: 38834291; PMCID: PMC11503093; DOI: 10.1136/bjo-2024-325459.
Abstract
Foundation models represent a paradigm shift in artificial intelligence (AI), evolving from narrow models designed for specific tasks to versatile, generalisable models adaptable to a myriad of diverse applications. Ophthalmology as a specialty has the potential to act as an exemplar for other medical specialties, offering a blueprint for integrating foundation models broadly into clinical practice. This review hopes to serve as a roadmap for eyecare professionals seeking to better understand foundation models, while equipping readers with the tools to explore the use of foundation models in their own research and practice. We begin by outlining the key concepts and technological advances which have enabled the development of these models, providing an overview of novel training approaches and modern AI architectures. Next, we summarise existing literature on the topic of foundation models in ophthalmology, encompassing progress in vision foundation models, large language models and large multimodal models. Finally, we outline major challenges relating to privacy, bias and clinical validation, and propose key steps forward to maximise the benefit of this powerful technology.
Affiliation(s)
- Mark A Chia
- Institute of Ophthalmology, University College London, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Fares Antaki
- Institute of Ophthalmology, University College London, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada
- Yukun Zhou
- Institute of Ophthalmology, University College London, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- University of Western Australia, Perth, Western Australia, Australia
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
16. Upadhyaya S, Rao DP, Kavitha S, Ballae Ganeshrao S, Negiloni K, Bhandary S, Savoy FM, Venkatesh R. Diagnostic Performance of the Offline Medios Artificial Intelligence for Glaucoma Detection in a Rural Tele-Ophthalmology Setting. Ophthalmol Glaucoma 2024:S2589-4196(24)00173-X. PMID: 39277171; DOI: 10.1016/j.ogla.2024.09.002.
Abstract
PURPOSE This study assesses the diagnostic efficacy of offline Medios Artificial Intelligence (AI) glaucoma software in a primary eye care setting, using nonmydriatic fundus images from Remidio's Fundus-on-Phone (FOP NM-10). Artificial intelligence results were compared with tele-ophthalmologists' diagnoses and with a glaucoma specialist's assessment for those participants referred to a tertiary eye care hospital. DESIGN Prospective cross-sectional study. PARTICIPANTS Three hundred three participants from 6 satellite vision centers of a tertiary eye hospital. METHODS At the vision center, participants underwent comprehensive eye evaluations, including clinical history, visual acuity measurement, slit lamp examination, intraocular pressure measurement, and fundus photography using the FOP NM-10 camera. Medios AI-Glaucoma software analyzed 42-degree disc-centric fundus images, categorizing them as normal, glaucoma, or suspect. Tele-ophthalmologists, who were glaucoma fellows with a minimum of 3 years of ophthalmology and 1 year of glaucoma fellowship training, masked to artificial intelligence (AI) results, remotely diagnosed subjects based on the history and disc appearance. All participants labeled as disc suspects or glaucoma by AI or tele-ophthalmologists underwent further comprehensive glaucoma evaluation at the base hospital, including clinical examination, Humphrey visual field analysis, and OCT. Artificial intelligence and tele-ophthalmologist diagnoses were then compared with a glaucoma specialist's diagnosis. MAIN OUTCOME MEASURES Sensitivity and specificity of Medios AI. RESULTS Out of 303 participants, 299 with at least one eye of sufficient image quality were included in the study. The remaining 4 participants did not have sufficient image quality in both eyes. Medios AI identified 39 participants (13%) with referable glaucoma. The AI exhibited a sensitivity of 0.91 (95% confidence interval [CI]: 0.71-0.99) and specificity of 0.93 (95% CI: 0.89-0.96) in detecting referable glaucoma (definite perimetric glaucoma) when compared to the tele-ophthalmologist. The agreement between AI and the glaucoma specialist was 80.3%, surpassing the 55.3% agreement between the tele-ophthalmologist and the glaucoma specialist amongst those participants who were referred to the base hospital. Both AI and the tele-ophthalmologist relied on fundus photos for diagnoses, whereas the glaucoma specialist's assessments at the base hospital were aided by additional tools such as Humphrey visual field analysis and OCT. Furthermore, AI had fewer false positive referrals (2 out of 10) compared to the tele-ophthalmologist (9 out of 10). CONCLUSIONS Medios offline AI exhibited promising sensitivity and specificity in detecting referable glaucoma from remote vision centers in southern India when compared with tele-ophthalmologists. It also demonstrated better agreement with the glaucoma specialist's diagnosis for referable glaucoma participants. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Swati Upadhyaya
- Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Kalpa Negiloni
- Remidio Innovative Solutions Private Limited, Bengaluru, India
- Shreya Bhandary
- Remidio Innovative Solutions Private Limited, Bengaluru, India
- Florian M Savoy
- Medios Technologies, Remidio Innovative Solutions, Singapore
17. Baget-Bernaldiz M, Fontoba-Poveda B, Romero-Aroca P, Navarro-Gil R, Hernando-Comerma A, Bautista-Perez A, Llagostera-Serra M, Morente-Lorenzo C, Vizcarro M, Mira-Puerto A. Artificial Intelligence-Based Screening System for Diabetic Retinopathy in Primary Care. Diagnostics (Basel) 2024; 14:1992. PMID: 39272776; PMCID: PMC11394635; DOI: 10.3390/diagnostics14171992.
Abstract
BACKGROUND This study aimed to test an artificial intelligence-based reading system (AIRS) capable of reading retinographies of type 2 diabetic (T2DM) patients and a predictive algorithm (DRPA) that predicts the risk of each patient with T2DM developing diabetic retinopathy (DR). METHODS We tested the ability of the AIRS to read and classify 15,297 retinal photographs from our database of diabetic patients and 1200 retinal images from the Messidor-2 dataset into the different DR categories. We tested the DRPA in a sample of 40,129 T2DM patients. The results obtained by the AIRS and the DRPA were then compared with those provided by four retina specialists regarding sensitivity (S), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), accuracy (ACC), and area under the curve (AUC). RESULTS The results of testing the AIRS for identifying referral DR (RDR) in our database were ACC = 98.6, S = 96.7, SP = 99.8, PPV = 99.0, NPV = 98.0, and AUC = 0.958, and in Messidor-2 were ACC = 96.78%, S = 94.64%, SP = 99.14%, PPV = 90.54%, NPV = 99.53%, and AUC = 0.918. The results of our DRPA when predicting the presence of any type of DR were ACC = 0.97, S = 0.89, SP = 0.98, PPV = 0.79, NPV = 0.98, and AUC = 0.92. CONCLUSIONS The AIRS performed well when reading and classifying the retinographies of T2DM patients with RDR. The DRPA performed well in predicting the absence of DR based on some clinical variables.
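As an editorial aside, the AUC values reported alongside the threshold-based metrics are computed from continuous model scores rather than hard labels. The snippet below is a minimal illustration with fabricated scores, assuming scikit-learn; it is unrelated to the study's data.

```python
# Illustrative only: AUC ranks continuous model scores against reference labels,
# unlike ACC/S/SP/PPV/NPV, which require a fixed decision threshold.
from sklearn.metrics import roc_auc_score, roc_curve

labels = [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]                                 # 1 = referable DR
scores = [0.08, 0.21, 0.88, 0.65, 0.30, 0.92, 0.15, 0.55, 0.40, 0.05]   # model probabilities

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)   # operating points along the ROC curve
print(f"AUC = {auc:.3f} over {len(thresholds)} candidate thresholds")
```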
Affiliation(s)
- Marc Baget-Bernaldiz, Pedro Romero-Aroca, Raul Navarro-Gil, Adriana Hernando-Comerma, Angel Bautista-Perez, Monica Llagostera-Serra, Cristian Morente-Lorenzo, Montse Vizcarro, Alejandra Mira-Puerto
- Ophthalmology Service, Hospital Universitari Sant Joan, Institut d'Investigació Sanitària Pere Virgili [IISPV], Universitat Rovira i Virgili, 43204 Reus, Spain
- Benilde Fontoba-Poveda
- Responsible for Diabetic Retinopathy Eye Screening Program in Primary Care in Baix Llobregat Barcelona (Spain), Institut d'Investigació Sanitaria Pere Virgili [IISPV], 43204 Reus, Spain
18
|
Youssef A, Nichol AA, Martinez-Martin N, Larson DB, Abramoff M, Wolf RM, Char D. Ethical Considerations in the Design and Conduct of Clinical Trials of Artificial Intelligence. JAMA Netw Open 2024; 7:e2432482. [PMID: 39240560 PMCID: PMC11380101 DOI: 10.1001/jamanetworkopen.2024.32482] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 09/07/2024] Open
Abstract
Importance Safe integration of artificial intelligence (AI) into clinical settings often requires randomized clinical trials (RCTs) to compare AI efficacy with conventional care. Diabetic retinopathy (DR) screening is at the forefront of clinical AI applications, marked by the first US Food and Drug Administration (FDA) De Novo authorization for an autonomous AI for such use. Objective To determine the generalizability of the 7 ethical research principles for clinical trials endorsed by the National Institutes of Health (NIH), and identify ethical concerns unique to clinical trials of AI. Design, Setting, and Participants This qualitative study included semistructured interviews conducted with 11 investigators engaged in the design and implementation of clinical trials of AI for DR screening from November 11, 2022, to February 20, 2023. The study was a collaboration with the ACCESS (AI for Children's Diabetic Eye Exams) trial, the first clinical trial of autonomous AI in pediatrics. Participant recruitment initially utilized purposeful sampling, and later expanded with snowball sampling. Study methodology for analysis combined a deductive approach to explore investigators' perspectives of the 7 ethical principles for clinical research endorsed by the NIH and an inductive approach to uncover the broader ethical considerations of implementing clinical trials of AI within care delivery. Results A total of 11 participants (mean [SD] age, 47.5 [12.0] years; 7 male [64%], 4 female [36%]; 3 Asian [27%], 8 White [73%]) were included, with diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, and AI development. Key themes revealed several ethical challenges unique to clinical trials of AI. These themes included difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across various patient subgroups, and addressing the complexities inherent in the data use terms of informed consent. Conclusions and Relevance This qualitative study identified practical ethical challenges that investigators need to consider and negotiate when conducting AI clinical trials, exemplified by the DR screening use-case. These considerations call for further guidance on where to focus empirical and normative ethical efforts to best support the conduct of clinical trials of AI and minimize unintended harm to trial participants.
Collapse
Affiliation(s)
- Alaa Youssef
- Departments of Radiology, Stanford University School of Medicine, Stanford, California
| | - Ariadne A Nichol
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
| | - Nicole Martinez-Martin
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
- Department of Psychiatry, Stanford University School of Medicine, Stanford, California
| | - David B Larson
- Departments of Radiology, Stanford University School of Medicine, Stanford, California
| | - Michael Abramoff
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospital and Clinics, Iowa City
- Electrical and Computer Engineering, University of Iowa, Iowa City
| | - Risa M Wolf
- Division of Endocrinology, Department of Pediatrics, The Johns Hopkins School of Medicine, Baltimore, Maryland
| | - Danton Char
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
- Department of Anesthesiology, Division of Pediatric Cardiac Anesthesia, Stanford, California
| |
Collapse
|
19
|
Antaki F, Hammana I, Tessier MC, Boucher A, David Jetté ML, Beauchemin C, Hammamji K, Ong AY, Rhéaume MA, Gauthier D, Harissi-Dagher M, Keane PA, Pomp A. Implementation of Artificial Intelligence-Based Diabetic Retinopathy Screening in a Tertiary Care Hospital in Quebec: Prospective Validation Study. JMIR Diabetes 2024; 9:e59867. [PMID: 39226095 PMCID: PMC11408885 DOI: 10.2196/59867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2024] [Revised: 06/28/2024] [Accepted: 07/06/2024] [Indexed: 09/04/2024] Open
Abstract
BACKGROUND Diabetic retinopathy (DR) affects about 25% of people with diabetes in Canada. Early detection of DR is essential for preventing vision loss. OBJECTIVE We evaluated the real-world performance of an artificial intelligence (AI) system that analyzes fundus images for DR screening in a Quebec tertiary care center. METHODS We prospectively recruited adult patients with diabetes at the Centre hospitalier de l'Université de Montréal (CHUM) in Montreal, Quebec, Canada. Patients underwent dual-pathway screening: first by the Computer Assisted Retinal Analysis (CARA) AI system (index test), then by standard ophthalmological examination (reference standard). We measured the AI system's sensitivity and specificity for detecting referable disease at the patient level, along with its performance for detecting any retinopathy and diabetic macular edema (DME) at the eye level, and potential cost savings. RESULTS This study included 115 patients. CARA demonstrated a sensitivity of 87.5% (95% CI 71.9-95.0) and specificity of 66.2% (95% CI 54.3-76.3) for detecting referable disease at the patient level. For any retinopathy detection at the eye level, CARA showed 88.2% sensitivity (95% CI 76.6-94.5) and 71.4% specificity (95% CI 63.7-78.1). For DME detection, CARA had 100% sensitivity (95% CI 64.6-100) and 81.9% specificity (95% CI 75.6-86.8). Potential yearly savings from implementing CARA at the CHUM were estimated at CAD $245,635 (US $177,643.23, as of July 26, 2024) considering 5000 patients with diabetes. CONCLUSIONS Our study indicates that integrating a semiautomated AI system for DR screening demonstrates high sensitivity for detecting referable disease in a real-world setting. This system has the potential to improve screening efficiency and reduce costs at the CHUM, but more work is needed to validate it.
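The sensitivity and specificity above are reported with 95% CIs at the patient level. A minimal sketch of how such binomial intervals can be obtained with the Wilson score method (the 28/32 count is a hypothetical example consistent with the reported point estimate; the study's exact interval method is not restated here):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical: 28 of 32 patients with referable disease flagged by the AI system.
lo, hi = wilson_ci(28, 32)
print(f"sensitivity ~ {28 / 32:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```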
Collapse
Affiliation(s)
- Fares Antaki
- Institute of Ophthalmology, University College London, London, United Kingdom
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- The CHUM School of Artificial Intelligence in Healthcare, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Imane Hammana
- Health Technology Assessment Unit, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
| | - Marie-Catherine Tessier
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
| | - Andrée Boucher
- Division of Endocrinology, Department of Medicine, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
| | - Maud Laurence David Jetté
- Direction du soutien à la transformation, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
| | | | - Karim Hammamji
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
| | - Ariel Yuhan Ong
- Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
| | - Marc-André Rhéaume
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
| | - Danny Gauthier
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
| | - Mona Harissi-Dagher
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
| | - Pearse A Keane
- Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- NIHR Moorfields Biomedical Research Centre, London, United Kingdom
| | - Alfons Pomp
- Health Technology Assessment Unit, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Surgery, University of Montréal, Montreal, QC, Canada
| |
Collapse
|
20
|
Krogh M, Jensen MB, Sig Ager Jensen M, Hentze Hansen M, Germund Nielsen M, Vorum H, Kristensen JK. Exploring general practice staff perspectives on a teaching concept based on instruction videos for diabetic retinopathy screening - an interview study. Scand J Prim Health Care 2024:1-10. [PMID: 39225788 DOI: 10.1080/02813432.2024.2396873] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Accepted: 08/20/2024] [Indexed: 09/04/2024] Open
Abstract
OBJECTIVE The aim of this study is to explore general practice staff perspectives regarding a teaching concept based on instructional videos for conducting diabetic retinopathy (DR) screenings. Furthermore, this study aims to investigate the competencies acquired by the staff through this teaching concept. DESIGN AND SETTING Qualitative cross-sectional study conducted in general practice clinics in the North Denmark Region. METHOD A teaching concept was developed based on instruction videos to teach general practice staff to conduct DR screenings with automated grading through artificial intelligence. Semi-structured interviews were performed with 16 staff members to investigate their perspectives on the concept and acquired competencies. RESULTS This study found no substantial resistance to the teaching concept from staff; however, participants' satisfaction with the methods employed in the instruction session, the progression of learning curves, screening competencies, and their acceptance of a known knowledge gap during screenings varied slightly among the participants. CONCLUSION This study showed that the teaching concept can be used to teach general practice staff to conduct DR screenings. Staff perspectives on the teaching concept and acquired competencies varied, and this study suggests a few adjustments to the concept to accommodate staff preferences and establish more consistent competencies.
Collapse
Affiliation(s)
- Malene Krogh
- Center for General Practice, Aalborg University, Aalborg, Denmark
| | | | | | - Malene Hentze Hansen
- Department of Otorhinolaryngology, Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark
| | - Marie Germund Nielsen
- The Clinical Nursing Research Unit, Aalborg University Hospital, Aalborg, Denmark
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| | - Henrik Vorum
- Department of Ophthalmology, Aalborg University Hospital, Aalborg, Denmark
| | | |
Collapse
|
21
|
Scott IA, Miller T, Crock C. Using conversant artificial intelligence to improve diagnostic reasoning: ready for prime time? Med J Aust 2024; 221:240-243. [PMID: 39086025 DOI: 10.5694/mja2.52401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2024] [Accepted: 04/22/2024] [Indexed: 08/02/2024]
Affiliation(s)
- Ian A Scott
- University of Queensland, Brisbane, QLD
- Princess Alexandra Hospital, Brisbane, QLD
| | | | - Carmel Crock
- Royal Victorian Eye and Ear Hospital, Melbourne, VIC
| |
Collapse
|
22
|
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746 PMCID: PMC11385472 DOI: 10.1038/s41433-024-03085-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2023] [Revised: 04/03/2024] [Accepted: 04/11/2024] [Indexed: 05/13/2024] Open
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. For each of the 433 participants imaged, the same selected 45° colour fundus photograph was processed by all three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. Observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Collapse
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
| | - Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
| | - Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
| | - Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
| | - Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
| | - Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
| | - Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
| | - Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
| |
Collapse
|
23
|
Mai J, Schmidt-Erfurth U. Role of Artificial Intelligence in Retinal Diseases. Klin Monbl Augenheilkd 2024; 241:1023-1031. [PMID: 39284358 PMCID: PMC11405099 DOI: 10.1055/a-2378-6138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
Artificial intelligence (AI) has already found its way into ophthalmology, with the first approved algorithms available for use in routine clinical practice. Retinal diseases in particular are proving to be an important area of application for AI, as they are the main cause of blindness and the number of patients suffering from retinal diseases is constantly increasing. At the same time, regular imaging using high-resolution modalities in a standardised and reproducible manner generates immense amounts of data that can hardly be processed by human experts. In addition, ophthalmology is constantly experiencing new developments and breakthroughs that require a re-evaluation of patient management in routine clinical practice. AI is able to analyse these volumes of data efficiently and objectively and also provide new insights into disease progression and therapeutic mechanisms by identifying relevant biomarkers. AI can make a significant contribution to screening, classification and prognosis of various retinal diseases and can ultimately serve as a clinical decision support system that significantly reduces the burden on both everyday clinical practice and the healthcare system by making more efficient use of costly and time-consuming resources.
Collapse
Affiliation(s)
- Julia Mai
- Universitätsklinik für Augenheilkunde und Optometrie, Medizinische Universität Wien, Österreich
| | - Ursula Schmidt-Erfurth
- Universitätsklinik für Augenheilkunde und Optometrie, Medizinische Universität Wien, Österreich
| |
Collapse
|
24
|
Abramoff MD, Char D. What Do We Do with Physicians When Autonomous AI-Enabled Workflow is Better for Patient Outcomes? THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2024; 24:93-96. [PMID: 39225989 DOI: 10.1080/15265161.2024.2377111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/04/2024]
|
25
|
Li H, Jia W, Vujosevic S, Sabanayagam C, Grauslund J, Sivaprasad S, Wong TY. Current research and future strategies for the management of vision-threatening diabetic retinopathy. Asia Pac J Ophthalmol (Phila) 2024; 13:100109. [PMID: 39395715 DOI: 10.1016/j.apjo.2024.100109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 09/28/2024] [Accepted: 10/08/2024] [Indexed: 10/14/2024] Open
Abstract
Diabetic retinopathy (DR) is a major ocular complication of diabetes and the leading cause of blindness and visual impairment, particularly among working-age adults. Although the medical and economic burden of DR is significant and its global prevalence is expected to increase, particularly in low- and middle-income countries, a large portion of vision loss caused by DR remains preventable through early detection and timely intervention. This perspective reviewed the latest developments in research and innovation in three areas: first, novel biomarkers (including advanced imaging modalities, serum biomarkers, and artificial intelligence technology) to predict the incidence and progression of DR; second, screening and early detection of referable DR and vision-threatening DR (VTDR); and finally, novel therapeutic strategies for VTDR, including diabetic macular oedema (DME), with the goal of reducing diabetic blindness.
Collapse
Affiliation(s)
- Huating Li
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Shanghai, China
| | - Weiping Jia
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Shanghai, China
| | - Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy; Eye Clinic, IRCCS MultiMedica, Milan, Italy
| | - Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Ophthalmology, Vestfold Hospital Trust, Tønsberg, Norway
| | - Sobha Sivaprasad
- NIHR Moorfields Clinical Research Facility, Moorfields Eye Hospital, London, United Kingdom
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China.
| |
Collapse
|
26
|
Senthil R, Anand T, Somala CS, Saravanan KM. Bibliometric analysis of artificial intelligence in healthcare research: Trends and future directions. Future Healthc J 2024; 11:100182. [PMID: 39310219 PMCID: PMC11414662 DOI: 10.1016/j.fhj.2024.100182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2024] [Revised: 08/06/2024] [Accepted: 08/30/2024] [Indexed: 09/25/2024]
Abstract
Objective The presence of artificial intelligence (AI) in healthcare is a powerful and game-changing force that is transforming the industry as a whole. Using sophisticated algorithms and data analytics, AI offers unparalleled prospects for improving patient care, streamlining operational efficiency, and fostering innovation across the healthcare ecosystem. This study conducts a comprehensive bibliometric analysis of research on AI in healthcare, utilising the Scopus database as the primary data source. Methods Preliminary findings from 2013 identified 153 publications on AI and healthcare. Between 2019 and 2023, the number of publications increased exponentially, indicating significant growth and development in the field. The analysis employs various bibliometric indicators to assess research production performance, science mapping techniques, and thematic mapping analysis. Results The study reveals insights into research hotspots, thematic focus, and emerging trends in AI and healthcare research. Based on an extensive examination of the Scopus database, the study provides a brief overview of the field and suggests potential avenues for further investigation. Conclusion This article provides valuable contributions to understanding the current landscape of AI in healthcare, offering insights for future research directions and informing strategic decision making in the field.
Collapse
Affiliation(s)
- Renganathan Senthil
- Department of Bioinformatics, School of Lifesciences, Vels Institute of Science Technology and Advanced Studies (VISTAS), Pallavaram, Chennai 600117, Tamil Nadu, India
| | - Thirunavukarasou Anand
- SRIIC Lab, Faculty of Clinical Research, Sri Ramachandra Institute of Higher Education and Research, Chennai 600116, Tamil Nadu, India
- B Aatral Biosciences Private Limited, Bangalore 560091, Karnataka, India
| | | | - Konda Mani Saravanan
- B Aatral Biosciences Private Limited, Bangalore 560091, Karnataka, India
- Department of Biotechnology, Bharath Institute of Higher Education and Research, Chennai 600073, Tamil Nadu, India
| |
Collapse
|
27
|
Holm S. Ethical trade-offs in AI for mental health. Front Psychiatry 2024; 15:1407562. [PMID: 39267699 PMCID: PMC11390554 DOI: 10.3389/fpsyt.2024.1407562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/26/2024] [Accepted: 07/15/2024] [Indexed: 09/15/2024] Open
Abstract
It is expected that machine learning algorithms will enable better diagnosis, prognosis, and treatment in psychiatry. A central argument for deploying algorithmic methods in clinical decision-making in psychiatry is that they may enable not only faster and more accurate clinical judgments but also that they may provide a more objective foundation for clinical decisions. This article argues that the outputs of algorithms are never objective in the sense of being unaffected by human values and possibly biased choices. And it suggests that the best way to approach this is to ensure awareness of and transparency about the ethical trade-offs that must be made when developing an algorithm for mental health.
Collapse
Affiliation(s)
- Sune Holm
- Department of Food and Resource Economics, University of Copenhagen, Frederiksberg, Denmark
| |
Collapse
|
28
|
He H, Zhu J, Ye Z, Bao H, Shou J, Liu Y, Chen F. Using multimodal ultrasound including full-time-series contrast-enhanced ultrasound cines for identifying the nature of thyroid nodules. Front Oncol 2024; 14:1340847. [PMID: 39267842 PMCID: PMC11390443 DOI: 10.3389/fonc.2024.1340847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2024] [Accepted: 08/07/2024] [Indexed: 09/15/2024] Open
Abstract
Background Based on conventional ultrasound images of thyroid nodules, contrast-enhanced ultrasound (CEUS) videos were analyzed to investigate whether CEUS improves the classification accuracy for benign and malignant thyroid nodules using machine learning (ML) radiomics, compared with radiologists. Materials and methods The B-mode ultrasound (B-US), real-time elastography (RTE), color Doppler flow imaging (CDFI) and CEUS cines of patients from two centers were retrospectively gathered. Then, the region of interest (ROI) was delineated to extract the radiomics features. Seven ML algorithms combined with four kinds of radiomics data (B-US, B-US + CDFI + RTE, CEUS, and B-US + CDFI + RTE + CEUS) were applied to establish 28 models. The diagnostic performance of the ML models was compared with interpretations from expert and nonexpert readers. Results A total of 181 thyroid nodules from 181 patients, comprising 64 men (mean age, 42 ± 12 years) and 117 women (mean age, 46 ± 12 years), were included. Adaptive boosting (AdaBoost) achieved the highest area under the receiver operating characteristic curve (AUC) of 0.89 in the test set among the 28 models when combined with B-US + CDFI + RTE + CEUS data, and an AUC of 0.72 and 0.66 when combined with B-US and B-US + CDFI + RTE data, respectively. The AUC achieved by senior and junior radiologists was 0.78 versus (vs.) 0.69 (p > 0.05), 0.79 vs. 0.64 (p < 0.05), and 0.88 vs. 0.69 (p < 0.05) with B-US, B-US + CDFI + RTE and B-US + CDFI + RTE + CEUS, respectively. Conclusion With the addition of CEUS, the diagnostic performance was enhanced for all seven classifiers and for senior radiologists relative to conventional ultrasound images alone, while no enhancement was observed for junior radiologists. The diagnostic performance of the ML models was similar to that of senior radiologists but superior to that of junior radiologists.
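As a rough sketch of the modelling step described above (concatenating radiomics feature blocks from the different modalities and fitting one of the classifiers, here AdaBoost), assuming features have already been extracted from the ROIs; the arrays below are random placeholders, not study data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 181  # nodules
# Placeholder radiomics feature blocks per modality (normally extracted from ROIs).
X_bus = rng.normal(size=(n, 50))
X_cdfi_rte = rng.normal(size=(n, 40))
X_ceus = rng.normal(size=(n, 120))
y = rng.integers(0, 2, size=n)  # benign (0) vs malignant (1), placeholder labels

def test_auc(X, y):
    """Fit AdaBoost on a 70/30 split and return the test-set AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    clf = AdaBoostClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("B-US only                AUC:", round(test_auc(X_bus, y), 2))
print("B-US + CDFI + RTE + CEUS AUC:",
      round(test_auc(np.hstack([X_bus, X_cdfi_rte, X_ceus]), y), 2))
```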
Collapse
Affiliation(s)
- Hanlu He
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
| | - Junyan Zhu
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
| | - Zhengdu Ye
- Department of Ultrasound, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Haiwei Bao
- Department of Ultrasound, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Jinduo Shou
- Department of Ultrasound, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Ying Liu
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
| | - Fen Chen
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
| |
Collapse
|
29
|
Dos Reis MA, Künas CA, da Silva Araújo T, Schneiders J, de Azevedo PB, Nakayama LF, Rados DRV, Umpierre RN, Berwanger O, Lavinsky D, Malerbi FK, Navaux POA, Schaan BD. Advancing healthcare with artificial intelligence: diagnostic accuracy of machine learning algorithm in diagnosis of diabetic retinopathy in the Brazilian population. Diabetol Metab Syndr 2024; 16:209. [PMID: 39210394 PMCID: PMC11360296 DOI: 10.1186/s13098-024-01447-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2024] [Accepted: 08/12/2024] [Indexed: 09/04/2024] Open
Abstract
BACKGROUND In healthcare systems in general, access to diabetic retinopathy (DR) screening is limited. Artificial intelligence has the potential to increase care delivery. Therefore, we trained and evaluated the diagnostic accuracy of a machine learning algorithm for automated detection of DR. METHODS We included color fundus photographs from individuals from 4 databases (primary and specialized care settings), excluding uninterpretable images. The datasets consist of images from Brazilian patients, which differs from previous work. This allows for a more tailored application of the model to Brazilian patients, ensuring that the nuances and characteristics of this specific population are adequately captured. The sample was split into training (70%) and testing (30%) sets. A convolutional neural network was trained for image classification. The reference test was the combined decision of three ophthalmologists. The sensitivity, specificity, and area under the ROC curve of the algorithm for detecting referable DR (moderate non-proliferative DR; severe non-proliferative DR; proliferative DR and/or clinically significant macular edema) were estimated. RESULTS A total of 15,816 images (4590 patients) were included. The overall prevalence of any degree of DR was 26.5%. Compared with human evaluators (manual diagnosis of DR performed by an ophthalmologist), the deep learning algorithm achieved an area under the ROC curve of 0.98 (95% CI 0.97-0.98), with a specificity of 94.6% (95% CI 93.8-95.3) and a sensitivity of 93.5% (95% CI 92.2-94.9) at the point of greatest efficiency to detect referable DR. CONCLUSIONS A large database showed that this deep learning algorithm was accurate in detecting referable DR. This finding can aid universal healthcare systems such as Brazil's by optimizing screening processes, and the algorithm can serve as a tool for improving DR screening, making it more agile and expanding access to care.
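A compact sketch of the kind of transfer-learning classifier the methods describe; the ResNet50 backbone, input size, and single sigmoid head are assumptions for illustration, not the study's published architecture, and the data pipelines for the 70%/30% split are left out:

```python
import tensorflow as tf

# Sketch of a binary referable-DR classifier built by transfer learning.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(512, 512, 3), pooling="avg")
base.trainable = False  # warm-up: train only the classification head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(referable DR)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"),
                       tf.keras.metrics.SensitivityAtSpecificity(0.95)])

# train_ds / test_ds would be tf.data.Dataset objects holding the 70% training
# and 30% testing fundus photographs; they are not defined in this sketch.
# model.fit(train_ds, validation_data=test_ds, epochs=10)
```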
Collapse
Affiliation(s)
- Mateus A Dos Reis
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil.
- Universidade Feevale, Novo Hamburgo, RS, Brazil.
| | - Cristiano A Künas
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
| | - Thiago da Silva Araújo
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
| | - Josiane Schneiders
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
| | | | - Luis F Nakayama
- Department of Ophthalmology and Visual Sciences, Universidade Federal de São Paulo, São Paulo, Brazil
- Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Dimitris R V Rados
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- TelessaúdeRS Project, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
| | - Roberto N Umpierre
- TelessaúdeRS Project, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Department of Social Medicine, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
| | - Otávio Berwanger
- The George Institute for Global Health, Imperial College London, London, UK
| | - Daniel Lavinsky
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Department of Ophthalmology, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
| | - Fernando K Malerbi
- Department of Ophthalmology and Visual Sciences, Universidade Federal de São Paulo, São Paulo, Brazil
| | - Philippe O A Navaux
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
| | - Beatriz D Schaan
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Institute for Health Technology Assessment (IATS) - CNPq, Porto Alegre, Brazil
- Endocrinology Unit, Hospital de Clínicas de Porto Alegre, Porto Alegre, RS, Brazil
| |
Collapse
|
30
|
Li Z, Wang L, Qiang W, Chen K, Wang Z, Zhang Y, Xie H, Wu S, Jiang J, Chen W. DeepMonitoring: a deep learning-based monitoring system for assessing the quality of cornea images captured by smartphones. Front Cell Dev Biol 2024; 12:1447067. [PMID: 39258227 PMCID: PMC11385315 DOI: 10.3389/fcell.2024.1447067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2024] [Accepted: 08/19/2024] [Indexed: 09/12/2024] Open
Abstract
Smartphone-based artificial intelligence (AI) diagnostic systems could enable high-risk patients to self-screen for corneal diseases (e.g., keratitis) rather than relying solely on detection in traditional face-to-face medical practice, allowing patients to identify their own corneal diseases proactively at an early stage. However, AI diagnostic systems perform substantially worse on low-quality images, which are unavoidable in real-world environments (and especially common in patient-recorded images) due to various factors, hindering the implementation of these systems in clinical practice. Here, we construct a deep learning-based image quality monitoring system (DeepMonitoring) not only to discern low-quality cornea images created by smartphones but also to identify the underlying factors contributing to the generation of such low-quality images, which can guide operators to acquire high-quality images in a timely manner. This system performs well across the validation, internal, and external testing sets, with AUCs ranging from 0.984 to 0.999. DeepMonitoring holds the potential to filter out low-quality cornea images produced by smartphones, facilitating the application of smartphone-based AI diagnostic systems in real-world clinical settings, especially in the context of self-screening for corneal diseases.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Lei Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Wei Qiang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
| | - Kuan Chen
- Cangnan Hospital, Wenzhou Medical University, Wenzhou, China
| | - Zhouqian Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Yi Zhang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - He Xie
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Shanjun Wu
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
| | - Wei Chen
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| |
Collapse
|
31
|
Nguyen V, Iyengar S, Rasheed H, Apolo G, Li Z, Kumar A, Nguyen H, Bohner A, Dhodapkar R, Do J, Duong A, Gluckstein J, Hong K, Humayun L, James A, Lee J, Nguyen K, Wong B, Ambite JL, Kesselman C, Daskivich L, Pazzani M, Xu BY. Expert-Level Detection of Referable Glaucoma from Fundus Photographs in a Safety Net Population: The AI and Teleophthalmology in Los Angeles Initiative. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2024:2024.08.25.24312563. [PMID: 39252888 PMCID: PMC11383486 DOI: 10.1101/2024.08.25.24312563] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/11/2024]
Abstract
Purpose To develop and test a deep learning (DL) algorithm for detecting referable glaucoma in the Los Angeles County (LAC) Department of Health Services (DHS) teleretinal screening program. Methods Fundus photographs and patient-level labels of referable glaucoma (defined as cup-to-disc ratio [CDR] ≥ 0.6) provided by 21 trained optometrist graders were obtained from the LAC DHS teleretinal screening program. A DL algorithm based on the VGG-19 architecture was trained using patient-level labels generalized to images from both eyes. Area under the receiver operating curve (AUC), sensitivity, and specificity were calculated to assess algorithm performance using an independent test set that was also graded by 13 clinicians with one to 15 years of experience. Algorithm performance was tested using reference labels provided by either LAC DHS optometrists or an expert panel of 3 glaucoma specialists. Results 12,098 images from 5,616 patients (2,086 referable glaucoma, 3,530 non-glaucoma) were used to train the DL algorithm. In this dataset, mean age was 56.8 ± 10.5 years with 54.8% females and 68.2% Latinos, 8.9% Blacks, 2.7% Caucasians, and 6.0% Asians. 1,000 images from 500 patients (250 referable glaucoma, 250 non-glaucoma) with similar demographics (p ≥ 0.57) were used to test the DL algorithm. Algorithm performance matched or exceeded that of all independent clinician graders in detecting patient-level referable glaucoma based on LAC DHS optometrist (AUC = 0.92) or expert panel (AUC = 0.93) reference labels. Clinician grader sensitivity (range: 0.33-0.99) and specificity (range: 0.68-0.98) ranged widely and did not correlate with years of experience (p ≥ 0.49). Algorithm performance (AUC = 0.93) also matched or exceeded the sensitivity (range: 0.78-1.00) and specificity (range: 0.32-0.87) of 6 LAC DHS optometrists in the subsets of the test dataset they graded based on expert panel reference labels. Conclusions A DL algorithm for detecting referable glaucoma developed using patient-level data provided by trained LAC DHS optometrists approximates or exceeds performance by ophthalmologists and optometrists, who exhibit variable sensitivity and specificity unrelated to experience level. Implementation of this algorithm in screening workflows could help reallocate eye care resources and provide more reproducible and timely glaucoma care.
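One methodological detail above is that patient-level referable-glaucoma labels were generalized to images from both eyes, and performance was then assessed back at the patient level. A minimal sketch of that label propagation plus a patient-level aggregation step (the data structures and the max-over-eyes rule are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical records: (patient_id, image_id, model probability of referable glaucoma)
image_preds = [("p1", "p1_od.jpg", 0.83), ("p1", "p1_os.jpg", 0.41),
               ("p2", "p2_od.jpg", 0.12), ("p2", "p2_os.jpg", 0.07)]
patient_labels = {"p1": 1, "p2": 0}  # 1 = referable glaucoma (CDR >= 0.6)

# Propagate each patient-level label to every image of that patient (training labels).
image_labels = {img: patient_labels[pid] for pid, img, _ in image_preds}

# Aggregate image-level probabilities back to a patient-level score (max over eyes).
patient_scores = defaultdict(float)
for pid, _, prob in image_preds:
    patient_scores[pid] = max(patient_scores[pid], prob)

for pid, score in patient_scores.items():
    call = "referable" if score >= 0.5 else "non-glaucoma"
    print(pid, call, f"(label={patient_labels[pid]}, score={score:.2f})")
```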
Collapse
|
32
|
Datta D, Ray S, Martinez L, Newman D, Dalmida SG, Hashemi J, Sareli C, Eckardt P. Feature Identification Using Interpretability Machine Learning Predicting Risk Factors for Disease Severity of In-Patients with COVID-19 in South Florida. Diagnostics (Basel) 2024; 14:1866. [PMID: 39272651 PMCID: PMC11394003 DOI: 10.3390/diagnostics14171866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2024] [Revised: 08/16/2024] [Accepted: 08/21/2024] [Indexed: 09/15/2024] Open
Abstract
Objective: The objective of the study was to establish an AI-driven decision support system by identifying the most important features of disease severity for Intensive Care Unit (ICU) admission with Mechanical Ventilation (MV) requirement, ICU admission, and InterMediate Care Unit (IMCU) admission for hospitalized patients with COVID-19 in South Florida. The features implicated in the risk factors identified through model interpretability can be used to plan treatment earlier, before critical conditions worsen. Methods: We analyzed eHR data from 5371 patients diagnosed with COVID-19 from South Florida Memorial Healthcare Systems admitted between March 2020 and January 2021 to predict the need for ICU with MV, ICU, and IMCU admission. A Random Forest classifier was trained on patient data collected at hospital admission and augmented by SMOTE. We then compared the importance of features utilizing different model interpretability analyses, such as SHAP, MDI, and Permutation Importance. Results: The models for ICU with MV, ICU, and IMCU admission identified the following overlapping factors as the most important predictors across the three outcomes: age, race, sex, BMI, diarrhea, diabetes, hypertension, early stages of kidney disease, and pneumonia. It was observed that individuals over 65 years ('older adults'), males, current smokers, and individuals with a BMI classified as 'overweight' or 'obese' were at greater risk of severe illness. The severity was intensified by the co-occurrence of two interacting features (e.g., diarrhea and diabetes). Conclusions: The top features identified by the models' interpretability were from the 'sociodemographic characteristics', 'pre-hospital comorbidities', and 'medications' categories. However, 'pre-hospital comorbidities' played a vital role in the different critical conditions. In addition to individual feature importance, the feature interactions also provide crucial information for predicting the most likely outcome of patients' conditions when urgent treatment plans are needed during a surge of patients in a pandemic.
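A condensed sketch of the pipeline the methods describe (SMOTE oversampling of the training split, a Random Forest classifier, and a comparison of importance measures); imbalanced-learn and shap are assumed to be installed, and the feature matrix is a random placeholder rather than the eHR data:

```python
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))            # placeholder admission features
y = (rng.random(1000) < 0.15).astype(int)  # imbalanced outcome, e.g. ICU with MV

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance training data

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)

mdi = rf.feature_importances_                                   # Mean Decrease in Impurity
perm = permutation_importance(rf, X_te, y_te, n_repeats=10,
                              random_state=0).importances_mean  # Permutation Importance
shap_values = shap.TreeExplainer(rf).shap_values(X_te)          # SHAP values

print("Top feature index by MDI:", int(np.argmax(mdi)),
      "| by permutation importance:", int(np.argmax(perm)))
```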
Collapse
Affiliation(s)
- Debarshi Datta
- Christine E. Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Subhosit Ray
- Christine E. Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Laurie Martinez
- Christine E. Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - David Newman
- Christine E. Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Safiya George Dalmida
- Christine E. Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Javad Hashemi
- College of Engineering & Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
| | | | - Paula Eckardt
- Memorial Healthcare System, Hollywood, FL 33021, USA
| |
Collapse
|
33
|
Liu Z, Han X, Gao L, Chen S, Huang W, Li P, Wu Z, Wang M, Zheng Y. Cost-effectiveness of incorporating self-imaging optical coherence tomography into fundus photography-based diabetic retinopathy screening. NPJ Digit Med 2024; 7:225. [PMID: 39181938 PMCID: PMC11344775 DOI: 10.1038/s41746-024-01222-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2023] [Accepted: 08/13/2024] [Indexed: 08/27/2024] Open
Abstract
Diabetic macular edema (DME) has emerged as the foremost cause of vision loss in the population with diabetes. Early detection of DME is paramount, yet the prevailing diabetic retinopathy (DR) screening pathway, relying on two-dimensional and labor-intensive fundus photography (FP), results in frequent unwarranted referrals and overlooked diagnoses. Self-imaging optical coherence tomography (SI-OCT), offering fully automated, three-dimensional macular imaging, holds the potential to enhance DR screening. We conducted an observational study within a cohort of 1822 participants with diabetes, who received comprehensive assessments, including visual acuity testing, FP, and SI-OCT examinations. We compared the performance of three screening strategies: the conventional FP-based strategy, a combination strategy of FP and SI-OCT, and a simulated combination strategy of FP and manual spectral-domain OCT (SD-OCT). Additionally, we undertook a cost-effectiveness analysis utilizing Markov models to evaluate the costs and benefits of the three strategies for referable DR. We found that the FP + SI-OCT strategy demonstrated superior sensitivity (87.69% vs 61.53%) and specificity (98.29% vs 92.47%) in detecting DME when compared to the FP-based strategy. Importantly, the FP + SI-OCT strategy outperformed the FP-based strategy, with an incremental cost-effectiveness ratio (ICER) of $8016 per quality-adjusted life year (QALY), while the FP + SD-OCT strategy was less cost-effective, with an ICER of $45,754/QALY. Our results were robust to extensive sensitivity analyses, with the FP + SI-OCT strategy standing as the dominant choice in 69.36% of simulations conducted at the current willingness-to-pay threshold. In summary, incorporating SI-OCT into FP-based screening offers substantial enhancements in sensitivity and specificity for detecting DME and, most notably, in cost-effectiveness for DR screening.
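The incremental cost-effectiveness ratios above follow the standard definition ICER = (cost_new - cost_comparator) / (QALY_new - QALY_comparator), judged against a willingness-to-pay threshold. A tiny worked sketch with hypothetical per-patient costs and QALYs (not outputs of the study's Markov model):

```python
# Hypothetical per-patient lifetime costs (USD) and QALYs for two screening strategies.
strategies = {
    "FP-based":    {"cost": 1200.0, "qaly": 9.20},
    "FP + SI-OCT": {"cost": 1600.0, "qaly": 9.25},
}
wtp_threshold = 30000  # illustrative willingness-to-pay per QALY

base, new = strategies["FP-based"], strategies["FP + SI-OCT"]
icer = (new["cost"] - base["cost"]) / (new["qaly"] - base["qaly"])

print(f"ICER = ${icer:,.0f} per QALY gained")
print("cost-effective at threshold" if icer <= wtp_threshold
      else "not cost-effective at threshold")
```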
Collapse
Affiliation(s)
- Zitian Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
| | - Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Le Gao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Shida Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Peng Li
- MOPTIM Imaging Technique Co. Ltd, Shenzhen, China
| | - Zhiyan Wu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Mengchi Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| |
Collapse
|
34
|
Riotto E, Gasser S, Potic J, Sherif M, Stappler T, Schlingemann R, Wolfensberger T, Konstantinidis L. Accuracy of Autonomous Artificial Intelligence-Based Diabetic Retinopathy Screening in Real-Life Clinical Practice. J Clin Med 2024; 13:4776. [PMID: 39200918 PMCID: PMC11355215 DOI: 10.3390/jcm13164776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2024] [Revised: 07/24/2024] [Accepted: 08/08/2024] [Indexed: 09/02/2024] Open
Abstract
Background: In diabetic retinopathy, early detection and intervention are crucial in preventing vision loss and improving patient outcomes. In the era of artificial intelligence (AI) and machine learning, promising new diagnostic tools have emerged. The IDX-DR machine (Digital Diagnostics, Coralville, IA, USA) is a diagnostic tool that combines advanced imaging techniques, AI algorithms, and deep learning methodologies to identify and classify diabetic retinopathy. Methods: All patients who participated in our AI-based DR screening were considered for this study, and all retinal images were additionally reviewed retrospectively by two experienced retinal specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy were calculated for the IDX-DR machine compared to the graders' responses. Results: We included a total of 2282 images from 1141 patients who were screened between January 2021 and January 2023 at the Jules Gonin Eye Hospital in Lausanne, Switzerland. Sensitivity was calculated to be 100% for 'no DR', 'mild DR', and 'moderate DR'. Specificity for 'no DR', 'mild DR', 'moderate DR', and 'severe DR' was calculated to be, respectively, 78.4%, 81.2%, 93.4%, and 97.6%. PPV was calculated to be, respectively, 36.7%, 24.6%, 1.4%, and 0%. NPV was calculated to be 100% for each category. Accuracy was calculated to be higher than 80% for 'no DR', 'mild DR', and 'moderate DR'. Conclusions: In this study, based at the Jules Gonin Eye Hospital in Lausanne, we compared the autonomous diagnostic AI system of the IDX-DR machine for detecting diabetic retinopathy with human gradings established by two experienced retinal specialists. Our results showed that the IDX-DR machine consistently overestimates the DR stage, thus permitting clinicians to fully trust negative results delivered by the screening software. Nevertheless, all fundus images classified as 'mild DR' or greater should always be checked by a specialist in order to confirm whether the predicted stage is truly present.
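The per-grade sensitivity, specificity, PPV and NPV reported above can be derived one-versus-rest from a multi-class confusion matrix. A minimal sketch with made-up gradings (scikit-learn's confusion_matrix does the bookkeeping; none of the values reflect the study's data):

```python
from sklearn.metrics import confusion_matrix

grades = ["no DR", "mild DR", "moderate DR", "severe DR"]
# Hypothetical reference (specialist) and AI gradings for a handful of eyes.
y_ref = ["no DR", "no DR", "mild DR", "moderate DR", "no DR", "severe DR", "mild DR", "no DR"]
y_ai  = ["no DR", "mild DR", "mild DR", "moderate DR", "no DR", "severe DR", "moderate DR", "no DR"]

cm = confusion_matrix(y_ref, y_ai, labels=grades)
for i, grade in enumerate(grades):
    tp = cm[i, i]                       # this grade, correctly called
    fn = cm[i, :].sum() - tp            # this grade, called something else
    fp = cm[:, i].sum() - tp            # other grades called this grade
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"{grade:12s} sensitivity={sens:.2f} specificity={spec:.2f}")
```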
Collapse
|
35
|
Wang Y, Han X, Li C, Luo L, Yin Q, Zhang J, Peng G, Shi D, He M. Impact of Gold-Standard Label Errors on Evaluating Performance of Deep Learning Models in Diabetic Retinopathy Screening: Nationwide Real-World Validation Study. J Med Internet Res 2024; 26:e52506. [PMID: 39141915 PMCID: PMC11358665 DOI: 10.2196/52506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 12/30/2023] [Accepted: 03/22/2024] [Indexed: 08/16/2024] Open
Abstract
BACKGROUND For medical artificial intelligence (AI) training and validation, human expert labels are considered the gold standard that represents the correct answers or desired outputs for a given data set. These labels serve as a reference or benchmark against which the model's predictions are compared. OBJECTIVE This study aimed to assess the accuracy of a custom deep learning (DL) algorithm in classifying diabetic retinopathy (DR) and further demonstrate how label errors may contribute to this assessment in a nationwide DR-screening program. METHODS Fundus photographs from the Lifeline Express, a nationwide DR-screening program, were analyzed to identify the presence of referable DR using both (1) manual grading by National Health Service England-certificated graders and (2) a DL-based DR-screening algorithm with previously validated laboratory performance. To assess the accuracy of the labels, a random sample of images with disagreement between the DL algorithm and the labels was adjudicated by ophthalmologists who were masked to the previous grading results. The label error rates in this sample were then used to correct the number of negative and positive cases in the entire data set, yielding postcorrection labels. The DL algorithm's performance was evaluated against both pre- and postcorrection labels. RESULTS The analysis included 736,083 images from 237,824 participants. The DL algorithm exhibited a gap between the real-world performance and the lab-reported performance in this nationwide data set, with a sensitivity increase of 12.5% (from 79.6% to 92.5%, P<.001) and a specificity increase of 6.9% (from 91.6% to 98.5%, P<.001). In the random sample, 63.6% (560/880) of negative images and 5.2% (140/2710) of positive images were misclassified in the precorrection human labels. High myopia was the primary reason for misclassifying non-DR images as referable DR images, while laser spots were predominantly responsible for misclassified referable cases. The estimated label error rate for the entire data set was 1.2%. The label correction was estimated to bring about a 12.5% enhancement in the estimated sensitivity of the DL algorithm (P<.001). CONCLUSIONS Label errors based on human image grading, although affecting only a small percentage of images, can significantly affect the performance evaluation of DL algorithms in real-world DR screening.
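The label-correction step described above can be illustrated with a small sketch: error rates estimated from the adjudicated disagreement sample are used to move mislabeled cases between the positive and negative counts, and sensitivity is then re-estimated against the corrected labels. All counts below are hypothetical; only the two error rates echo the abstract, and the study's exact correction procedure may differ:

```python
# Hypothetical algorithm-vs-original-label counts for referable DR.
tp, fn = 800, 200    # algorithm positive / negative among label-positive images
fp, tn = 500, 65000  # algorithm positive / negative among label-negative images

# Label error rates estimated from the adjudicated disagreement sample (illustrative reuse).
neg_label_error = 0.636  # share of disagreeing "negative" labels that were truly positive
pos_label_error = 0.052  # share of disagreeing "positive" labels that were truly negative

# Move the estimated mislabeled disagreement cases to the corrected counts.
fp_truly_pos = round(fp * neg_label_error)  # algorithm was right: actually positive
fn_truly_neg = round(fn * pos_label_error)  # algorithm was right: actually negative

tp_corr = tp + fp_truly_pos
fn_corr = fn - fn_truly_neg

sens_pre = tp / (tp + fn)
sens_post = tp_corr / (tp_corr + fn_corr)
print(f"sensitivity: {sens_pre:.1%} (precorrection) -> {sens_post:.1%} (postcorrection)")
```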
Collapse
Affiliation(s)
- Yueye Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
| | - Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Cong Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lixia Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Qiuxia Yin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Jian Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Guankai Peng
- Guangzhou Vision Tech Medical Technology Co, Ltd, Guangzhou, China
| | - Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
| | - Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Centre for Eye and Vision Research, Hong Kong, China (Hong Kong)
| |
Collapse
|
36
|
Oualikene-Gonin W, Jaulent MC, Thierry JP, Oliveira-Martins S, Belgodère L, Maison P, Ankri J. Artificial intelligence integration in the drug lifecycle and in regulatory science: policy implications, challenges and opportunities. Front Pharmacol 2024; 15:1437167. [PMID: 39156111 PMCID: PMC11327028 DOI: 10.3389/fphar.2024.1437167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Accepted: 07/18/2024] [Indexed: 08/20/2024] Open
Abstract
Artificial intelligence tools promise transformative impacts in drug development. Regulatory agencies face challenges in integrating AI while ensuring reliability and safety in clinical trial approvals, drug marketing authorizations, and post-market surveillance. Incorporating these technologies into the existing regulatory framework and agency practices poses notable challenges, particularly in evaluating the data and models employed for these purposes. Rapid adaptation of regulations and internal processes is essential for agencies to keep pace with innovation, though achieving this requires collective stakeholder collaboration. This article thus delves into the need for adaptations of regulations throughout the drug development lifecycle, as well as the utilization of AI within internal processes of medicine agencies.
Collapse
Affiliation(s)
- Wahiba Oualikene-Gonin
- Agence Nationale de Sécurité des Médicaments et des Produits de Santé (ANSM) Saint-Denis, Saint-Denis, France
| | - Marie-Christine Jaulent
- INSERM, Laboratoire d'Informatique Médicale et d'Ingénierie des Connaissances en e-Santé, LIMICS, Sorbonne Université, Paris, France
| | | | - Sofia Oliveira-Martins
- Faculty of Pharmacy of Lisbon University, Lisbon, Portugal
- CHRC – Comprehensive Health Research Center, Evora, Portugal
| | - Laetitia Belgodère
- Agence Nationale de Sécurité des Médicaments et des Produits de Santé (ANSM) Saint-Denis, Saint-Denis, France
| | - Patrick Maison
- Agence Nationale de Sécurité des Médicaments et des Produits de Santé (ANSM) Saint-Denis, Saint-Denis, France
- EA 7379, Faculté de Santé, Université Paris-Est Créteil, Créteil, France
- CHI Créteil, Créteil, France
| | - Joël Ankri
- Université de Versailles St Quentin-Paris Saclay, Inserm U1018, Guyancourt, France
| | | |
Collapse
|
37
|
Christopher M, Hallaj S, Jiravarnsirikul A, Baxter SL, Zangwill LM. Novel Technologies in Artificial Intelligence and Telemedicine for Glaucoma Screening. J Glaucoma 2024; 33:S26-S32. [PMID: 38506792 DOI: 10.1097/ijg.0000000000002367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2024] [Accepted: 01/22/2024] [Indexed: 03/21/2024]
Abstract
PURPOSE To provide an overview of novel technologies in telemedicine and artificial intelligence (AI) approaches for cost-effective glaucoma screening. METHODS/RESULTS A narrative review was performed by summarizing research results, recent developments in glaucoma detection and care, and considerations related to telemedicine and AI in glaucoma screening. Telemedicine and AI approaches provide the opportunity for novel glaucoma screening programs in primary care, optometry, portable, and home-based settings. These approaches offer several advantages for glaucoma screening, including increasing access to care, lowering costs, identifying patients in need of urgent treatment, and enabling timely diagnosis and early intervention. However, challenges remain in implementing these systems, including integration into existing clinical workflows, ensuring equity for patients, and meeting ethical and regulatory requirements. Leveraging recent work towards standardized data acquisition as well as tools and techniques developed for automated diabetic retinopathy screening programs may provide a model for a cost-effective approach to glaucoma screening. CONCLUSION Leveraging novel technologies and advances in telemedicine and AI-based approaches to glaucoma detection shows promise for improving our ability to detect moderate and advanced glaucoma in primary care settings and to target individuals at high risk of having the disease.
Collapse
Affiliation(s)
- Mark Christopher
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
| | - Shahin Hallaj
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
| | - Anuwat Jiravarnsirikul
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
| | - Sally L Baxter
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
| | - Linda M Zangwill
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
| |
Collapse
|
38
|
Wiens J, Spector-Bagdady K, Mukherjee B. Toward Realizing the Promise of AI in Precision Health Across the Spectrum of Care. Annu Rev Genomics Hum Genet 2024; 25:141-159. [PMID: 38724019 DOI: 10.1146/annurev-genom-010323-010230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/29/2024]
Abstract
Significant progress has been made in augmenting clinical decision-making using artificial intelligence (AI) in the context of secondary and tertiary care at large academic medical centers. For such innovations to have an impact across the spectrum of care, additional challenges must be addressed, including inconsistent use of preventative care and gaps in chronic care management. The integration of additional data, including genomics and data from wearables, could prove critical in addressing these gaps, but technical, legal, and ethical challenges arise. On the technical side, approaches for integrating complex and messy data are needed. Data and design imperfections like selection bias, missing data, and confounding must be addressed. In terms of legal and ethical challenges, while AI has the potential to aid in leveraging patient data to make clinical care decisions, we also risk exacerbating existing disparities. Organizations implementing AI solutions must carefully consider how they can improve care for all and reduce inequities.
Collapse
Affiliation(s)
- Jenna Wiens
- Division of Computer Science and Engineering, College of Engineering, University of Michigan, Ann Arbor, Michigan, USA;
| | - Kayte Spector-Bagdady
- Department of Obstetrics and Gynecology and Center for Bioethics and Social Sciences in Medicine, University of Michigan Medical School, Ann Arbor, Michigan, USA
| | - Bhramar Mukherjee
- Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, Michigan, USA
| |
Collapse
|
39
|
Rao DP, Savoy FM, Sivaraman A, Dutt S, Shahsuvaryan M, Jrbashyan N, Hambardzumyan N, Yeghiazaryan N, Das T. Evaluation of an AI algorithm trained on an ethnically diverse dataset to screen a previously unseen population for diabetic retinopathy. Indian J Ophthalmol 2024; 72:1162-1167. [PMID: 39078960 PMCID: PMC11451790 DOI: 10.4103/ijo.ijo_2151_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 12/22/2023] [Accepted: 02/02/2024] [Indexed: 10/06/2024] Open
Abstract
PURPOSE This study aimed to determine the generalizability of an artificial intelligence (AI) algorithm trained on an ethnically diverse dataset to screen for referable diabetic retinopathy (RDR) in the Armenian population, which was unseen during AI development. METHODS This study comprised 550 patients with diabetes mellitus requiring diabetic retinopathy (DR) screening who visited polyclinics in Armenia over a 10-month period. The Medios AI-DR algorithm was developed using a robust, diverse, ethnically balanced dataset with no inherent bias and deployed offline on a smartphone-based fundus camera. In this study, the algorithm analyzed the retinal images captured using the target device for the presence of RDR (i.e., moderate non-proliferative diabetic retinopathy (NPDR) and/or clinically significant diabetic macular edema (CSDME), or more severe disease) and sight-threatening DR (STDR, i.e., severe NPDR and/or CSDME, or more severe disease). The AI output was compared with the consensus or majority image grading of three expert graders according to the International Clinical Diabetic Retinopathy severity scale. RESULTS Among the 478 subjects included in the analysis, the algorithm achieved a high classification sensitivity of 95.30% (95% CI: 91.9%-98.7%) and a specificity of 83.89% (95% CI: 79.9%-87.9%) for the detection of RDR. The sensitivity for STDR detection was 100%. CONCLUSION The study showed that the Medios AI-DR algorithm yields good accuracy in screening for RDR in the Armenian population. Based on our literature search, this is the only smartphone-based, offline AI model validated in different populations.
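As a hedged illustration of this kind of evaluation (not the study's actual analysis code), the Python sketch below compares binary AI referral calls against the majority grade of three experts and reports sensitivity and specificity with Wilson score 95% confidence intervals. The function names and toy data are invented for the example.

import numpy as np
from collections import Counter

def majority_grade(grades):
    # Majority vote of three graders; assumes binary referable / non-referable labels.
    return Counter(grades).most_common(1)[0][0]

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a proportion.
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def screening_metrics(ai_referable, grader_triplets):
    # Sensitivity/specificity of the AI calls against the majority human grade.
    truth = np.array([majority_grade(g) for g in grader_triplets], dtype=bool)
    ai = np.asarray(ai_referable, dtype=bool)
    tp = int(np.sum(ai & truth)); fn = int(np.sum(~ai & truth))
    tn = int(np.sum(~ai & ~truth)); fp = int(np.sum(ai & ~truth))
    return {"sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
            "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp))}

# Toy example: AI calls for three subjects versus three graders each.
print(screening_metrics([True, False, True],
                        [[True, True, False], [False, False, False], [True, True, True]]))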
Collapse
Affiliation(s)
- Divya P Rao
- AI&ML, Remidio Innovative Solutions, Inc, Glen Allen, USA
| | - Florian M Savoy
- AI&ML, Medios Technologies Pte Ltd, Remidio Innovative Solutions, Singapore
| | - Anand Sivaraman
- AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
| | - Sreetama Dutt
- AI&ML, Remidio Innovative Solutions Pvt Ltd, Bengaluru, India
| | - Marianne Shahsuvaryan
- Ophthalmology, Yerevan State Medical University, Armenia
- Armenian Eyecare Project, Yerevan State University, Armenia
| | | | | | | | - Taraprasad Das
- Vitreoretinal Services, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, India
| |
Collapse
|
40
|
Ashayeri H, Jafarizadeh A, Yousefi M, Farhadi F, Javadzadeh A. Retinal imaging and Alzheimer's disease: a future powered by Artificial Intelligence. Graefes Arch Clin Exp Ophthalmol 2024; 262:2389-2401. [PMID: 38358524 DOI: 10.1007/s00417-024-06394-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Revised: 01/22/2024] [Accepted: 01/30/2024] [Indexed: 02/16/2024] Open
Abstract
Alzheimer's disease (AD) is a neurodegenerative condition that primarily affects brain tissue. Because the retina and brain share the same embryonic origin, visual deficits have been reported in AD patients. Artificial Intelligence (AI) has recently received a lot of attention due to its immense power to process images, detect image hallmarks, and make clinical decisions (such as diagnosis) based on images. Since retinal changes have been reported in AD patients, AI has been proposed as a way to process retinal images to predict, diagnose, and determine the prognosis of AD. The purpose of this review was therefore to discuss the use of AI trained on retinal images of AD patients. According to previous research, AD patients experience changes in retinal thickness and retinal vessel density, which can occasionally occur before the onset of the disease's clinical symptoms. AI and machine vision can detect and use these changes for disease prediction, diagnosis, and prognosis. As a result, not only have dedicated algorithms been developed for this condition, but databases such as the Retinal OCTA Segmentation dataset (ROSE) have also been constructed for this purpose. The achievement of high accuracy, sensitivity, and specificity in classifying retinal images from AD and healthy groups is one of the major breakthroughs in using retinal-image-based AI for AD. It is fascinating that researchers could pinpoint individuals with a positive family history of AD based on the properties of their eyes. In conclusion, the growing application of AI in medicine points to a future role in assessing different aspects of AD, but cohort studies are needed to determine whether it can help follow up healthy persons at risk of AD for earlier diagnosis or assess the prognosis of patients with AD.
Collapse
Affiliation(s)
- Hamidreza Ashayeri
- Neuroscience Research Center (NSRC), Tabriz University of Medical Sciences, Tabriz, Iran
| | - Ali Jafarizadeh
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
| | - Milad Yousefi
- Faculty of Mathematics, Statistics and Computer Sciences, University of Tabriz, Tabriz, Iran
| | - Fereshteh Farhadi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
| | - Alireza Javadzadeh
- Department of Ophthalmology, Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran.
| |
Collapse
|
41
|
Sheng B, Pushpanathan K, Guan Z, Lim QH, Lim ZW, Yew SME, Goh JHL, Bee YM, Sabanayagam C, Sevdalis N, Lim CC, Lim CT, Shaw J, Jia W, Ekinci EI, Simó R, Lim LL, Li H, Tham YC. Artificial intelligence for diabetes care: current and future prospects. Lancet Diabetes Endocrinol 2024; 12:569-595. [PMID: 39054035 DOI: 10.1016/s2213-8587(24)00154-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2024] [Revised: 03/28/2024] [Accepted: 05/16/2024] [Indexed: 07/27/2024]
Abstract
Artificial intelligence (AI) use in diabetes care is increasingly being explored to personalise care for people with diabetes and adapt treatments for complex presentations. However, the rapid advancement of AI also introduces challenges such as potential biases, ethical considerations, and implementation barriers to ensuring that its deployment is equitable. Ensuring inclusive and ethical development of AI technology can empower both health-care providers and people with diabetes in managing the condition. In this Review, we explore and summarise the current and future prospects of AI across the diabetes care continuum, from enhancing screening and diagnosis to optimising treatment and predicting and managing complications.
Collapse
Affiliation(s)
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China; Key Laboratory of Artificial Intelligence, Ministry of Education, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Krithi Pushpanathan
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Quan Hziung Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
| | - Zhi Wei Lim
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Samantha Min Er Yew
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | | | - Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore; SingHealth Duke-National University of Singapore Diabetes Centre, Singapore Health Services, Singapore
| | - Charumathi Sabanayagam
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Nick Sevdalis
- Centre for Behavioural and Implementation Science Interventions, National University of Singapore, Singapore
| | | | - Chwee Teck Lim
- Department of Biomedical Engineering, National University of Singapore, Singapore; Institute for Health Innovation and Technology, National University of Singapore, Singapore; Mechanobiology Institute, National University of Singapore, Singapore
| | - Jonathan Shaw
- Baker Heart and Diabetes Institute, Melbourne, VIC, Australia
| | - Weiping Jia
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Elif Ilhan Ekinci
- Australian Centre for Accelerating Diabetes Innovations, Melbourne Medical School and Department of Medicine, University of Melbourne, Melbourne, VIC, Australia; Department of Endocrinology, Austin Health, Melbourne, VIC, Australia
| | - Rafael Simó
- Diabetes and Metabolism Research Unit, Vall d'Hebron University Hospital and Vall d'Hebron Research Institute, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas, Instituto de Salud Carlos III, Madrid, Spain
| | - Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia; Department of Medicine and Therapeutics, Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Asia Diabetes Foundation, Hong Kong Special Administrative Region, China
| | - Huating Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
| | - Yih-Chung Tham
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.
| |
Collapse
|
42
|
Holm S. Data-driven decisions about individual patients: The case of medical AI. J Eval Clin Pract 2024; 30:735-740. [PMID: 37491780 DOI: 10.1111/jep.13904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 06/02/2023] [Accepted: 07/12/2023] [Indexed: 07/27/2023]
Abstract
There are high hopes that clinical decisions can be improved by adopting algorithms trained to estimate the likelihood that a patient suffers from a condition C. Drawing on work on the epistemic value of purely statistical evidence in legal epistemology, I show that a certain type of AI device for making medical decisions about persons relies on purely statistical evidence, and that this raises an important question about the appropriateness of relying on such devices for allocating health resources. If the argument I present is sound, it suggests a radical rethinking of the use of prevalent types of AI devices, as well as of the use of statistical evidence in medical practice more generally.
Collapse
Affiliation(s)
- Sune Holm
- Department of Food and Resource Economics, University of Copenhagen, Frederiksberg, Denmark
| |
Collapse
|
43
|
Benetz BAM, Shivade VS, Joseph NM, Romig NJ, McCormick JC, Chen J, Titus MS, Sawant OB, Clover JM, Yoganathan N, Menegay HJ, O'Brien RC, Wilson DL, Lass JH. Automatic Determination of Endothelial Cell Density From Donor Cornea Endothelial Cell Images. Transl Vis Sci Technol 2024; 13:40. [PMID: 39177992 PMCID: PMC11346145 DOI: 10.1167/tvst.13.8.40] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2024] [Accepted: 06/21/2024] [Indexed: 08/24/2024] Open
Abstract
Purpose To determine endothelial cell density (ECD) from real-world donor cornea endothelial cell (EC) images using a self-supervised deep learning segmentation model. Methods Two eye banks (Eversight, VisionGift) provided 15,138 single, unique EC images from 8169 donors along with their demographics, tissue characteristics, and ECD. This dataset was utilized for self-supervised training and deep learning inference. The Cornea Image Analysis Reading Center (CIARC) provided a second dataset of 174 donor EC images selected based on image and tissue quality. These images were used to train a supervised deep learning cell border segmentation model. The comparison between manual and automated determination of ECD was restricted to the 1939 test EC images with at least 100 cells counted by both methods. Results The ECD measurements from both methods were in excellent agreement, with an rc of 0.77 (95% confidence interval [CI], 0.75-0.79; P < 0.001) and a bias of 123 cells/mm2 (95% CI, 114-131; P < 0.001); 81% of the automated ECD values were within 10% of the manual ECD values. When the analysis was further restricted to the cropped image, the rc was 0.88 (95% CI, 0.87-0.89; P < 0.001), the bias was 46 cells/mm2 (95% CI, 39-53; P < 0.001), and 93% of the automated ECD values were within 10% of the manual ECD values. Conclusions Deep learning analysis provides accurate ECDs of donor images, potentially reducing analysis time and training requirements. Translational Relevance The approach of this study, a robust methodology for automatically evaluating donor cornea EC images, could expand the quantitative determination of endothelial health beyond ECD.
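For readers unfamiliar with the agreement statistics quoted above, the following Python sketch (toy values only; the function names are our own, not the study's) shows how Lin's concordance correlation coefficient (rc), the mean bias, and the percentage of automated ECDs falling within 10% of the manual values can be computed for paired measurements.

import numpy as np

def lin_ccc(x, y):
    # Lin's concordance correlation coefficient between paired measurements.
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def agreement_summary(manual_ecd, auto_ecd):
    manual, auto = np.asarray(manual_ecd, float), np.asarray(auto_ecd, float)
    bias = float(np.mean(auto - manual))                               # cells/mm^2
    within_10pct = 100 * float(np.mean(np.abs(auto - manual) <= 0.10 * manual))
    return {"rc": lin_ccc(manual, auto), "bias": bias, "pct_within_10": within_10pct}

# Toy ECD values in cells/mm^2, for illustration only.
print(agreement_summary([2500, 2700, 3100, 2300], [2450, 2800, 3050, 2400]))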
Collapse
Affiliation(s)
- Beth Ann M. Benetz
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| | - Ved S. Shivade
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Naomi M. Joseph
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Nathan J. Romig
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - John C. McCormick
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Jiawei Chen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | | | - Onkar B. Sawant
- Eversight, Ann Arbor, MI, USA
- Center for Vision and Eye Banking Research, Eversight, Cleveland, OH, USA
| | | | | | - Harry J. Menegay
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| | | | - David L. Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Jonathan H. Lass
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| |
Collapse
|
44
|
Martelli E, Capoccia L, Di Francesco M, Cavallo E, Pezzulla MG, Giudice G, Bauleo A, Coppola G, Panagrosso M. Current Applications and Future Perspectives of Artificial and Biomimetic Intelligence in Vascular Surgery and Peripheral Artery Disease. Biomimetics (Basel) 2024; 9:465. [PMID: 39194444 DOI: 10.3390/biomimetics9080465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2024] [Revised: 07/05/2024] [Accepted: 07/24/2024] [Indexed: 08/29/2024] Open
Abstract
Artificial Intelligence (AI) made its first appearance in 1956, and since then it has progressively entered healthcare systems and patients' information and care. AI functions can be grouped under the following headings: Machine Learning (ML), Deep Learning (DL), Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Computer Vision (CV). Biomimetic intelligence (BI) applies principles found in natural systems to create biological algorithms, such as genetic algorithms and neural networks, for use in different scenarios. Chronic limb-threatening ischemia (CLTI) represents the last stage of peripheral artery disease (PAD), and its frequency has increased in recent years, together with the rising prevalence of diabetes and population ageing. AI and BI now make it possible to develop new diagnostic and treatment solutions in the vascular field, given access to clinical, biological, and imaging data. By assessing each patient's vascular anatomy and burden of atherosclerosis, classifying the level and degree of disease, sizing and planning the best endovascular treatment, defining perioperative complication risk, integrating experience and resources across specialties, identifying latent PAD, offering evidence-based solutions, and guiding surgeons in the choice of the best surgical technique, AI and BI challenge the role of the physician's experience in PAD treatment.
Collapse
Affiliation(s)
- Eugenio Martelli
- Division of Vascular Surgery, Department of Surgery, S Maria Goretti Hospital, 81100 Latina, Italy
- Department of General and Specialist Surgery, Sapienza University of Rome, 00161 Rome, Italy
- Faculty of Medicine, Saint Camillus International University of Health Sciences, 00131 Rome, Italy
| | - Laura Capoccia
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Marco Di Francesco
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Eduardo Cavallo
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Maria Giulia Pezzulla
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Giorgio Giudice
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Antonio Bauleo
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Giuseppe Coppola
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| | - Marco Panagrosso
- Division of Vascular and Endovascular Surgery, Department of Cardiovascular Sciences, S. Anna and S. Sebastiano Hospital, 81100 Caserta, Italy
| |
Collapse
|
45
|
Rodríguez-Miguel A, Arruabarrena C, Allendes G, Olivera M, Zarranz-Ventura J, Teus MA. Hybrid deep learning models for the screening of Diabetic Macular Edema in optical coherence tomography volumes. Sci Rep 2024; 14:17633. [PMID: 39085461 PMCID: PMC11291805 DOI: 10.1038/s41598-024-68489-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Accepted: 07/24/2024] [Indexed: 08/02/2024] Open
Abstract
Several studies published so far have used highly selective image datasets from unclear sources to train computer vision models, which may lead to overestimated results, while studies conducted in real-life settings remain scarce. To avoid image selection bias, we stacked convolutional and recurrent neural networks (CNN-RNN) to analyze complete optical coherence tomography (OCT) cubes in sequence and predict diabetic macular edema (DME) in a real-world diabetic retinopathy screening program. A retrospective cohort study was carried out. Over 4 years, 5314 OCT cubes from 4408 subjects who attended the diabetic retinopathy (DR) screening program were included. We arranged twenty-two (22) pre-trained CNNs in parallel with a bidirectional RNN layer stacked at the bottom, allowing the model to make a prediction for the whole OCT cube. A panel of retina experts built a DME ground truth that was later used to train a set of these CNN-RNN models with different configurations. For each trained CNN-RNN model, we performed threshold tuning to find the optimal cut-off point for binary classification of DME. Finally, the best models were selected according to sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) with their 95% confidence intervals (95%CI). An ensemble of the best models was also explored. 5188 cubes were non-DME and 126 were DME. Three models achieved an AUROC of 0.94. Among these, sensitivity and specificity (95%CI) ranged from 84.1-90.5 and 89.7-93.3, respectively, at threshold 1; from 89.7-92.1 and 80-83.1 at threshold 2; and from 80.2-81 and 93.8-97 at threshold 3. The ensemble model improved these results, and lower specificity was observed among subjects with sight-threatening DR. Performance did not vary with age, gender, or grade of DME. CNN-RNN models showed high diagnostic accuracy for detecting DME in a real-world setting. This engine allowed us to detect extra-foveal DME cases commonly overlooked in other studies, and showed potential for application as a first filter for non-referable patients in an outpatient center within a population-based DR screening program, sparing patients who would otherwise end up in specialized care.
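The CNN-RNN stacking described above can be sketched in a few lines of PyTorch. This is a simplified, hypothetical reconstruction rather than the authors' model: it uses a single pretrained ResNet18 backbone per B-scan instead of 22 parallel CNNs, and it assumes torchvision >= 0.13 for the weights API. A bidirectional GRU summarizes the sequence of per-scan features into one DME logit per cube, whose sigmoid output would then be compared against a tuned threshold.

import torch
import torch.nn as nn
from torchvision import models

class CnnRnnCubeClassifier(nn.Module):
    """One CNN feature extractor per B-scan, a bidirectional RNN over the scan
    sequence, and a single logit for the whole OCT cube (DME vs. no DME)."""
    def __init__(self, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # 512-d pooled features
        self.rnn = nn.GRU(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, cube):                     # cube: (batch, n_bscans, 3, H, W)
        b, s, c, h, w = cube.shape
        feats = self.features(cube.view(b * s, c, h, w)).view(b, s, 512)
        _, h_n = self.rnn(feats)                 # h_n: (2, batch, hidden)
        summary = torch.cat([h_n[0], h_n[1]], dim=1)
        return self.head(summary).squeeze(1)     # one raw logit per cube

# Toy forward pass: one synthetic cube of 16 B-scans resized to 224 x 224.
model = CnnRnnCubeClassifier()
prob = torch.sigmoid(model(torch.randn(1, 16, 3, 224, 224)))
# In a screening pipeline, `prob` would be compared against a cut-off tuned on a
# validation set (e.g., via ROC analysis) rather than the default 0.5.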
Collapse
Affiliation(s)
| | - Carolina Arruabarrena
- Department of Ophthalmology, Retina Unit, University Hospital "Príncipe de Asturias", 28805, Madrid, Spain
| | - Germán Allendes
- Department of Ophthalmology, Retina Unit, University Hospital "Príncipe de Asturias", 28805, Madrid, Spain
| | | | - Javier Zarranz-Ventura
- Hospital Clínic de Barcelona, University of Barcelona, 08036, Barcelona, Spain
- Institut de Investigacions Biomediques August Pi I Sunyer (IDIBAPS), 08036, Barcelona, Spain
| | - Miguel A Teus
- Department of Surgery, Medical and Social Sciences (Ophthalmology), University of Alcalá, 28871, Madrid, Spain
| |
Collapse
|
46
|
Huang JJ, Channa R, Wolf RM, Dong Y, Liang M, Wang J, Abramoff MD, Liu TYA. Autonomous artificial intelligence for diabetic eye disease increases access and health equity in underserved populations. NPJ Digit Med 2024; 7:196. [PMID: 39039218 PMCID: PMC11263546 DOI: 10.1038/s41746-024-01197-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Accepted: 07/12/2024] [Indexed: 07/24/2024] Open
Abstract
Diabetic eye disease (DED) is a leading cause of blindness in the world. Annual DED testing is recommended for adults with diabetes, but adherence to this guideline has historically been low. In 2020, Johns Hopkins Medicine (JHM) began deploying autonomous AI for DED testing. In this study, we aimed to determine whether autonomous AI implementation was associated with increased adherence to annual DED testing, and how this differed across patient populations. JHM primary care sites were categorized as "non-AI" (no autonomous AI deployment) or "AI-switched" (autonomous AI deployment by 2021). We conducted a propensity score weighting analysis to compare the change in adherence rates from 2019 to 2021 between non-AI and AI-switched sites. Our study included all adult patients with diabetes (>17,000) managed within JHM and has three major findings. First, AI-switched sites experienced a 7.6 percentage point greater increase in DED testing than non-AI sites from 2019 to 2021 (p < 0.001). Second, the adherence rate for Black/African Americans increased by 12.2 percentage points within AI-switched sites but decreased by 0.6 percentage points within non-AI sites (p < 0.001), suggesting that autonomous AI deployment improved access to retinal evaluation for historically disadvantaged populations. Third, autonomous AI is associated with improved health equity; for example, the adherence rate gap between Asian Americans and Black/African Americans shrank from 15.6% in 2019 to 3.5% in 2021. In summary, our results from real-world deployment in a large integrated healthcare system suggest that autonomous AI is associated with improvement in overall DED testing adherence, patient access, and health equity.
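A propensity-score-weighted before/after comparison of the kind described above can be sketched as follows in Python. The column names, covariates, and data are hypothetical, and this is a deliberately simplified ATT-weighted difference-in-differences, not the study's statistical analysis.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def weighted_did(df, covariates):
    # 1) Propensity of being managed at an AI-switched site, given patient covariates.
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[covariates], df["ai_site"])
          .predict_proba(df[covariates])[:, 1])
    # 2) ATT-style weights: treated patients weight 1, comparison patients ps/(1-ps).
    w = np.where(df["ai_site"] == 1, 1.0, ps / (1.0 - ps))
    # 3) Weighted change in adherence (2021 minus 2019) per group, then the difference.
    def change(group):
        m = (df["ai_site"] == group).to_numpy()
        return (np.average(df.loc[m, "adherent_2021"], weights=w[m])
                - np.average(df.loc[m, "adherent_2019"], weights=w[m]))
    return change(1) - change(0)   # proportion scale; multiply by 100 for percentage points

# Toy data for illustration only.
rng = np.random.default_rng(0)
df = pd.DataFrame({"ai_site": rng.integers(0, 2, 500),
                   "age": rng.normal(55, 12, 500),
                   "hba1c": rng.normal(7.5, 1.2, 500),
                   "adherent_2019": rng.integers(0, 2, 500),
                   "adherent_2021": rng.integers(0, 2, 500)})
print(f"weighted difference-in-differences: {weighted_did(df, ['age', 'hba1c']):.3f}")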
Collapse
Affiliation(s)
- Jane J Huang
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Roomasa Channa
- University of Wisconsin-Madison School of Medicine and Public Health, Madison, WI, USA
| | - Risa M Wolf
- Johns Hopkins Pediatric Diabetes Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Yiwen Dong
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA
| | - Mavis Liang
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA
| | - Jiangxia Wang
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA
| | - Michael D Abramoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
| | - T Y Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
| |
Collapse
|
47
|
Burlina S, Radin S, Poggiato M, Cioccoloni D, Raimondo D, Romanello G, Tommasi C, Lombardi S. Screening for diabetic retinopathy with artificial intelligence: a real world evaluation. Acta Diabetol 2024:10.1007/s00592-024-02333-x. [PMID: 38995312 DOI: 10.1007/s00592-024-02333-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/08/2024] [Accepted: 07/02/2024] [Indexed: 07/13/2024]
Abstract
AIM Periodic screening for diabetic retinopathy (DR) is effective for preventing blindness. Artificial intelligence (AI) systems could be useful for increasing DR screening rates in diabetic patients. The aim of this study was to compare the performance of the DAIRET system in detecting DR to that of ophthalmologists in a real-world setting. METHODS Fundus photography was performed with a nonmydriatic camera in 958 consecutive patients older than 18 years who were affected by diabetes and who were enrolled in the DR screening in the Diabetes and Endocrinology Unit and in the Eye Unit of ULSS8 Berica (Italy) between June 2022 and June 2023. All retinal images were evaluated by DAIRET, an AI-based machine learning algorithm. In addition, all images were graded by an ophthalmologist. The results obtained by DAIRET were compared with those obtained by the ophthalmologist. RESULTS Of the 958 patients included, 867 (90.5%) had retinal images of sufficient quality for evaluation by a human grader. The sensitivity for detecting cases of moderate DR and above was 1 (100%), and the sensitivity for detecting cases of mild DR was 0.84 ± 0.03. The specificity for detecting the absence of DR was lower (0.59 ± 0.04) because of the high number of false positives. CONCLUSION DAIRET showed optimal sensitivity in detecting all cases of referable DR (moderate DR or above) compared with a human grader. On the other hand, the specificity of DAIRET was low because of the high number of false positives, which limits its cost-effectiveness.
Collapse
Affiliation(s)
- Silvia Burlina
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy.
| | - Sandra Radin
- Eye Unit, ULSS 8 Berica, Montecchio Maggiore, Veneto, VI, Italy
| | - Marzia Poggiato
- Eye Unit, ULSS 8 Berica, Montecchio Maggiore, Veneto, VI, Italy
| | - Dario Cioccoloni
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
| | - Daniele Raimondo
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
| | - Giovanni Romanello
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
| | - Chiara Tommasi
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
| | - Simonetta Lombardi
- Diabetes and Endocrinology Unit, ULSS8 Berica, Arzignano, Veneto, VI, Italy
| |
Collapse
|
48
|
Radgoudarzi N, Gregg C, Quackenbush Q, Yiu G, Freeby M, Su G, Baxter S, Thorne C, Willard-Grace R. Implementation Mapping of the Collaborative University of California Teleophthalmology Initiative (CUTI): A Qualitative Study Using the Exploration, Preparation, Implementation, and Sustainment (EPIS) Framework. Cureus 2024; 16:e64179. [PMID: 39119397 PMCID: PMC11309586 DOI: 10.7759/cureus.64179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/09/2024] [Indexed: 08/10/2024] Open
Abstract
Background This study aimed to investigate the rationale, barriers, and facilitators of teleretinal camera implementation in primary care and endocrinology clinics for diabetic retinopathy (DR) screening across University of California (UC) health systems utilizing the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Methodology Institutional representatives from UC Los Angeles, San Diego, San Francisco, and Davis participated in a series of focus group meetings to elicit implementation facilitators and barriers for teleophthalmology programs within their campuses. Site representatives also completed a survey regarding their program's performance over the calendar year 2022 in the following areas: DR screening camera sites, payment sources and coding, screening workflows including clinical, information technology (IT), reading, results, pathologic findings, and follow-up, including patient outreach for abnormal results. Focus group and survey results were mapped to the EPIS framework to gain insights into the implementation process of these programs and identify areas for optimization. Results Four UC campuses with 20 active camera sites screened 7,450 patients in the calendar year 2022. The average DR screening rate across the four campuses was 55%. Variations in sources of payment, turnaround time, image-grading structure, image-report characteristics, IT infrastructure, and patient outreach strategies were identified across sites. Closing gaps in IT integration between data systems, ensuring the financial sustainability of the program, and optimizing patient outreach remain primary challenges across sites and serve as good opportunities for cross-institutional learning. Conclusions Despite the potential for long-term cost savings and improving access to care, numerous obstacles continue to hinder the widespread implementation of teleretinal DR screening. Implementation science approaches can identify strategies for addressing these challenges and optimizing implementation.
Collapse
Affiliation(s)
| | - Chhavi Gregg
- Informatics Services, University of California San Diego Health, San Diego, USA
| | - Quinn Quackenbush
- Family and Community Medicine, University of California San Diego Health, San Diego, USA
| | - Glenn Yiu
- Ophthalmology, University of California Davis Health, Sacramento, USA
| | - Matthew Freeby
- Endocrinology, University of California Los Angeles Health Systems, Los Angeles, USA
| | - George Su
- Pulmonary and Critical Care Medicine, University of California San Francisco Health Systems, San Francisco, USA
| | - Sally Baxter
- Ophthalmology, University of California San Diego Health, San Diego, USA
| | - Christine Thorne
- Primary Care, University of California San Diego Health, San Diego, USA
| | - Rachel Willard-Grace
- Primary Care, University of California San Francisco Health Systems, San Francisco, USA
| |
Collapse
|
49
|
Rojas-Carabali W, Agrawal R, Gutierrez-Sinisterra L, Baxter SL, Cifuentes-González C, Wei YC, Abisheganaden J, Kannapiran P, Wong S, Lee B, de-la-Torre A, Agrawal R. Natural Language Processing in medicine and ophthalmology: A review for the 21st-century clinician. Asia Pac J Ophthalmol (Phila) 2024; 13:100084. [PMID: 39059557 DOI: 10.1016/j.apjo.2024.100084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Revised: 07/08/2024] [Accepted: 07/19/2024] [Indexed: 07/28/2024] Open
Abstract
Natural Language Processing (NLP) is a subfield of artificial intelligence that focuses on the interaction between computers and human language, enabling computers to understand, generate, and derive meaning from human language. NLP's potential applications in the medical field are extensive and range from extracting data from Electronic Health Records (one of its most well-known and frequently exploited uses) to investigating relationships among genetics, biomarkers, drugs, and diseases to propose new medications. NLP can also be useful for clinical decision support, patient monitoring, and medical image analysis. Despite its vast potential, the real-world application of NLP is still limited due to various challenges and constraints, meaning that its evolution predominantly continues within the research domain. However, with the increasingly widespread use of NLP, particularly with the availability of large language models such as ChatGPT, it is crucial for medical professionals to be aware of the status, uses, and limitations of these technologies.
Collapse
Affiliation(s)
- William Rojas-Carabali
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore
| | - Rajdeep Agrawal
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
| | | | - Sally L Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, USA
| | - Carlos Cifuentes-González
- Neuroscience Research Group (NEUROS), Neurovitae Center for Neuroscience, Institute of Translational Medicine (IMT), Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Bogotá, Colombia
| | - Yap Chun Wei
- Health Services and Outcomes Research, National Healthcare Group, Singapore
| | - John Abisheganaden
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Health Services and Outcomes Research, National Healthcare Group, Singapore; Department of Respiratory Medicine, Tan Tock Seng Hospital, Singapore
| | | | - Sunny Wong
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
| | - Bernett Lee
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
| | - Alejandra de-la-Torre
- Neuroscience Research Group (NEUROS), Neurovitae Center for Neuroscience, Institute of Translational Medicine (IMT), Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Bogotá, Colombia
| | - Rupesh Agrawal
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Tan Tock Seng Hospital, National Healthcare Group Eye Institute, Singapore; Singapore Eye Research Institute, Singapore; Duke NUS Medical School, National University of Singapore, Singapore.
| |
Collapse
|
50
|
Wu JH, Lin S, Moghimi S. Big data to guide glaucoma treatment. Taiwan J Ophthalmol 2024; 14:333-339. [PMID: 39430357 PMCID: PMC11488808 DOI: 10.4103/tjo.tjo-d-23-00068] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 06/06/2023] [Indexed: 10/22/2024] Open
Abstract
Ophthalmology has been at the forefront of the medical application of big data. Often harnessed with a machine learning approach, big data has demonstrated potential to transform ophthalmic care, as evidenced by prior success on clinical tasks such as the screening of ophthalmic diseases and lesions via retinal images. With the recent establishment of various large ophthalmic datasets, there has been greater interest in determining whether the benefits of big data may extend to the downstream process of ophthalmic disease management. An area of substantial investigation has been the use of big data to help guide or streamline management of glaucoma, which remains a leading cause of irreversible blindness worldwide. In this review, we summarize relevant studies utilizing big data and discuss the application of the findings in the risk assessment and treatment of glaucoma.
Collapse
Affiliation(s)
- Jo-Hsuan Wu
- Hamilton Glaucoma Center, Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
| | - Shan Lin
- Glaucoma Center of San Francisco, San Francisco, CA, United States
| | - Sasan Moghimi
- Hamilton Glaucoma Center, Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
| |
Collapse
|