1.
Pennesi ME, Wang YZ, Birch DG. Deep learning aided measurement of outer retinal layer metrics as biomarkers for inherited retinal degenerations: opportunities and challenges. Curr Opin Ophthalmol 2024:00055735-990000000-00196. PMID: 39259656; DOI: 10.1097/icu.0000000000001088.
Abstract
PURPOSE OF REVIEW To summarize currently available retinal imaging and visual function testing methods for assessing inherited retinal degenerations (IRDs), with emphasis on the application of deep learning (DL) approaches to assist in the determination of structural biomarkers for IRDs. RECENT FINDINGS Recent work has centered on clinical trials for IRDs, the discovery of effective biomarkers to serve as trial endpoints, and DL applications that process retinal images to detect disease-related structural changes. SUMMARY Assessing photoreceptor loss is a direct way to evaluate IRDs. Outer retinal layer structures, including the outer nuclear layer, ellipsoid zone, photoreceptor outer segments, and RPE, are potential structural biomarkers for IRDs. More work may be needed on the structure-function relationship.
Affiliation(s)
- Mark E Pennesi
  - Retina Foundation of the Southwest, Dallas, Texas
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yi-Zhong Wang
  - Retina Foundation of the Southwest, Dallas, Texas
  - Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
- David G Birch
  - Retina Foundation of the Southwest, Dallas, Texas
  - Department of Ophthalmology, University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA

2.
Ahn SJ. Classification of Hydroxychloroquine Retinopathy: A Literature Review and Proposal for Revision. Diagnostics (Basel) 2024; 14:1803. PMID: 39202291; PMCID: PMC11353870; DOI: 10.3390/diagnostics14161803.
Abstract
Establishing universal standards for the nomenclature and classification of hydroxychloroquine retinopathy is essential. This review summarizes the classifications used for categorizing the patterns of hydroxychloroquine retinopathy and grading its severity in the literature, highlighting the limitations of these classifications based on recent findings. To overcome these limitations, I propose categorizing hydroxychloroquine retinopathy into four categories based on optical coherence tomography (OCT) findings: parafoveal (parafoveal damage only), pericentral (pericentral damage only), combined parafoveal and pericentral (both parafoveal and pericentral damage), and posterior polar (widespread damage over parafoveal, pericentral, and more peripheral areas), with or without foveal involvement. Alternatively, eyes can be categorized simply into parafoveal and pericentral retinopathy based on the most dominant area of damage, rather than the topographic distribution of overall retinal damage. Furthermore, I suggest a five-stage modified version of the current three-stage grading system of disease severity based on fundus autofluorescence (FAF) as follows: 0, no hyperautofluorescence (normal); 1, localized parafoveal or pericentral hyperautofluorescence on FAF; 2, hyperautofluorescence extending greater than 180° around the fovea; 3, combined retinal pigment epithelium (RPE) defects (hypoautofluorescence on FAF) without foveal involvement; and 4, fovea-involving hypoautofluorescence. These classification systems can better address the topographic characteristics of hydroxychloroquine retinopathy using disease patterns and assess the risk of vision-threatening retinopathy by stage, particularly with foveal involvement.
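The proposed five-stage FAF grading is, in effect, an ordered decision rule, and can be sketched as such. The function below is an illustrative encoding only; the parameter names (`hyper_af`, `hyper_extent_deg`, `rpe_defect`, `foveal_hypo_af`) are hypothetical stand-ins for the FAF findings described above, not terms from the paper.

```python
def faf_stage(hyper_af: bool, hyper_extent_deg: float,
              rpe_defect: bool, foveal_hypo_af: bool) -> int:
    """Illustrative encoding of the proposed five-stage FAF grading.

    hyper_af         -- any hyperautofluorescence present
    hyper_extent_deg -- angular extent of hyperautofluorescence around the fovea
    rpe_defect       -- hypoautofluorescent RPE defect present
    foveal_hypo_af   -- hypoautofluorescence involves the fovea
    """
    if foveal_hypo_af:
        return 4  # stage 4: fovea-involving hypoautofluorescence
    if rpe_defect:
        return 3  # stage 3: RPE defect (hypoautofluorescence) without foveal involvement
    if hyper_af and hyper_extent_deg > 180:
        return 2  # stage 2: hyperautofluorescence extending >180 degrees around the fovea
    if hyper_af:
        return 1  # stage 1: localized parafoveal or pericentral hyperautofluorescence
    return 0      # stage 0: normal

print(faf_stage(True, 90, False, False))   # localized hyper-AF -> stage 1
print(faf_stage(True, 270, True, False))   # RPE defect, fovea spared -> stage 3
```

Note that the more severe findings take precedence, mirroring the ordering of the proposed stages.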
Affiliation(s)
- Seong Joon Ahn
  - Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul 04763, Republic of Korea

3.
Kulbay M, Tuli N, Akdag A, Kahn Ali S, Qian CX. Optogenetics and Targeted Gene Therapy for Retinal Diseases: Unravelling the Fundamentals, Applications, and Future Perspectives. J Clin Med 2024; 13:4224. PMID: 39064263; PMCID: PMC11277578; DOI: 10.3390/jcm13144224.
Abstract
With the common aim of restoring the physiological function of defective cells, optogenetics and targeted gene therapies have shown great clinical potential and novelty in personalized medicine for inherited retinal diseases (IRDs). Optogenetics aims to bypass defective photoreceptors by introducing opsins with light-sensing capabilities. In contrast, targeted gene therapies, such as methods based on CRISPR-Cas9 and RNA interference with noncoding RNAs (i.e., microRNA, small interfering RNA, short hairpin RNA), consist of inducing normal gene or protein expression in affected cells. Because the challenges limiting their prompt introduction into clinical practice (i.e., engineering, cell or tissue delivery capabilities) have only been partially overcome, it is crucial to deepen the fields of knowledge applied to optogenetics and targeted gene therapy. The aim of this in-depth literature review is to explain the fundamentals and applications of optogenetics and targeted gene therapies while providing decision-making arguments for ophthalmologists. First, we review the biomolecular principles and engineering steps involved in optogenetics and the targeted gene therapies mentioned above, with a focus on the specific vectors and molecules for cell signaling; the importance of vector choice and engineering methods is discussed. Second, we summarize the ongoing clinical trials and most recent discoveries in optogenetics and targeted gene therapies for IRDs. Finally, we discuss the limits and current challenges of each novel therapy. We aim to provide, for the first time, science-based explanations that help clinicians justify the specificity of each therapy for a given disease and thereby improve clinical decision-making.
Affiliation(s)
- Merve Kulbay
  - Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3S5, Canada
- Nicolas Tuli
  - Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
- Arjin Akdag
  - Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
- Shigufa Kahn Ali
  - Centre de Recherche de l'Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Cynthia X. Qian
  - Centre de Recherche de l'Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
  - Department of Ophthalmology, Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada

4.
Vasireddi HK, K SD, Reddy GNVR. An enumerative pre-processing approach for retinopathy severity grading using an interpretable classifier: a comparative study. Graefes Arch Clin Exp Ophthalmol 2024; 262:2247-2267. PMID: 38400856; DOI: 10.1007/s00417-024-06396-y.
Abstract
BACKGROUND Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so does the delay in DR diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved. METHODS An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of the various optimization algorithms utilized in the deep network model. RESULTS Experiments were carried out on the MESSIDOR dataset to assess performance. The results show that the enumerative pipeline combination K1-K2-K3-DFNN-LOA performs better than the other combinations. Compared with various optimization algorithms and pre-trained models, the proposed model achieves the best performance, with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively. CONCLUSION This study focused on developing and implementing a DR screening system on color fundus photographs. This AI-based system offers the possibility of enhancing the efficacy and accessibility of DR diagnosis.
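The performance figures reported above (accuracy, precision, recall, F1) are all standard derivations from confusion-matrix counts. As a reminder of how they relate, a minimal sketch follows; the counts are illustrative and not taken from the study.

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Illustrative counts for one class of a DR-grading screen.
acc, prec, rec, f1 = binary_metrics(tp=123, fp=7, fn=2, tn=118)
print(f"acc={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

For a multi-class grading task, the macro-averaged metric is simply the unweighted mean of a per-class score computed this way.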
Affiliation(s)
- Hemanth Kumar Vasireddi
  - Computer Science and Engineering, National Institute of Technology, Silchar, 788010, Assam, India
  - Computer Science Engineering, Raghu Engineering College, Visakhapatnam, 531162, Andhra Pradesh, India
- Suganya Devi K
  - Computer Science and Engineering, National Institute of Technology, Silchar, 788010, Assam, India
- G N V Raja Reddy
  - Computer Science and Engineering, National Institute of Technology, Silchar, 788010, Assam, India
  - Computer Science Engineering, GITAM University, Visakhapatnam, 530045, Andhra Pradesh, India

5.
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. PMID: 38734807; PMCID: PMC11178755; DOI: 10.1007/s40123-024-00958-3.
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
  - Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
  - Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
  - Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland

6.
Swaminathan U, Daigavane S. Unveiling the Potential: A Comprehensive Review of Artificial Intelligence Applications in Ophthalmology and Future Prospects. Cureus 2024; 16:e61826. PMID: 38975538; PMCID: PMC11227442; DOI: 10.7759/cureus.61826.
Abstract
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly in the field of ophthalmology. This comprehensive review examines the current applications of AI in ophthalmology, highlighting its significant contributions to diagnostic accuracy, treatment efficacy, and patient care. AI technologies, such as deep learning algorithms, have demonstrated exceptional performance in the early detection and diagnosis of various eye conditions, including diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma. Additionally, AI has enhanced the analysis of ophthalmic imaging techniques like optical coherence tomography (OCT) and fundus photography, facilitating more precise disease monitoring and management. The review also explores AI's role in surgical assistance, predictive analytics, and personalized treatment plans, showcasing its potential to revolutionize clinical practice and improve patient outcomes. Despite these advancements, challenges such as data privacy, regulatory hurdles, and ethical considerations remain. The review underscores the need for continued research and collaboration among clinicians, researchers, technology developers, and policymakers to address these challenges and fully harness the potential of AI in improving eye health worldwide. By integrating AI with teleophthalmology and developing AI-driven wearable devices, the future of ophthalmic care promises enhanced accessibility, efficiency, and efficacy, ultimately reducing the global burden of visual impairment and blindness.
Affiliation(s)
- Uma Swaminathan
  - Ophthalmology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Sachin Daigavane
  - Ophthalmology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND

7.
Mhibik B, Kouadio D, Jung C, Bchir C, Toutée A, Maestri F, Gulic K, Miere A, Falcione A, Touati M, Monnet D, Bodaghi B, Touhami S. Automated detection of vitritis using ultrawide-field fundus photographs and deep learning. Retina 2024; 44:1034-1044. PMID: 38261816; DOI: 10.1097/iae.0000000000004049.
Abstract
BACKGROUND/PURPOSE To evaluate the performance of a deep learning algorithm for the automated detection and grading of vitritis on ultrawide-field imaging. METHODS Cross-sectional noninterventional study. Ultrawide-field fundus retinophotographs of uveitis patients were used. Vitreous haze was defined according to the six steps of the Standardization of Uveitis Nomenclature (SUN) classification. The deep learning framework TensorFlow and the DenseNet121 convolutional neural network were used to perform the classification task. The best-fitting model was tested in a validation study. RESULTS One thousand one hundred eighty-one images were included. The performance of the model for the detection of vitritis was good, with a sensitivity of 91%, a specificity of 89%, an accuracy of 0.90, and an area under the receiver operating characteristic curve of 0.97. When used on an external set of images, the accuracy for the detection of vitritis was 0.78. The accuracy for classifying vitritis into one of the six SUN grades was limited (0.61) but improved to 0.75 when the grades were grouped into three categories. When accepting an error of one grade, the accuracy of the six-class classification increased to 0.90, suggesting the need for a larger sample to improve model performance. CONCLUSION A new deep learning model based on ultrawide-field fundus imaging is described that provides an efficient tool for the detection of vitritis. The performance of the model for grading into three categories of increasing vitritis severity was acceptable. Its performance for the six-class grading of vitritis was limited but can probably be improved with a larger set of images.
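The "error of one grade" criterion used for the six-class result is simple to make precise: a prediction counts as correct when it lands within one SUN grade of the truth. A minimal sketch, with made-up grades rather than the study's data:

```python
def within_one_grade_accuracy(true_grades, pred_grades):
    """Fraction of predictions within +/-1 of the true grade (SUN grades 0-5)."""
    hits = sum(abs(t - p) <= 1 for t, p in zip(true_grades, pred_grades))
    return hits / len(true_grades)

# Illustrative grades only (not from the study).
truth = [0, 1, 2, 3, 4, 5, 2, 3]
preds = [0, 2, 2, 4, 2, 5, 1, 0]
print(within_one_grade_accuracy(truth, preds))  # 6 of 8 within one grade -> 0.75
```

This metric is always at least as high as exact-match accuracy, which is why the reported figure rises from 0.61 to 0.90 under the relaxed criterion.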
Affiliation(s)
- Bayram Mhibik
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Desire Kouadio
  - Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
- Camille Jung
  - Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
- Chemsedine Bchir
  - Department of Mathematics and Engineering Applications, Sorbonne Université, Paris, France
- Adelaide Toutée
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Federico Maestri
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Karmen Gulic
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Alexandra Miere
  - Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Créteil, France
- Alessandro Falcione
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Myriam Touati
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Dominique Monnet
  - Department of Ophthalmology, Université de Paris, Cochin University Hospital, Paris, France
- Bahram Bodaghi
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France
- Sara Touhami
  - Department of Ophthalmology, Sorbonne Université, Pitié Salpêtrière University Hospital, Paris, France

8.
Mehmood A, Ko J, Kim H, Kim J. Optimizing Image Enhancement: Feature Engineering for Improved Classification in AI-Assisted Artificial Retinas. Sensors (Basel) 2024; 24:2678. PMID: 38732784; PMCID: PMC11085662; DOI: 10.3390/s24092678.
Abstract
Artificial retinas have revolutionized the lives of many blind people by enabling them to perceive vision via an implanted chip. Despite significant advancements, some limitations cannot be ignored. Presenting all objects captured in a scene makes their identification difficult, and addressing this limitation is necessary because the artificial retina can utilize only a very limited number of pixels to represent vision information. In a multi-object scenario, this problem can be mitigated by enhancing images such that only the major objects are shown in vision. Although simple techniques like edge detection are used, they fall short of representing identifiable objects in complex scenarios, suggesting the idea of integrating primary object edges. To support this idea, the proposed classification model aims to identify the primary objects based on a suggested set of selective features. The proposed classification model can then be incorporated into the artificial retina system to filter multiple primary objects and enhance vision. Its ability to handle multiple objects enables the system to cope with complex real-world scenarios. The proposed classification model is based on a multi-label deep neural network specifically designed to leverage the selective feature set. Initially, the enhanced images proposed in this research are compared with those that utilize an edge detection technique for single, dual, and multi-object images. These enhancements are also verified through an intensity profile analysis. Subsequently, the proposed classification model's performance is evaluated to show the significance of utilizing the suggested features. This includes evaluating the model's ability to correctly classify the top five, four, three, two, and one object(s), with respective accuracies of up to 84.8%, 85.2%, 86.8%, 91.8%, and 96.4%. Several comparisons, including training/validation loss and accuracy, precision, recall, specificity, and area under the curve, indicate reliable results. Based on the overall evaluation of this study, it is concluded that using the suggested set of selective features not only improves the classification model's performance but also aligns with the specific problem of correctly identifying objects in multi-object scenarios. Therefore, the proposed classification model designed on the basis of selective features is considered a very useful tool for supporting the idea of optimizing image enhancement.
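The "top five, four, three, two, and one object(s)" accuracies suggest an evaluation in which the k highest-scoring predicted labels are checked against the true label set. The abstract does not spell out the exact criterion, so the sketch below is one plausible reading, with illustrative scores and labels:

```python
def topk_match_rate(scores, truths, k):
    """Fraction of samples whose k highest-scoring predicted labels all
    appear in the true label set -- one plausible reading of 'correctly
    classifying the top-k objects' (the paper's criterion may differ)."""
    hits = 0
    for score, true_labels in zip(scores, truths):
        # Labels sorted by predicted score, highest first.
        topk = sorted(score, key=score.get, reverse=True)[:k]
        hits += set(topk) <= true_labels
    return hits / len(scores)

# Illustrative per-label scores from a multi-label classifier.
scores = [{"person": 0.9, "car": 0.7, "dog": 0.2},
          {"person": 0.4, "car": 0.8, "dog": 0.6}]
truths = [{"person"}, {"car", "dog"}]
print(topk_match_rate(scores, truths, k=2))
```

Under this reading, smaller k is an easier task, which matches the reported trend of accuracy rising from 84.8% (top five) to 96.4% (top one).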
Affiliation(s)
- Asif Mehmood
  - Department of Biomedical Engineering, College of IT Convergence, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam-si 13120, Republic of Korea
- Jungbeom Ko
  - Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon 21936, Republic of Korea
- Hyunchul Kim
  - School of Information, University of California, 102 South Hall 4600, Berkeley, CA 94720, USA
- Jungsuk Kim
  - Department of Biomedical Engineering, College of IT Convergence, Gachon University, 1342 Seongnamdaero, Sujeong-gu, Seongnam-si 13120, Republic of Korea
  - Research and Development Laboratory, Cellico Company, Seongnam-si 13449, Republic of Korea

9.
Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. PMID: 38377630; DOI: 10.1016/j.compmedimag.2024.102355.
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in early disease diagnosis. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. First, we compare different methods and provide summary comparison tables. Second, we catalogue the public arteriovenous classification datasets and provide annotation development tables for the different datasets. Finally, we sort out the challenges of evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen
  - School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng
  - School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
  - Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao
  - School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu
  - School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China

10.
Veritti D, Rubinato L, Sarao V, De Nardin A, Foresti GL, Lanzetta P. Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024; 262:975-982. PMID: 37747539; PMCID: PMC10907411; DOI: 10.1007/s00417-023-06245-4.
Abstract
PURPOSE This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other medical-related fields. METHODS We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of ethical, moral, and legal implications of AI in healthcare. RESULTS Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare. CONCLUSION Solutions to address these issues include collecting high-quality data of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.
Affiliation(s)
- Daniele Veritti
  - Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Leopoldo Rubinato
  - Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Valentina Sarao
  - Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
  - Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
- Axel De Nardin
  - Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Gian Luca Foresti
  - Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta
  - Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
  - Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy

11.
Yadav S, Ong J, Zarnegar A, Driban M, Selvam A, Arora S, Singh SR, Chhablani J. Pigment epithelial detachment composition indices in central serous chorioretinopathy as a biomarker for disease activity: A computational methodology and 1 year outcomes. Eur J Ophthalmol 2024:11206721241235052. PMID: 38409789; DOI: 10.1177/11206721241235052.
Abstract
PURPOSE Investigation of pigment epithelial detachment (PED) characteristics in central serous chorioretinopathy (CSCR) is underrepresented in the literature. We present a novel computational approach to quantify PED composition indices (PEDCI) in CSCR and track changes over time. METHODS Thirty-four eyes with active CSCR were analyzed quarterly over a 1-year period. Cases were categorized as acute or chronic CSCR depending on a symptom duration of less than or more than 3 months, respectively. PED, retinal, and choroidal dimensions were manually measured, and interval changes were compared using repeated-measures ANOVA. PED composition analysis involved manual segmentation followed by automated subsegmentation of PED areas to identify serous, neovascular, and fibrous tissues. PEDCI for each component were compared between cases of acute and chronic CSCR. RESULTS CMT and NSD-h decreased by 65.2 µm (p = 0.01) and 86.5 µm (p < 0.01), respectively, at 12 months. At baseline, 7/17 acute CSCR eyes and 8/15 chronic CSCR eyes had a concomitant PED; acute cases had both serous and neovascular components (PEDCI-S: 16.95%, PEDCI-N: 40.3%), whereas chronic cases had only a neovascular component (PEDCI-S: 0%, PEDCI-N: 30.5%). At 12-month follow-up, 6/7 of the acute CSCR group and 6/8 of the chronic CSCR group had a concomitant PED; PEDCI-S was largest for acute CSCR (53.4%) and PEDCI-N was largest for chronic CSCR (46.7%). CONCLUSION We identify PEDCI as a novel biomarker to differentiate acute and chronic CSCR, with higher PEDCI-S in acute CSCR and higher PEDCI-N in chronic CSCR.
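The composition indices themselves reduce to area fractions: each component's share of the segmented PED area, expressed as a percentage. A minimal sketch under that assumption (pixel counts and class names are illustrative, not the paper's):

```python
def ped_composition_indices(pixel_counts):
    """Percent of the segmented PED area occupied by each tissue component.

    pixel_counts: pixels assigned to each tissue class within the PED,
    e.g. {'serous': ..., 'neovascular': ..., 'fibrous': ...}.
    Field names are illustrative; the paper's pipeline segments the PED
    manually and subsegments the components automatically.
    """
    total = sum(pixel_counts.values())
    return {name: 100.0 * n / total for name, n in pixel_counts.items()}

indices = ped_composition_indices({"serous": 530, "neovascular": 400, "fibrous": 70})
print(indices)
```

By construction the indices sum to 100%, so a rise in one component (e.g. PEDCI-S in resolving acute disease) necessarily shifts the others down.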
Affiliation(s)
- Sanya Yadav
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Joshua Ong
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Arman Zarnegar
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Matthew Driban
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Amrish Selvam
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Supriya Arora
  - Bahamas Vision Center and Princess Margaret Hospital, Nassau, NP, Bahamas
- Jay Chhablani
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA

12.
Piotr R, Robert R, Marek N, Michał I. Artificial intelligence enhanced ophthalmological screening in children: insights from a cohort study in Lubelskie Voivodeship. Sci Rep 2024; 14:254. PMID: 38168543; PMCID: PMC10761970; DOI: 10.1038/s41598-023-50665-5.
Abstract
This study investigates the prevalence of visual impairments, such as myopia, hyperopia, and astigmatism, among school-age children (7-9 years) in Lubelskie Voivodeship (Republic of Poland) and applies artificial intelligence (AI) to the detection of severe ocular diseases. A total of 1049 participants (1.7% of the total child population in the region) were examined through a combination of standardized visual acuity tests, autorefraction, and assessment of fundus images by a convolutional neural network (CNN) model. The results from this AI model were compared with assessments by two experienced ophthalmologists to gauge the model's accuracy. The results demonstrated myopia, hyperopia, and astigmatism prevalences of 3.7%, 16.9%, and 7.8%, respectively, with myopia showing a significant age-related increase and hyperopia decreasing with age. Model performance was evaluated using the Dice coefficient, which reached 93.3%, indicating that the CNN model was highly accurate. The study underscores the utility of AI in the early detection and diagnosis of severe ocular diseases, providing a foundation for future research to improve paediatric ophthalmic screening and treatment outcomes.
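The Dice coefficient used to compare the CNN's output with the ophthalmologists' assessments measures the overlap of two segmentations: twice the intersection divided by the sum of the two areas. A minimal sketch on toy binary masks (coordinates are illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of
    (row, col) pixel coordinates: 2*|A & B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: model and expert masks sharing 3 of 4 pixels each.
model = {(0, 0), (0, 1), (1, 0), (1, 1)}
expert = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(model, expert))  # 2*3 / (4+4) = 0.75
```

A Dice score of 93.3% therefore indicates near-complete overlap between the model's detections and the expert annotations.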
Collapse
Affiliation(s)
- Regulski Piotr
- Laboratory of Digital Imaging and Virtual Reality, Department of Dental and Maxillofacial Radiology, Medical University of Warsaw, Binieckiego 6 St., 02-097, Warsaw, Poland.
| | - Rejdak Robert
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, Lublin, Poland
| | | | - Iwański Michał
- Laboratory of Digital Imaging and Virtual Reality, Department of Dental and Maxillofacial Radiology, Medical University of Warsaw, Binieckiego 6 St., 02-097, Warsaw, Poland
| |
Collapse
|
13
|
Pinto-Coelho L. How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering (Basel) 2023; 10:1435. [PMID: 38136026 PMCID: PMC10740686 DOI: 10.3390/bioengineering10121435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 12/12/2023] [Accepted: 12/15/2023] [Indexed: 12/24/2023] Open
Abstract
The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This literature review explores the latest innovations and applications of AI in the field, highlighting its profound impact on medical diagnosis and patient care. The innovation segment explores cutting-edge developments in AI, such as deep learning algorithms, convolutional neural networks, and generative adversarial networks, which have significantly improved the accuracy and efficiency of medical image analysis. These innovations have enabled the rapid and accurate detection of abnormalities, from identifying tumors during radiological examinations to detecting early signs of eye disease in retinal images. The article also surveys applications of AI in medical imaging, including radiology, pathology, cardiology, and more. AI-based diagnostic tools not only speed up the interpretation of complex images but also improve the early detection of disease, ultimately delivering better outcomes for patients. Additionally, AI-based image processing facilitates personalized treatment plans, thereby optimizing healthcare delivery. This review underscores the paradigm shift that AI has brought to medical imaging and its role in revolutionizing diagnosis and patient care. By pairing cutting-edge AI techniques with their practical applications, it is clear that AI will continue to shape the future of healthcare in profound and positive ways.
Collapse
Affiliation(s)
- Luís Pinto-Coelho
- ISEP—School of Engineering, Polytechnic Institute of Porto, 4200-465 Porto, Portugal;
- INESCTEC, Campus of the Engineering Faculty of the University of Porto, 4200-465 Porto, Portugal
| |
Collapse
|
14
|
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Accurate Segmentation and Tracking of Chorda Tympani in Endoscopic Middle Ear Surgery with Artificial Intelligence. EAR, NOSE & THROAT JOURNAL 2023:1455613231212051. [PMID: 38083840 DOI: 10.1177/01455613231212051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2023] Open
Abstract
Objective: We introduce a novel endoscopic middle ear surgery dataset specifically designed for evaluating deep learning (DL)-based semantic segmentation of the chorda tympani. Methods: We curated a dataset comprising 8240 images from 25 patients, divided into a training set (20%, 1648 images), validation set (5%, 412 images), and test set (75%, 6180 images). We applied data augmentation to expand the training and validation sets fivefold (training set: 8240 images; validation set: 2060 images). We then used a multistage transfer learning method to establish, train, and validate several convolutional neural networks. Results: On the validation set of 2060 labeled images, the networks performed well, with U-Net being the most effective (mIoU = 0.8737, mPA = 0.9263). Furthermore, when applied to the test dataset of 6180 raw images and compared against the predictions of otologists, the overall performance of U-Net was excellent (accuracy = 0.911, precision = 0.9823, sensitivity = 0.8777, specificity = 0.9714). Conclusions: Our findings demonstrate that DL can be successfully employed for the automatic segmentation of the chorda tympani in endoscopic middle ear surgery, yielding high-performance results. This study supports the feasibility of future intelligent navigation technologies to assist in endoscopic middle ear surgery.
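The evaluation metrics quoted above (accuracy, precision, sensitivity, specificity, and per-class IoU) all derive from the four confusion-matrix counts of a binary mask. A minimal sketch, with toy masks rather than study data:

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Per-pixel classification metrics for a binary segmentation mask,
    computed from the four confusion-matrix counts."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = int(np.sum(pred & truth))    # predicted foreground, truly foreground
    tn = int(np.sum(~pred & ~truth))  # predicted background, truly background
    fp = int(np.sum(pred & ~truth))
    fn = int(np.sum(~pred & truth))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true-positive rate
        "specificity": tn / (tn + fp),
        "iou": tp / (tp + fp + fn),     # per-class IoU; mIoU averages classes
    }

# Toy 2x2 masks: exactly one pixel in each confusion-matrix cell
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [1, 0]])
print(pixel_metrics(pred, truth))  # every metric 0.5 except iou = 1/3
```

Note the asymmetry in the reported results (sensitivity 0.8777 vs. specificity 0.9714): the model misses some chorda pixels but rarely mislabels background, a common pattern when the structure of interest is small relative to the frame.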
Collapse
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| | - Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| | - Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| | - Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| | - Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| | - Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
| |
Collapse
|
15
|
Wang YZ, Juroch K, Birch DG. Deep Learning-Assisted Measurements of Photoreceptor Ellipsoid Zone Area and Outer Segment Volume as Biomarkers for Retinitis Pigmentosa. Bioengineering (Basel) 2023; 10:1394. [PMID: 38135984 PMCID: PMC10740805 DOI: 10.3390/bioengineering10121394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 11/13/2023] [Accepted: 11/29/2023] [Indexed: 12/24/2023] Open
Abstract
The manual segmentation of retinal layers from OCT scan images is time-consuming and costly. The deep learning approach has the potential to automate the delineation of retinal layers and significantly reduce the burden on human graders. In this study, we compared deep learning model (DLM) segmentation with manual correction (DLM-MC) to conventional manual grading (MG) for the measurement of the photoreceptor ellipsoid zone (EZ) area and outer segment (OS) volume in retinitis pigmentosa (RP), to assess whether DLM-MC can become a new gold standard for retinal layer segmentation and for the measurement of retinal layer metrics. Ninety-six high-speed 9 mm 31-line volume scans obtained from 48 patients with RPGR-associated XLRP were selected based on the following criteria: the presence of an EZ band within the scan limit and a detectable EZ in at least three B-scans in a volume scan. All the B-scan images in each volume scan were manually segmented for the EZ and proximal retinal pigment epithelium (pRPE) by two experienced human graders to serve as the ground truth for comparison. The test volume scans were also segmented by a DLM and then manually corrected for EZ and pRPE by the same two graders to obtain DLM-MC segmentation. The EZ area and OS volume were determined by interpolating the discrete two-dimensional B-scan EZ-pRPE layer over the scan area. Dice similarity, Bland-Altman analysis, correlation, and linear regression analyses were conducted to assess the agreement between DLM-MC and MG for the EZ area and OS volume measurements. For the EZ area, the overall mean Dice score (SD) between DLM-MC and MG was 0.8524 (0.0821), which was comparable to 0.8417 (0.1111) between the two MGs. For EZ areas > 1 mm2, the average Dice score increased to 0.8799 (0.0614). When comparing DLM-MC to MG, the Bland-Altman plots revealed a mean difference (SE) of 0.0132 (0.0953) mm2 and a coefficient of repeatability (CoR) of 1.8303 mm2 for the EZ area, and a mean difference (SE) of 0.0080 (0.0020) mm3 and a CoR of 0.0381 mm3 for the OS volume. The correlation coefficients (95% CI) were 0.9928 (0.9892-0.9952) and 0.9938 (0.9906-0.9958) for the EZ area and OS volume, respectively. The linear regression slopes (95% CI) were 0.9598 (0.9399-0.9797) and 1.0104 (0.9909-1.0298), respectively. The results from this study suggest that manual correction of deep learning model segmentation can generate EZ area and OS volume measurements in excellent agreement with those of conventional manual grading in RP. Because DLM-MC is more efficient for retinal layer segmentation from OCT scan images, it has the potential to reduce the burden on human graders in obtaining quantitative measurements of biomarkers for assessing disease progression and treatment outcomes in RP.
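The Bland-Altman statistics quoted above (mean difference, i.e. bias, and coefficient of repeatability) come from the paired differences between the two grading methods. A minimal sketch; the paired measurements below are made-up illustrative values, not data from the study, and CoR is taken as 1.96 times the sample SD of the differences:

```python
import numpy as np

def bland_altman(method_a: np.ndarray, method_b: np.ndarray):
    """Bland-Altman agreement statistics for paired measurements:
    bias (mean difference) and coefficient of repeatability
    (CoR = 1.96 * sample SD of the paired differences)."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    cor = 1.96 * diff.std(ddof=1)  # ddof=1 -> sample standard deviation
    return bias, cor

# Hypothetical paired EZ-area measurements (mm^2) from two grading methods
dlm_mc = np.array([1.2, 3.5, 5.1, 7.9])
mg = np.array([1.3, 3.4, 5.2, 7.8])
bias, cor = bland_altman(dlm_mc, mg)
```

A bias near zero with a small CoR, as the study reports for both EZ area and OS volume, indicates that the two methods can be used interchangeably within those limits.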
Collapse
Affiliation(s)
- Yi-Zhong Wang
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA; (K.J.); (D.G.B.)
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
| | - Katherine Juroch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA; (K.J.); (D.G.B.)
| | - David Geoffrey Birch
- Retina Foundation of the Southwest, 9600 North Central Expressway, Suite 200, Dallas, TX 75231, USA; (K.J.); (D.G.B.)
- Department of Ophthalmology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
| |
Collapse
|