1
Chen JS, Reddy AJ, Al-Sharif E, Shoji MK, Kalaw FGP, Eslani M, Lang PZ, Arya M, Koretz ZA, Bolo KA, Arnett JJ, Roginiel AC, Do JL, Robbins SL, Camp AS, Scott NL, Rudell JC, Weinreb RN, Baxter SL, Granet DB. Analysis of ChatGPT Responses to Ophthalmic Cases: Can ChatGPT Think like an Ophthalmologist? Ophthalmology Science 2025;5:100600. [PMID: 39346575] [PMCID: PMC11437840] [DOI: 10.1016/j.xops.2024.100600] [Received: 01/20/2024; Revised: 08/09/2024; Accepted: 08/13/2024]
Abstract
Objective Large language models such as ChatGPT have demonstrated significant potential in question-answering within ophthalmology, but there is a paucity of literature evaluating their ability to generate clinical assessments and discussions. The objectives of this study were to (1) assess the accuracy of assessments and plans generated by ChatGPT and (2) evaluate ophthalmologists' ability to distinguish between responses generated by clinicians versus ChatGPT. Design Cross-sectional mixed-methods study. Subjects Sixteen ophthalmologists from a single academic center, of whom 10 were board-eligible and 6 were board-certified, were recruited to participate in this study. Methods Prompt engineering was used to ensure that ChatGPT generated discussions in the style of the ophthalmologist author of the Medical College of Wisconsin Ophthalmic Case Studies. Cases in which ChatGPT accurately identified the primary diagnosis were included and then paired. Masked human-generated and ChatGPT-generated discussions were sent to participating ophthalmologists, who were asked to identify the author of each discussion. Response confidence was assessed using a 5-point Likert scale, and subjective feedback was manually reviewed. Main Outcome Measures Accuracy of ophthalmologist identification of the discussion author, as well as subjective perceptions of human-generated versus ChatGPT-generated discussions. Results Overall, ChatGPT correctly identified the primary diagnosis in 15 of 17 (88.2%) cases. Two cases were excluded from the paired comparison due to hallucinations or fabrications of non-user-provided data. Ophthalmologists correctly identified the author in 77.9% ± 26.6% of the 13 included cases, with a mean Likert scale confidence rating of 3.6 ± 1.0. No significant differences in performance or confidence were found between board-certified and board-eligible ophthalmologists.
Subjectively, ophthalmologists found that discussions written by ChatGPT tended to contain more generic responses and irrelevant information, hallucinate more frequently, and exhibit distinct syntactic patterns (all P < 0.01). Conclusions Large language models have the potential to synthesize clinical data and generate ophthalmic discussions. While these findings have exciting implications for artificial intelligence-assisted health care delivery, more rigorous real-world evaluation of these models is necessary before clinical deployment. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
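The headline statistics in this abstract (per-rater identification accuracy summarized as mean ± SD, plus mean Likert confidence) reduce to a few lines of arithmetic. The sketch below uses invented ratings for illustration only; they are not the study's data.

```python
from statistics import mean, stdev

# One inner list per ophthalmologist: True = correctly identified the
# discussion author for that paired case (illustrative values only).
verdicts = [
    [True, True, False, True],
    [True, False, True, True],
    [False, True, True, True],
]
# 5-point Likert confidence score reported with each verdict (invented).
likert = [4, 3, 5, 3, 4, 2, 3, 4, 5, 3, 4, 3]

per_rater_acc = [sum(v) / len(v) for v in verdicts]  # each rater: 3/4 = 0.75
acc_mean, acc_sd = mean(per_rater_acc), stdev(per_rater_acc)
conf_mean, conf_sd = mean(likert), stdev(likert)
```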
Affiliation(s)
- Jimmy S Chen: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California; UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Akshay J Reddy: School of Medicine, California University of Science and Medicine, Colton, California
- Eman Al-Sharif: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California; Surgery Department, College of Medicine, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Marissa K Shoji: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Fritz Gerald P Kalaw: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California; UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Medi Eslani: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Paul Z Lang: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Malvika Arya: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Zachary A Koretz: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Kyle A Bolo: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Justin J Arnett: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Aliya C Roginiel: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Jiun L Do: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Shira L Robbins: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Andrew S Camp: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Nathan L Scott: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Jolene C Rudell: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Robert N Weinreb: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California; UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Sally L Baxter: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California; UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- David B Granet: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
2
Xiaojian Y, Zhanbo Q, Jian C, Zefeng W, Jian L, Jin L, Yuefen P, Shuwen H. Deep learning application in prediction of cancer molecular alterations based on pathological images: a bibliographic analysis via CiteSpace. J Cancer Res Clin Oncol 2024;150:467. [PMID: 39422817] [DOI: 10.1007/s00432-024-05992-z] [Received: 04/06/2024; Accepted: 10/09/2024]
Abstract
BACKGROUND Advances in artificial intelligence (AI) technology for image recognition are propelling molecular pathology research into a new era. OBJECTIVE To summarize the hot spots and research trends in the field of molecular pathology image recognition. METHODS Relevant articles published from January 1st, 2010, to August 25th, 2023, were retrieved from the Web of Science Core Collection. Subsequently, CiteSpace was employed for bibliometric and visual analysis, generating diverse network diagrams illustrating keywords, highly cited references, hot topics, and research trends. RESULTS A total of 110 relevant articles were extracted from a pool of 10,205 articles. The overall publication count exhibited a rising trend each year. The leading contributors in terms of institutions, countries, and authors were Maastricht University (11 articles), the United States (38 articles), and Kather Jacob Nicholas (9 articles), respectively. Half of the top ten research institutions, based on publication volume, were affiliated with Germany. The most frequently cited article, authored by Nicolas Coudray et al., accumulated 703 citations. The keyword "deep learning" had the highest frequency in 2019. Notably, the highlighted keywords from 2022 to 2023 included "microsatellite instability", and 21 articles focused on utilizing algorithms to recognize microsatellite instability (MSI) in colorectal cancer (CRC) pathological images. CONCLUSION The use of deep learning (DL) is expected to provide a new strategy to effectively solve the current problem of time-consuming and expensive molecular pathology detection. Further research is needed to address issues such as data quality and standardization, model interpretability, and resource and infrastructure requirements.
Affiliation(s)
- Yu Xiaojian: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Qu Zhanbo: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Chu Jian: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Wang Zefeng: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Liu Jian: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Liu Jin: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Pan Yuefen: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
- Han Shuwen: Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China; Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China; Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China; ASIR (Institute - Association of Intelligent Systems and Robotics), Rueil-Malmaison, France
3
Balas M, Micieli JA, Wong JCY. Integrating AI with tele-ophthalmology in Canada: a review. Canadian Journal of Ophthalmology 2024:S0008-4182(24)00259-X. [PMID: 39255951] [DOI: 10.1016/j.jcjo.2024.08.013] [Received: 07/29/2023; Revised: 05/21/2024; Accepted: 08/18/2024]
Abstract
The field of ophthalmology is rapidly advancing, with technological innovations enhancing the diagnosis and management of eye diseases. Tele-ophthalmology, or the use of telemedicine for ophthalmology, has emerged as a promising solution to improve access to eye care services, particularly for patients in remote or underserved areas. Despite its potential benefits, tele-ophthalmology faces significant challenges, including the need for high volumes of medical images to be analyzed and interpreted by trained clinicians. Artificial intelligence (AI) has emerged as a powerful tool in ophthalmology, capable of assisting clinicians in diagnosing and treating a variety of conditions. Integrating AI models into existing tele-ophthalmology infrastructure has the potential to revolutionize eye care services by reducing costs, improving efficiency, and increasing access to specialized care. By automating the analysis and interpretation of clinical data and medical images, AI models can reduce the burden on human clinicians, allowing them to focus on patient care and disease management. Available literature on the current status of tele-ophthalmology in Canada and successful AI models in ophthalmology was acquired and examined using the Arksey and O'Malley framework. This review covers literature up to 2022 and is split into 3 sections: 1) existing Canadian tele-ophthalmology infrastructure, with its benefits and drawbacks; 2) preeminent AI models in ophthalmology, across a variety of ocular conditions; and 3) bridging the gap between Canadian tele-ophthalmology and AI in a safe and effective manner.
Affiliation(s)
- Michael Balas: Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Jonathan A Micieli: Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada; Division of Neurology, Department of Medicine, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada; Department of Ophthalmology, St. Michael's Hospital, Toronto, ON, Canada
- Jovi C Y Wong: Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada
4
Kıran Yenice E, Kara C, Erdaş ÇB. Automated detection of type 1 ROP, type 2 ROP and A-ROP based on deep learning. Eye (Lond) 2024;38:2644-2648. [PMID: 38918566] [PMCID: PMC11385231] [DOI: 10.1038/s41433-024-03184-0] [Received: 12/04/2023; Revised: 06/10/2024; Accepted: 06/11/2024]
Abstract
PURPOSE To provide automatic detection of Type 1 retinopathy of prematurity (ROP), Type 2 ROP, and A-ROP by deep learning-based analysis of fundus images obtained by clinical examination using convolutional neural networks. MATERIAL AND METHODS A total of 634 fundus images of 317 premature infants born at 23-34 weeks of gestation were evaluated. After image pre-processing, we obtained a rectangular region of interest (ROI). RegNetY002 was used for algorithm training, and stratified 10-fold cross-validation was applied during training to evaluate and standardize our model. The model's performance was reported as accuracy and specificity and described by the receiver operating characteristic (ROC) curve and area under the curve (AUC). RESULTS The model achieved 0.98 accuracy and 0.98 specificity in detecting Type 2 ROP versus Type 1 ROP and A-ROP. On the other hand, as a result of the analysis of ROI regions, the model achieved 0.90 accuracy and 0.95 specificity in detecting Stage 2 ROP versus Stage 3 ROP and 0.91 accuracy and 0.92 specificity in detecting A-ROP versus Type 1 ROP. The AUC scores were 0.98 for Type 2 ROP versus Type 1 ROP and A-ROP, 0.85 for Stage 2 ROP versus Stage 3 ROP, and 0.91 for A-ROP versus Type 1 ROP. CONCLUSION Our study demonstrated that ROP classification by DL-based analysis of fundus images can be distinguished with high accuracy and specificity. Integrating DL-based artificial intelligence algorithms into clinical practice may reduce the workload of ophthalmologists in the future and provide support in decision-making in the management of ROP.
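The two metrics reported throughout this abstract, accuracy and specificity, reduce to simple counts over a binary decision such as "Type 2 ROP" versus "Type 1 ROP / A-ROP". A minimal sketch with made-up labels (the class names below are ours, not the paper's):

```python
def accuracy_specificity(y_true, y_pred, negative_label):
    """Accuracy over all cases; specificity = fraction of true-negative
    cases correctly predicted as the negative class."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    neg_preds = [p for t, p in zip(y_true, y_pred) if t == negative_label]
    return correct / len(y_true), neg_preds.count(negative_label) / len(neg_preds)

# Illustrative labels: "type2" as the positive class, "other" = Type 1 / A-ROP.
y_true = ["type2", "type2", "other", "other", "type2"]
y_pred = ["type2", "type2", "other", "type2", "type2"]
acc, spec = accuracy_specificity(y_true, y_pred, negative_label="other")
# acc = 4/5 = 0.8, spec = 1/2 = 0.5 on these toy labels
```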
Affiliation(s)
- Eşay Kıran Yenice: Department of Ophthalmology, University of Health Sciences, Etlik Zübeyde Hanım Maternity and Women's Health Teaching and Research Hospital, Ankara, Turkey
- Caner Kara: Department of Ophthalmology, Etlik City Hospital, Ankara, Turkey
5
Huang YP, Vadloori S, Kang EYC, Fukushima Y, Takahashi R, Wu WC. Computer-aided detection of retinopathy of prematurity severity assessment via vessel tortuosity measurement in preterm infants' fundus images. Eye (Lond) 2024:10.1038/s41433-024-03285-w. [PMID: 39097674] [DOI: 10.1038/s41433-024-03285-w] [Received: 02/20/2024; Revised: 07/12/2024; Accepted: 07/23/2024]
Abstract
OBJECTIVE To develop a computer-aided diagnostic system for retinopathy of prematurity (ROP) using retinal vessel morphological features. METHODS A total of 200 fundus images from 136 preterm infants with stage 1 to 3 ROP were analysed. Two methods were developed to measure vessel tortuosity: the peak-and-valley method and the polynomial curve fitting method. Correlations of temporal artery tortuosity (TAT) and temporal vein tortuosity (TVT) with ROP severity were investigated, as were the relationships of vessel tortuosity with vessel angles (TAA and TVA) and vessel widths (TAW and TVW). A separate dataset from Japan containing 126 images from 97 preterm patients was used for verification. RESULTS Both methods identified similar tortuosity in images without ROP and in mild ROP cases. However, the polynomial curve fitting method demonstrated enhanced tortuosity detection in stages 2 and 3 ROP compared to the peak-and-valley method. A strong positive correlation was revealed between ROP severity and increased arterial and venous tortuosity (P < 0.0001). A significant negative correlation between TAA and TAT (r = -0.485, P < 0.0001) and between TVA and TVT (r = -0.281, P < 0.0001), and a significant positive correlation between TAW and TAT (r = 0.204, P = 0.0040), were identified. Similar results were found in the test dataset from Japan. CONCLUSIONS ROP severity was associated with increased retinal tortuosity and retinal vessel width, with a decrease in retinal vascular angle. This quantitative analysis of retinal vessels provides crucial insights for advancing ROP diagnosis and understanding its progression.
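One common way to realize the polynomial-curve-fit tortuosity the authors describe is to fit the vessel centerline with a polynomial and take arc length divided by chord length (1.0 for a perfectly straight vessel). The sketch below assumes that definition; the paper's exact implementation may differ in detail.

```python
import numpy as np

def poly_tortuosity(xs, ys, degree=5, samples=500):
    """Fit y(x) with a polynomial, then return arc length / chord length."""
    coeffs = np.polyfit(xs, ys, degree)           # least-squares polynomial fit
    x = np.linspace(xs.min(), xs.max(), samples)  # densely resample the fit
    y = np.polyval(coeffs, x)
    arc = np.hypot(np.diff(x), np.diff(y)).sum()  # summed segment lengths
    chord = np.hypot(x[-1] - x[0], y[-1] - y[0])  # endpoint-to-endpoint distance
    return arc / chord

xs = np.linspace(0.0, 1.0, 60)
straight = poly_tortuosity(xs, 0.5 * xs)                  # ~1.0 for a straight vessel
wavy = poly_tortuosity(xs, 0.2 * np.sin(2 * np.pi * xs))  # clearly greater than 1
```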
Grants
- CORPG3L0131, CMRPG3M0131~2, and CMRPG3L0151~3 Chang Gung Memorial Hospital (CGMH)
- NTUT-CGMH-110-01 and NTUT-CGMH-109-01 National Taipei University of Technology (NTUT)
- MOST 109-2314-B-182A-019-MY3 Ministry of Science and Technology, Taiwan (Ministry of Science and Technology of Taiwan)
- CORPG3L0131, CMRPG3M0131~2, and CMRPG3L0151~3 Chang Gung Memorial Hospital, Linkou (Linkou Chang Gung Memorial Hospital)
Affiliation(s)
- Yo-Ping Huang: Department of Electrical Engineering, National Penghu University of Science and Technology, Penghu, 88046, Taiwan; Department of Electrical Engineering, National Taipei University of Technology, Taipei, 10608, Taiwan; Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung, 41349, Taiwan
- Spandana Vadloori: Department of Electrical Engineering, National Penghu University of Science and Technology, Penghu, 88046, Taiwan
- Eugene Yu-Chuan Kang: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou, 33305, Taiwan; College of Medicine, Chang Gung University, Taoyuan, 33305, Taiwan
- Yoko Fukushima: Department of Ophthalmology, Osaka University, Osaka, 565-0871, Japan
- Rie Takahashi: Department of Ophthalmology, Fukuoka University, Fukuoka, 814-0180, Japan
- Wei-Chi Wu: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou, 33305, Taiwan; College of Medicine, Chang Gung University, Taoyuan, 33305, Taiwan
6
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024;13:2125-2149. [PMID: 38913289] [PMCID: PMC11246322] [DOI: 10.1007/s40123-024-00981-4] [Received: 02/19/2024; Accepted: 04/15/2024]
Abstract
We conducted a systematic review of research in artificial intelligence (AI) for retinal fundus photographic images. We highlighted the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI-based interpretation of retinal images, benchmarked against clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). A significant amount of clinical and imaging data is available for this research, enabling the potential incorporation of AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong: School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
7
Zhu J, Yan Y, Jiang W, Zhang S, Niu X, Wan S, Cong Y, Hu X, Zheng B, Yang Y. A Deep Learning Model for Automatically Quantifying the Anterior Segment in Ultrasound Biomicroscopy Images of Implantable Collamer Lens Candidates. Ultrasound in Medicine & Biology 2024;50:1262-1272. [PMID: 38777640] [DOI: 10.1016/j.ultrasmedbio.2024.05.004] [Received: 03/23/2024; Revised: 04/24/2024; Accepted: 05/03/2024]
Abstract
OBJECTIVE This study aimed to develop and evaluate a deep learning-based model that could automatically measure anterior segment (AS) parameters on preoperative ultrasound biomicroscopy (UBM) images of implantable Collamer lens (ICL) surgery candidates. METHODS A total of 1164 panoramic UBM images were preoperatively obtained from 321 patients who received ICL surgery in the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to develop an imaging database. First, the UNet++ network was utilized to automatically segment AS tissues, such as the cornea, lens, and iris. In addition, image processing techniques and geometric localization algorithms were developed to automatically identify the anatomical landmarks (ALs) of pupil diameter (PD), anterior chamber depth (ACD), angle-to-angle distance (ATA), and sulcus-to-sulcus distance (STS). Based on the results of these two processes, PD, ACD, ATA, and STS could be measured. Meanwhile, an external dataset of 294 images from Huangshi Aier Eye Hospital was employed to further assess the model's performance in another center. Lastly, a subset of 100 random images from the external test set was chosen to compare the performance of the model with that of senior experts. RESULTS In both the internal and external test datasets, using manual labeling as the reference standard, the models achieved a mean Dice coefficient exceeding 0.880. Additionally, the intra-class correlation coefficients (ICCs) of the ALs' coordinates were all greater than 0.947, and the percentage of ALs within 250 μm Euclidean distance of the manual landmarks was over 95.24%. The ICCs for PD, ACD, ATA, and STS were all greater than 0.957, and the average relative errors (AREs) of PD, ACD, ATA, and STS were below 2.41%. In terms of human-versus-machine performance, the ICCs between the measurements performed by the model and those by senior experts were all greater than 0.931.
CONCLUSION A deep learning-based model could measure AS parameters using UBM images of ICL candidates and exhibited a performance similar to that of a senior ophthalmologist.
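The Dice coefficient used here to compare automatic and manual segmentations has a compact definition, 2|A∩B| / (|A| + |B|). A toy sketch on flattened binary masks (all values invented for illustration):

```python
def dice(mask_a, mask_b):
    """Dice similarity of two same-length binary masks (1 = tissue pixel)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total

pred   = [0, 1, 1, 1, 0, 0]  # model's pixels labeled as, e.g., "iris"
manual = [0, 1, 1, 0, 0, 0]  # expert's labels for the same pixels
score = dice(pred, manual)   # 2*2 / (3 + 2) = 0.8
```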
Affiliation(s)
- Jian Zhu: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yulin Yan: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Weiyan Jiang: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Shaowei Zhang: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Xiaoguang Niu: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Shanshan Wan: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yuyu Cong: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Xiao Hu: Wuhan EndoAngel Medical Technology Company, Wuhan, China
- Biqin Zheng: Wuhan EndoAngel Medical Technology Company, Wuhan, China
- Yanning Yang: Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
8
Benetz BAM, Shivade VS, Joseph NM, Romig NJ, McCormick JC, Chen J, Titus MS, Sawant OB, Clover JM, Yoganathan N, Menegay HJ, O'Brien RC, Wilson DL, Lass JH. Automatic Determination of Endothelial Cell Density From Donor Cornea Endothelial Cell Images. Transl Vis Sci Technol 2024;13:40. [PMID: 39177992] [PMCID: PMC11346145] [DOI: 10.1167/tvst.13.8.40] [Received: 02/04/2024; Accepted: 06/21/2024]
Abstract
Purpose To determine endothelial cell density (ECD) from real-world donor cornea endothelial cell (EC) images using a self-supervised deep learning segmentation model. Methods Two eye banks (Eversight, VisionGift) provided 15,138 single, unique EC images from 8169 donors along with their demographics, tissue characteristics, and ECD. This dataset was utilized for self-supervised training and deep learning inference. The Cornea Image Analysis Reading Center (CIARC) provided a second dataset of 174 donor EC images based on image and tissue quality. These images were used to train a supervised deep learning cell border segmentation model. Evaluation between manual and automated determination of ECD was restricted to the 1939 test EC images with at least 100 cells counted by both methods. Results The ECD measurements from both methods were in excellent agreement with rc of 0.77 (95% confidence interval [CI], 0.75-0.79; P < 0.001) and bias of 123 cells/mm2 (95% CI, 114-131; P < 0.001); 81% of the automated ECD values were within 10% of the manual ECD values. When the analysis was further restricted to the cropped image, the rc was 0.88 (95% CI, 0.87-0.89; P < 0.001), bias was 46 cells/mm2 (95% CI, 39-53; P < 0.001), and 93% of the automated ECD values were within 10% of the manual ECD values. Conclusions Deep learning analysis provides accurate ECDs of donor images, potentially reducing analysis time and training requirements. Translational Relevance The approach of this study, a robust methodology for automatically evaluating donor cornea EC images, could expand the quantitative determination of endothelial health beyond ECD.
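Endothelial cell density is cells counted per analyzed area, and the paper's agreement statistic is the share of automated readings falling within 10% of the manual ones. A back-of-envelope sketch with invented numbers:

```python
def ecd(cells_counted, area_mm2):
    """Endothelial cell density in cells/mm^2."""
    return cells_counted / area_mm2

def fraction_within(auto, manual, tol=0.10):
    """Share of automated ECDs within tol (relative) of the manual ECD."""
    hits = sum(abs(a - m) / m <= tol for a, m in zip(auto, manual))
    return hits / len(manual)

density = ecd(250, 0.1)  # 250 cells over 0.1 mm^2 -> 2500 cells/mm^2
manual_ecd = [2500.0, 2700.0, 3000.0]
auto_ecd   = [2600.0, 2650.0, 2400.0]  # last reading differs by 20%
agreement = fraction_within(auto_ecd, manual_ecd)  # 2/3 within 10%
```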
Affiliation(s)
- Beth Ann M. Benetz: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
- Ved S. Shivade: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Naomi M. Joseph: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Nathan J. Romig: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- John C. McCormick: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Jiawei Chen: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Onkar B. Sawant: Eversight, Ann Arbor, MI, USA; Center for Vision and Eye Banking Research, Eversight, Cleveland, OH, USA
- Harry J. Menegay: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
- David L. Wilson: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Jonathan H. Lass: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
Collapse
|
9
|
Mathieu A, Ajana S, Korobelnik JF, Le Goff M, Gontier B, Rougier MB, Delcourt C, Delyfer MN. DeepAlienorNet: A deep learning model to extract clinical features from colour fundus photography in age-related macular degeneration. Acta Ophthalmol 2024; 102:e823-e830. [PMID: 38345159 DOI: 10.1111/aos.16660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Revised: 01/11/2024] [Accepted: 01/25/2024] [Indexed: 07/09/2024]
Abstract
OBJECTIVE This study aimed to develop a deep learning (DL) model, named 'DeepAlienorNet', to automatically extract clinical signs of age-related macular degeneration (AMD) from colour fundus photography (CFP). METHODS AND ANALYSIS The ALIENOR Study is a cohort of French individuals 77 years of age or older. A multi-label DL model was developed to grade the presence of 7 clinical signs: large soft drusen (>125 μm), intermediate soft drusen (63-125 μm), large area of soft drusen (total area >500 μm), presence of central soft drusen (large or intermediate), hyperpigmentation, hypopigmentation, and advanced AMD (defined as neovascular or atrophic AMD). Prediction performances were evaluated using cross-validation and the expert human interpretation of the clinical signs as the ground truth. RESULTS A total of 1178 images were included in the study. Averaging the 7 clinical signs' detection performances, DeepAlienorNet achieved an overall sensitivity, specificity, and AUROC of 0.77, 0.83, and 0.87, respectively. The model demonstrated particularly strong performance in predicting advanced AMD and large areas of soft drusen. It can also generate heatmaps, highlighting the relevant image areas for interpretation. CONCLUSION DeepAlienorNet demonstrates promising performance in automatically identifying clinical signs of AMD from CFP, offering several notable advantages. Its high interpretability reduces the black-box effect, addressing ethical concerns. Additionally, the model can be easily integrated to automate well-established and validated AMD progression scores, and the user-friendly interface further enhances its usability. The main value of DeepAlienorNet lies in its ability to assist in precise severity scoring for further adapted AMD management, all while preserving interpretability.
Affiliation(s)
- Alexis Mathieu
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Soufiane Ajana
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Jean-François Korobelnik
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Mélanie Le Goff
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Brigitte Gontier
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Cécile Delcourt
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Marie-Noëlle Delyfer
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- FRCRnet/FCRIN Network, Bordeaux, France

10
Chatzimichail E, Feltgen N, Motta L, Empeslidis T, Konstas AG, Gatzioufas Z, Panos GD. Transforming the future of ophthalmology: artificial intelligence and robotics' breakthrough role in surgical and medical retina advances: a mini review. Front Med (Lausanne) 2024; 11:1434241. [PMID: 39076760 PMCID: PMC11284058 DOI: 10.3389/fmed.2024.1434241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2024] [Accepted: 06/26/2024] [Indexed: 07/31/2024] Open
Abstract
Over the past decade, artificial intelligence (AI) and its subfields, deep learning and machine learning, have become integral parts of ophthalmology, particularly in the field of ophthalmic imaging. A diverse array of algorithms has emerged to facilitate the automated diagnosis of numerous medical and surgical retinal conditions. The development of these algorithms necessitates extensive training using large datasets of retinal images. This approach has demonstrated a promising impact, especially in improving diagnostic accuracy for non-specialist clinicians across various diseases and in the area of telemedicine, where access to ophthalmological care is restricted. In parallel, robotic technology has made significant inroads into the medical field, including ophthalmology. The vast majority of research in the field of robotic surgery has focused on anterior segment and vitreoretinal surgery. These systems offer potential improvements in accuracy and address issues such as hand tremors. However, widespread adoption faces hurdles, including the substantial costs associated with these systems and the steep learning curve for surgeons. These challenges currently constrain the broader implementation of robotic surgical systems in ophthalmology. This mini review discusses the current research and challenges, underscoring the limited yet growing implementation of AI and robotic systems in the field of retinal conditions.
Affiliation(s)
- Nicolas Feltgen
- Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
- Lorenzo Motta
- Department of Ophthalmology, School of Medicine, University of Padova, Padua, Italy
- Anastasios G. Konstas
- Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Zisis Gatzioufas
- Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
- Georgios D. Panos
- Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham University Hospitals, Nottingham, United Kingdom
- Division of Ophthalmology and Visual Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom

11
Peng J, Xie X, Lu Z, Xu Y, Xie M, Luo L, Xiao H, Ye H, Chen L, Yang J, Zhang M, Zhao P, Zheng C. Generative adversarial networks synthetic optical coherence tomography images as an education tool for image diagnosis of macular diseases: a randomized trial. Front Med (Lausanne) 2024; 11:1424749. [PMID: 39050535 PMCID: PMC11266019 DOI: 10.3389/fmed.2024.1424749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2024] [Accepted: 06/19/2024] [Indexed: 07/27/2024] Open
Abstract
Purpose This study aimed to evaluate the effectiveness of generative adversarial networks (GANs) in creating synthetic OCT images as an educational tool for teaching image diagnosis of macular diseases to medical students and ophthalmic residents. Methods In this randomized trial, 20 fifth-year medical students and 20 ophthalmic residents were enrolled and randomly assigned (1:1 allocation) into Group real OCT and Group GANs OCT. All participants took a pretest to assess their educational background, followed by a 30-min smartphone-based education program using GANs or real OCT images for macular disease recognition training. Two additional tests were scheduled: one 5 min after the training to assess short-term performance, and another 1 week later to assess long-term performance. Scores and time consumption were recorded and compared. After all the tests, participants completed an anonymous subjective questionnaire. Results Group GANs OCT scores increased from 80.0 (46.0 to 85.5) to 92.0 (81.0 to 95.5) 5 min after training (p < 0.001) and to 92.30 ± 5.36 1 week after training (p < 0.001). Similarly, Group real OCT scores increased from 66.00 ± 19.52 to 92.90 ± 5.71 (p < 0.001). When compared between the two groups, no statistically significant difference was found in test scores, score improvements, or time consumption. After training, medical students had a significantly higher score improvement than residents (p < 0.001). Conclusion The education tool using synthetic OCT images had similar educational ability to that using real OCT images, improving the interpretation ability of ophthalmic residents and medical students in both short-term and long-term performance. The smartphone-based educational tool could be widely promoted for educational applications. Clinical trial registration: https://www.chictr.org.cn, Chinese Clinical Trial Registry [No. ChiCTR 2100053195].
Affiliation(s)
- Jie Peng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaoling Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
- Zupeng Lu
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Ophthalmology, Shanghai Children’s Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yu Xu
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Meng Xie
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Li Luo
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
- Haodong Xiao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hongfei Ye
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Li Chen
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
- Peiquan Zhao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Hospital Development Strategy, China Hospital Development Institute Shanghai Jiao Tong University, Shanghai, China

12
Chu Y, Hu S, Li Z, Yang X, Liu H, Yi X, Qi X. Image Analysis-Based Machine Learning for the Diagnosis of Retinopathy of Prematurity: A Meta-analysis and Systematic Review. Ophthalmol Retina 2024; 8:678-687. [PMID: 38237772 DOI: 10.1016/j.oret.2024.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 01/02/2024] [Accepted: 01/09/2024] [Indexed: 02/17/2024]
Abstract
TOPIC To evaluate the performance of machine learning (ML) in the diagnosis of retinopathy of prematurity (ROP) and to assess whether it can be an effective automated diagnostic tool for clinical applications. CLINICAL RELEVANCE Early detection of ROP is crucial for preventing tractional retinal detachment and blindness in preterm infants. METHODS Web of Science, PubMed, Embase, IEEE Xplore, and Cochrane Library were searched for published studies on image-based ML for diagnosis of ROP or classification of clinical subtypes from inception to October 1, 2022. The quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies was used to determine the risk of bias (RoB) of the included original studies. A bivariate mixed effects model was used for quantitative analysis of the data, and Deeks' test was used to assess publication bias. Quality of evidence was assessed using Grading of Recommendations Assessment, Development and Evaluation. RESULTS Twenty-two studies were included in the systematic review; 4 studies had high or unclear RoB. In the area of the index test, only 2 studies had high or unclear RoB because they did not establish predefined thresholds. In the area of reference standards, 3 studies had high or unclear RoB. Regarding applicability, only 1 study was considered to have high or unclear concern in terms of patient selection. The sensitivity and specificity of image-based ML for the diagnosis of ROP were 93% (95% confidence interval [CI]: 0.90-0.94) and 95% (95% CI: 0.94-0.97), respectively. The area under the receiver operating characteristic curve (AUC) was 0.98 (95% CI: 0.97-0.99). For the classification of clinical subtypes of ROP, the sensitivity and specificity were 93% (95% CI: 0.89-0.96) and 93% (95% CI: 0.89-0.95), respectively, and the AUC was 0.97 (95% CI: 0.96-0.98). The classification results were highly similar to those of clinical experts (Spearman's R = 0.879). CONCLUSIONS Machine learning algorithms are no less accurate than human experts and hold considerable potential as automated diagnostic tools for ROP. However, given the quality and high heterogeneity of the available evidence, these algorithms should be considered supplementary tools to assist clinicians in diagnosing ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
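The pooled sensitivity and specificity figures above come from 2x2 diagnostic tables. As a reminder of what those two rates measure, a minimal sketch (the counts below are invented for illustration, chosen to reproduce the pooled 93%/95% values):

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 diagnostic test table."""
    sensitivity = tp / (tp + fn)  # true positives among all diseased
    specificity = tn / (tn + fp)  # true negatives among all healthy
    return sensitivity, specificity

# Hypothetical screening run: 93 of 100 ROP cases flagged,
# 950 of 1000 unaffected infants correctly passed
sensitivity, specificity = sens_spec(tp=93, fp=50, fn=7, tn=950)
print(sensitivity, specificity)
```

In a bivariate meta-analysis such as this one, these per-study pairs are pooled jointly, since a threshold that raises sensitivity typically lowers specificity.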
Affiliation(s)
- Yihang Chu
- Central South University of Forestry and Technology, Changsha, Hunan, China
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Shipeng Hu
- Central South University of Forestry and Technology, Changsha, Hunan, China
- Zilan Li
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- Xiao Yang
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Hui Liu
- Central South University of Forestry and Technology, Changsha, Hunan, China
- Xianglong Yi
- Department of Ophthalmology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xinwei Qi
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China

13
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. [PMID: 38734807 PMCID: PMC11178755 DOI: 10.1007/s40123-024-00958-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2024] [Accepted: 04/22/2024] [Indexed: 05/13/2024] Open
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland

14
Maitra P, Shah PK, Campbell PJ, Rishi P. The scope of artificial intelligence in retinopathy of prematurity (ROP) management. Indian J Ophthalmol 2024; 72:931-934. [PMID: 38454859 PMCID: PMC11329810 DOI: 10.4103/ijo.ijo_2544_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 01/02/2024] [Accepted: 01/04/2024] [Indexed: 03/09/2024] Open
Abstract
Artificial intelligence (AI) is a revolutionary technology that has the potential to develop into a widely implemented system that could reduce the dependence on qualified professionals/experts for screening the large at-risk population, especially in the Indian scenario. Deep learning involves learning without being explicitly told what to focus on and utilizes several layers of artificial neural networks (ANNs) to create a robust algorithm that is capable of high-complexity tasks. Convolutional neural networks (CNNs) are a subset of ANNs that are particularly useful for image processing as well as cognitive tasks. Training of these algorithms involves inputting raw human-labeled data, which are then processed through the algorithm's multiple layers, allowing the CNN to develop its own learning of image features. AI systems must be validated using different population datasets, since the performance of an AI system varies according to the population. Indian datasets have been used in an AI-based risk model that could predict whether an infant would develop treatment-requiring retinopathy of prematurity (ROP). AI has also served as an epidemiological tool, objectively showing that ROP severity was higher in neonatal intensive care units (NICUs) that lacked the resources to monitor and titrate oxygen. There are rising concerns about the medicolegal aspects of AI implementation, as well as discussion of the possibility that catastrophic, life-threatening diseases like retinoblastoma and lipemia retinalis could be missed by AI. Computer-based systems have the advantage over humans of not being susceptible to bias or fatigue. This is especially relevant in a country like India, with an increased rate of ROP and a preexisting strained doctor-to-preterm-child ratio. Many AI algorithms can perform in a way comparable to or exceeding human experts, and this opens possibilities for future large-scale prospective studies.
Affiliation(s)
- Puja Maitra
- Department of Vitreoretina Services, Aravind Eye Hospital, Chennai, Tamil Nadu, India
- Parag K Shah
- Department of Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, Tamil Nadu, India
- Peter J Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon, United States
- Pukhraj Rishi
- Ocular Oncology and Vitreoretinal Surgery, Truhlsen Eye Institute, University of Nebraska Medical Centre, Omaha, NE, USA

15
Hashemian H, Peto T, Ambrósio R, Lengyel I, Kafieh R, Muhammed Noori A, Khorrami-Nejad M. Application of Artificial Intelligence in Ophthalmology: An Updated Comprehensive Review. J Ophthalmic Vis Res 2024; 19:354-367. [PMID: 39359529 PMCID: PMC11444002 DOI: 10.18502/jovr.v19i3.15893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2024] [Accepted: 07/06/2024] [Indexed: 10/04/2024] Open
Abstract
Artificial intelligence (AI) holds immense promise for transforming ophthalmic care through automated screening, precision diagnostics, and optimized treatment planning. This paper reviews recent advances and challenges in applying AI techniques such as machine learning and deep learning to major eye diseases. In diabetic retinopathy, AI algorithms analyze retinal images to accurately identify lesions, which helps clinicians in ophthalmology practice. Systems like IDx-DR (IDx Technologies Inc, USA) are FDA-approved for autonomous detection of referable diabetic retinopathy. For glaucoma, deep learning models assess optic nerve head morphology in fundus photographs to detect damage. In age-related macular degeneration, AI can quantify drusen and diagnose disease severity from both color fundus and optical coherence tomography images. AI has also been used in screening for retinopathy of prematurity, keratoconus, and dry eye disease. Beyond screening, AI can aid treatment decisions by forecasting disease progression and anti-VEGF response. However, potential limitations such as the quality and diversity of training data, lack of rigorous clinical validation, and challenges in regulatory approval and clinician trust must be addressed for the widespread adoption of AI. Two other significant hurdles include the integration of AI into existing clinical workflows and ensuring transparency in AI decision-making processes. With continued research to address these limitations, AI promises to enable earlier diagnosis, optimized resource allocation, personalized treatment, and improved patient outcomes. Moreover, synergistic human-AI systems could set a new standard for evidence-based, precise ophthalmic care.
Affiliation(s)
- Hesam Hashemian
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Tunde Peto
- School of Medicine, Dentistry and Biomedical Sciences, Centre for Public Health, Queen's University Belfast, Northern Ireland, UK
- Renato Ambrósio
- Department of Ophthalmology, Federal University of the State of Rio de Janeiro (UNIRIO), Brazil
- Department of Ophthalmology, Federal University of São Paulo, São Paulo, Brazil
- Brazilian Study Group of Artificial Intelligence and Corneal Analysis - BrAIN, Rio de Janeiro & Maceió, Brazil
- Rio Vision Hospital, Rio de Janeiro, Brazil
- Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil
- Imre Lengyel
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Northern Ireland
- Rahele Kafieh
- Department of Engineering, Durham University, United Kingdom
- Masoud Khorrami-Nejad
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Department of Optical Techniques, Al-Mustaqbal University College, Hillah, Babylon 51001, Iraq

16
Lee SB. Development of a chest X-ray machine learning convolutional neural network model on a budget and using artificial intelligence explainability techniques to analyze patterns of machine learning inference. JAMIA Open 2024; 7:ooae035. [PMID: 38699648 PMCID: PMC11064095 DOI: 10.1093/jamiaopen/ooae035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Revised: 04/03/2024] [Accepted: 04/10/2024] [Indexed: 05/05/2024] Open
Abstract
Objective Machine learning (ML) will have a large impact on medicine, and accessibility is important. This study's model was used to explore various concepts, including how varying features of a model impacted behavior. Materials and Methods This study built an ML model that classified chest X-rays as normal or abnormal by using ResNet50 as a base with transfer learning. A contrast enhancement mechanism was implemented to improve performance. After training with a dataset of publicly available chest radiographs, performance metrics were determined with a test set. The ResNet50 base was substituted with deeper architectures (ResNet101/152), and visualization methods were used to help determine patterns of inference. Results Performance metrics were an accuracy of 79%, recall 69%, precision 96%, and area under the curve of 0.9023. Accuracy improved to 82% and recall to 74% with contrast enhancement. When visualization methods were applied and the ratio of pixels used for inference measured, deeper architectures resulted in the model using larger portions of the image for inference as compared to ResNet50. Discussion The model performed on par with many existing models despite consumer-grade hardware and smaller datasets. Individual models vary; thus, a single model's explainability may not be generalizable. Therefore, this study varied architecture and studied patterns of inference. With deeper ResNet architectures, the machine used larger portions of the image to make decisions. Conclusion An example using a custom model showed that AI (artificial intelligence) can be accessible on consumer-grade hardware, and it also demonstrated an example of studying themes of ML explainability by varying ResNet architectures.
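The transfer-learning recipe described here (a pretrained ResNet50 kept fixed, with only a new classification head trained) can be sketched in miniature without any deep learning framework. In the sketch below, a fixed random projection stands in for the frozen backbone, and a logistic-regression head is trained on top; all data, shapes, and the labeling rule are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (e.g., ResNet50 without its
# classification head): a fixed projection that is never updated.
W_frozen = rng.normal(size=(8, 16))

def backbone(x):
    """Map raw inputs to frozen ReLU features."""
    return np.maximum(x @ W_frozen, 0.0)

# Toy "normal vs. abnormal" labels: a hidden linear rule in feature space
X = rng.normal(size=(400, 8))
feats = backbone(X)
y = (feats @ rng.normal(size=16) > 0).astype(float)

# Transfer-learning step: train ONLY the new logistic-regression head
w, b = np.zeros(16), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    w -= 0.1 * feats.T @ (p - y) / len(y)       # gradient descent on the head
    b -= 0.1 * (p - y).mean()

accuracy = (((feats @ w + b) > 0).astype(float) == y).mean()
print(f"head-only training accuracy: {accuracy:.2f}")
```

The design point mirrors the study's setup: because the backbone weights stay fixed, only a small head must be fit, which is what makes training feasible on modest hardware and small datasets.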
Affiliation(s)
- Stephen B Lee
- Division of Infectious Diseases, Department of Medicine, College of Medicine, University of Saskatchewan, Regina, S4P 0W5, Canada

17
Ahn J, Choi M. Advancements and turning point of artificial intelligence in ophthalmology: A comprehensive analysis of research trends and collaborative networks. Ophthalmic Physiol Opt 2024; 44:1031-1040. [PMID: 38581209 DOI: 10.1111/opo.13315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 03/25/2024] [Accepted: 03/26/2024] [Indexed: 04/08/2024]
Abstract
Artificial intelligence (AI) has emerged as a transformative force with great potential in various fields, including healthcare. In recent years, AI has garnered significant attention due to its potential to revolutionise ophthalmology, leading to advancements in patient care such as disease detection, diagnosis, treatment and monitoring of disease progression. This study presents a comprehensive analysis of the research trends and collaborative networks at the intersection of AI and ophthalmology. In this study, we conducted an extensive search of the Web of Science Core Collection to identify articles related to 'artificial intelligence' in ophthalmology published from 1968 to 2023. We performed co-occurrence keywords and co-authorship network analyses using VOSviewer software to explore the relationships between keywords and country collaboration. We found a remarkable surge in articles applying AI in ophthalmology after 2017, marking a turning point in the integration of AI within the medical field. The primary application of AI shifted towards the diagnosis of ocular disease, which was particularly evident through keywords such as glaucoma, diabetic retinopathy and age-related macular degeneration. Analysis of the collaboration networks of countries revealed a global expansion of ophthalmology-related AI research. This study provides valuable insights into the evolving landscape of AI integration in ophthalmology, indicating its growing potential for enhancing disease detection, diagnosis, treatment planning and monitoring of disease progression. In order to translate AI technologies into clinical practice effectively, it is imperative to comprehend the evolving research trends and advancements at the intersection of AI and ophthalmology.
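The co-occurrence keyword analysis this study ran in VOSviewer boils down to counting how often keyword pairs appear together in the same article. A minimal sketch of that counting step (the keyword lists are hypothetical, not the study's data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists from three articles
records = [
    ["artificial intelligence", "deep learning", "diabetic retinopathy"],
    ["artificial intelligence", "glaucoma", "deep learning"],
    ["deep learning", "diabetic retinopathy", "macular degeneration"],
]

# Count each unordered keyword pair once per article it appears in
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```

Tools like VOSviewer then render this pair-count matrix as a network, with edge weights given by the counts, which is how clusters such as the glaucoma / diabetic retinopathy / AMD grouping reported above emerge.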
Affiliation(s)
- Jihye Ahn
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
- Moonsung Choi
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
- Convergence Institute of Biomedical Engineering and Biomaterials, Seoul National University of Science and Technology, Seoul, Republic of Korea

18
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024; 8:633-645. [PMID: 38280425 DOI: 10.1016/j.oret.2024.01.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 01/14/2024] [Accepted: 01/19/2024] [Indexed: 01/29/2024]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundal photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing a detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California.
19
Sorrentino FS, Gardini L, Fontana L, Musa M, Gabai A, Maniaci A, Lavalle S, D’Esposito F, Russo A, Longo A, Surico PL, Gagliano C, Zeppieri M. Novel Approaches for Early Detection of Retinal Diseases Using Artificial Intelligence. J Pers Med 2024; 14:690. [PMID: 39063944 PMCID: PMC11278069 DOI: 10.3390/jpm14070690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 06/24/2024] [Accepted: 06/25/2024] [Indexed: 07/28/2024] Open
Abstract
BACKGROUND A growing number of people worldwide are affected by retinal diseases, such as diabetes, vascular occlusions, maculopathy, alterations of systemic circulation, and metabolic syndrome. AIM This review discusses novel technologies in and potential approaches to the detection and diagnosis of retinal diseases with the support of cutting-edge machines and artificial intelligence (AI). METHODS The demand for retinal diagnostic imaging has increased, but the number of eye physicians and technicians is too small to meet it. Thus, algorithms based on AI have been used, providing valid support for early detection and helping doctors make diagnoses and differential diagnoses. AI helps patients living far from hub centers obtain testing and a quick initial diagnosis, sparing them travel and long waits for a medical reply. RESULTS Highly automated systems for screening, early diagnosis, grading, and tailored therapy will facilitate the care of people, even in remote regions or countries. CONCLUSION Massive and extensive use of AI might optimize the automated detection of subtle retinal alterations, allowing eye doctors to provide their best clinical assistance and to select the best options for the treatment of retinal diseases.
Affiliation(s)
- Lorenzo Gardini
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.)
- Luigi Fontana
- Ophthalmology Unit, Department of Surgical Sciences, Alma Mater Studiorum University of Bologna, IRCCS Azienda Ospedaliero-Universitaria Bologna, 40100 Bologna, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Andrea Gabai
- Department of Ophthalmology, Humanitas-San Pio X, 20159 Milan, Italy
- Antonino Maniaci
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Salvatore Lavalle
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Fabiana D’Esposito
- Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW15QH, UK
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
- Andrea Russo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Antonio Longo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Pier Luigi Surico
- Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Caterina Gagliano
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
20
Zivojinovic S, Petrovic Savic S, Prodanovic T, Prodanovic N, Simovic A, Devedzic G, Savic D. Neurosonographic Classification in Premature Infants Receiving Omega-3 Supplementation Using Convolutional Neural Networks. Diagnostics (Basel) 2024; 14:1342. [PMID: 39001234 PMCID: PMC11241385 DOI: 10.3390/diagnostics14131342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2024] [Revised: 06/14/2024] [Accepted: 06/21/2024] [Indexed: 07/16/2024] Open
Abstract
This study focuses on developing a model for the precise determination of ultrasound image density and classification using convolutional neural networks (CNNs) for rapid, timely, and accurate identification of hypoxic-ischemic encephalopathy (HIE). Image density is measured by comparing two regions of interest on ultrasound images of the choroid plexus and brain parenchyma using the Delta E CIE76 value. These regions are then combined and serve as input to the CNN model for classification. The classification results of images into three groups (Normal, Moderate, and Intensive) demonstrate high model efficiency, with an overall accuracy of 88.56%, precision of 90% for Normal, 85% for Moderate, and 88% for Intensive. The overall F-measure is 88.40%, indicating a successful combination of accuracy and completeness in classification. This study is significant as it enables rapid and accurate identification of hypoxic-ischemic encephalopathy in newborns, which is crucial for the timely implementation of appropriate therapeutic measures and improving long-term outcomes for these patients. The application of such advanced techniques allows medical personnel to manage treatment more efficiently, reducing the risk of complications and improving the quality of care for newborns with HIE.
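The image-density comparison described above reduces to the CIE76 color-difference formula, which is simply the Euclidean distance between two CIELAB values. A minimal sketch follows; the function names and the use of per-region mean Lab values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two CIELAB colors."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

def region_density_difference(region_a, region_b):
    """Delta E CIE76 between the mean Lab values of two regions of interest,
    e.g. choroid plexus vs. brain parenchyma ROIs cropped from an ultrasound
    image converted to Lab color space (hypothetical helper)."""
    mean_a = np.asarray(region_a, dtype=float).reshape(-1, 3).mean(axis=0)
    mean_b = np.asarray(region_b, dtype=float).reshape(-1, 3).mean(axis=0)
    return delta_e_cie76(mean_a, mean_b)
```

A larger Delta E between the two ROIs indicates a larger echogenicity difference, which the combined image then carries into the CNN classifier.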
Affiliation(s)
- Suzana Zivojinovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Suzana Petrovic Savic
- Department for Production Engineering, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia; (S.P.S.); (G.D.)
- Tijana Prodanovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Nikola Prodanovic
- Department of Surgery, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Clinic for Orthopaedic and Trauma Surgery, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Aleksandra Simovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
- Goran Devedzic
- Department for Production Engineering, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia; (S.P.S.); (G.D.)
- Dragana Savic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
21
Coyner AS, Young BK, Ostmo SR, Grigorian F, Ells A, Hubbard B, Rodriguez SH, Rishi P, Miller AM, Bhatt AR, Agarwal-Sinha S, Sears J, Chan RVP, Chiang MF, Kalpathy-Cramer J, Binenbaum G, Campbell JP. Use of an Artificial Intelligence-Generated Vascular Severity Score Improved Plus Disease Diagnosis in Retinopathy of Prematurity. Ophthalmology 2024:S0161-6420(24)00339-7. [PMID: 38866367 DOI: 10.1016/j.ophtha.2024.06.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2024] [Revised: 06/04/2024] [Accepted: 06/04/2024] [Indexed: 06/14/2024] Open
Abstract
PURPOSE To evaluate whether providing clinicians with an artificial intelligence (AI)-based vascular severity score (VSS) improves consistency in the diagnosis of plus disease in retinopathy of prematurity (ROP). DESIGN Multireader diagnostic accuracy imaging study. PARTICIPANTS Eleven ROP experts, 9 of whom had been in practice for 10 years or more. METHODS RetCam (Natus Medical Incorporated) fundus images were obtained from premature infants during routine ROP screening as part of the Imaging and Informatics in ROP study between January 2012 and July 2020. From all available examinations, a subset of 150 eye examinations from 110 infants were selected for grading. An AI-based VSS was assigned to each set of images using the i-ROP DL system (Siloam Vision). The clinicians were asked to diagnose plus disease for each examination and to assign an estimated VSS (range, 1-9) at baseline, and then again 1 month later with AI-based VSS assistance. A reference standard diagnosis (RSD) was assigned to each eye examination from the Imaging and Informatics in ROP study based on 3 masked expert labels and the ophthalmoscopic diagnosis. MAIN OUTCOME MEASURES Mean linearly weighted κ value for plus disease diagnosis compared with RSD. Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPR) for labels 1 through 9 compared with RSD for plus disease. RESULTS Expert agreement improved significantly, from substantial (κ value, 0.69 [0.59, 0.75]) to near perfect (κ value, 0.81 [0.71, 0.86]), when AI-based VSS was integrated. Additionally, a significant improvement in plus disease discrimination was achieved as measured by mean AUC (from 0.94 [95% confidence interval (CI), 0.92-0.96] to 0.98 [95% CI, 0.96-0.99]; difference, 0.04 [95% CI, 0.01-0.06]) and AUPR (from 0.86 [95% CI, 0.81-0.90] to 0.95 [95% CI, 0.91-0.97]; difference, 0.09 [95% CI, 0.03-0.14]). 
CONCLUSIONS Providing ROP clinicians with an AI-based measurement of vascular severity in ROP was associated with both improved plus disease diagnosis and improved continuous severity labeling as compared with an RSD for plus disease. If implemented in practice, AI-based VSS could reduce interobserver variability and could standardize treatment for infants with ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
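The linearly weighted κ used above penalizes disagreements in proportion to their ordinal distance, so a 1-vs-2 disagreement on the severity scale costs less than a 1-vs-9 one. A pure-NumPy sketch of the statistic (illustrative; the study's exact computation is not specified here):

```python
import numpy as np

def linearly_weighted_kappa(rater, reference, n_levels):
    """Linearly weighted Cohen's kappa for ordinal labels in {0..n_levels-1}.

    kappa = 1 - sum(W * O) / sum(W * E), where O is the observed joint
    distribution, E the chance-expected one, and W[i, j] = |i - j| / (k - 1).
    """
    rater = np.asarray(rater)
    reference = np.asarray(reference)
    observed = np.zeros((n_levels, n_levels))
    for a, b in zip(rater, reference):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_levels, n_levels))
    weights = np.abs(i - j) / (n_levels - 1)
    return float(1.0 - (weights * observed).sum() / (weights * expected).sum())
```

Perfect agreement yields 1.0 and systematic maximal disagreement yields negative values, matching the usual interpretation bands ("substantial", "near perfect") cited in the abstract.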
Affiliation(s)
- Aaron S Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Benjamin K Young
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Susan R Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Florin Grigorian
- Arkansas Children's Hospital, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Anna Ells
- Calgary Retina Consultants, University of Calgary, Calgary, Alberta, Canada
- Baker Hubbard
- Emory Eye Center, Emory University School of Medicine, Atlanta, Georgia
- Sarah H Rodriguez
- Department of Ophthalmology and Visual Science, University of Chicago, Chicago, Illinois
- Pukhraj Rishi
- Truhlsen Eye Institute, University of Nebraska Medical Centre, Omaha, Nebraska
- Aaron M Miller
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas
- Amit R Bhatt
- Department of Ophthalmology, Texas Children's Hospital, Houston, Texas
- Jonathan Sears
- Cole Eye Institute, The Cleveland Clinic, Cleveland, Ohio
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- National Eye Institute, National Institutes of Health, Bethesda, Maryland; Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Gil Binenbaum
- Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- J Peter Campbell
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon.
22
Kim JH, Hong H, Lee K, Jeong Y, Ryu H, Kim H, Jang SH, Park HK, Han JY, Park HJ, Bae H, Oh BM, Kim WS, Lee SY, Lee SU. AI in evaluating ambulation of stroke patients: severity classification with video and functional ambulation category scale. Top Stroke Rehabil 2024:1-9. [PMID: 38841903 DOI: 10.1080/10749357.2024.2359342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2023] [Accepted: 05/18/2024] [Indexed: 06/07/2024]
Abstract
BACKGROUND The evaluation of gait function and severity classification of stroke patients are important for determining rehabilitation goals and exercise levels. Physicians often evaluate patients' walking ability qualitatively through visual gait analysis using the naked eye, video images, or standardized assessment tools. Gait evaluation through observation relies on the doctor's empirical judgment, potentially introducing subjectivity; research establishing a basis for more objective judgment is therefore crucial. OBJECTIVE To verify a deep learning model that classifies gait image data of stroke patients according to the Functional Ambulation Category (FAC) scale. METHODS Gait videos of 203 stroke patients and 182 healthy individuals recruited from six medical institutions were collected to train a deep learning model for classifying gait severity in stroke patients. The recorded videos were processed using OpenPose. The dataset was randomly split into 80% for training and 20% for testing. RESULTS The deep learning model attained a training accuracy of 0.981 and a test accuracy of 0.903, with area under the curve (AUC) values of 0.93, 0.95, and 0.96 for discriminating among the mild, moderate, and severe stroke groups, respectively. CONCLUSION These results confirm the potential of using vision-based human pose estimation not only to develop gait parameter models but also to classify severity according to the FAC criteria used by physicians. Developing an AI-based severity classification model requires a large amount and variety of data; data collected in nonstandardized real-world environments, rather than laboratories, can also be used meaningfully.
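The evaluation protocol above (random 80/20 split, accuracy on the held-out set) can be sketched with synthetic data standing in for the OpenPose-derived features; the feature count, labels, and all names below are toy assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data: one pose-derived feature vector per
# walking video (e.g., summary statistics of OpenPose keypoint trajectories).
# 203 stroke + 182 healthy subjects, as described in the abstract.
n_samples, n_features = 385, 16
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)  # 0 = mild, 1 = moderate, 2 = severe

# Random 80%/20% train/test split, as in the abstract
idx = rng.permutation(n_samples)
cut = int(0.8 * n_samples)
train_idx, test_idx = idx[:cut], idx[cut:]

def accuracy(pred, truth):
    """Fraction of predictions matching the ground-truth labels."""
    return float(np.mean(np.asarray(pred) == np.asarray(truth)))
```

With 385 samples this yields 308 training and 77 test examples; the reported 0.903 test accuracy would be `accuracy(model(X[test_idx]), y[test_idx])` for whatever classifier is trained.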
Affiliation(s)
- Jeong-Hyun Kim
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Hyeon Hong
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Kyuwon Lee
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Yeji Jeong
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Hokyoung Ryu
- Graduate School of Technology and Innovation Management, Hanyang University, Seoul, South Korea
- Hyundo Kim
- Department of Intelligence Computing, Hanyang University, Seoul, South Korea
- Seong-Ho Jang
- Department of Rehabilitation Medicine, Hanyang University Guri Hospital, Gyeonggi-do, South Korea
- Hyeng-Kyu Park
- Department of Physical & Rehabilitation Medicine, Regional Cardiocerebrovascular Center, Center for Aging and Geriatrics, Chonnam National University Medical School & Hospital, Gwangju, South Korea
- Jae-Young Han
- Department of Physical & Rehabilitation Medicine, Regional Cardiocerebrovascular Center, Center for Aging and Geriatrics, Chonnam National University Medical School & Hospital, Gwangju, South Korea
- Hye Jung Park
- Department of Rehabilitation Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
- Hasuk Bae
- Department of Rehabilitation Medicine, Ewha Womans University, Seoul, South Korea
- Byung-Mo Oh
- Department of Rehabilitation, Seoul National University Hospital, Seoul, South Korea
- Won-Seok Kim
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul, South Korea
- Sang Yoon Lee
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, Seoul, South Korea
- Shi-Uk Lee
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Department of Physical Medicine & Rehabilitation, College of Medicine, Seoul National University, Seoul, South Korea
23
Chen S, Zhao X, Wu Z, Cao K, Zhang Y, Tan T, Lam CT, Xu Y, Zhang G, Sun Y. Multi-risk factors joint prediction model for risk prediction of retinopathy of prematurity. EPMA J 2024; 15:261-274. [PMID: 38841619 PMCID: PMC11147992 DOI: 10.1007/s13167-024-00363-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Accepted: 04/17/2024] [Indexed: 06/07/2024]
Abstract
Purpose Retinopathy of prematurity (ROP) is a retinal vascular proliferative disease common in low-birth-weight and premature infants and is one of the main causes of blindness in children. In the context of predictive, preventive, and personalized medicine (PPPM/3PM), early screening, identification, and treatment of ROP will directly improve patients' long-term visual prognosis and reduce the risk of blindness. Our objective was to establish an artificial intelligence (AI) algorithm combined with clinical demographics to create a risk model for ROP, including treatment-requiring retinopathy of prematurity (TR-ROP). Methods A total of 22,569 infants who underwent routine ROP screening in Shenzhen Eye Hospital from March 2003 to September 2023 were collected, including 3335 infants with ROP, of whom 1234 had TR-ROP. Two machine learning methods (logistic regression and decision tree) and a deep learning method (multilayer perceptron) were trained using combinations of risk factors such as birth weight (BW), gestational age (GA), gender, multiple birth (MB), and mode of delivery (MD) to predict the risk of ROP and TR-ROP. We used five evaluation metrics to assess the performance of the risk prediction model, with the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUCPR) as the main measures. Results In the risk prediction for ROP, BW + GA demonstrated the optimal performance (mean ± SD, AUCPR: 0.4849 ± 0.0175; AUC: 0.8124 ± 0.0033). In the risk prediction of TR-ROP, reasonable performance was achieved using GA + BW + gender + MD + MB (AUCPR: 0.2713 ± 0.0214; AUC: 0.8328 ± 0.0088).
Conclusions Combining risk factors with AI in ROP screening programs could achieve risk prediction of ROP and TR-ROP, detect TR-ROP earlier, and reduce the number of ROP examinations and unnecessary physiological stress in low-risk infants. Therefore, combining ROP-related biometric information with AI is a cost-effective strategy for predictive diagnostics, targeted prevention, and personalization of medical services in early screening and treatment of ROP.
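The AUC metric reported above can be computed without any ML framework via its Mann-Whitney interpretation: the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. The birth-weight/gestational-age values and risk weights below are hypothetical toy data, not the study's trained model:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC as the probability that a random positive scores above a random
    negative (Mann-Whitney U statistic); ties count half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Hypothetical toy cohort: lower birth weight (BW) and gestational age (GA)
# raise ROP risk; the weights are hand-set for illustration only.
bw = np.array([800, 950, 1200, 1600, 2100, 2500])  # grams
ga = np.array([26, 27, 29, 31, 34, 36])            # weeks
label = np.array([1, 1, 1, 0, 0, 0])               # 1 = ROP
risk = -0.002 * bw - 0.1 * ga                      # higher score = riskier
```

For imbalanced outcomes such as TR-ROP (1234 of 22,569 infants), the precision-recall curve (AUCPR) is the more informative companion metric, which is why the study reports both.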
Affiliation(s)
- Shaobin Chen
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Xinyu Zhao
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
- Zhenquan Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
- Kangyang Cao
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Yulin Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Chan-Tong Lam
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Yanwu Xu
- School of Future Technology, South China University of Technology, Guangzhou, China; Pazhou Lab, China
- Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
- Yue Sun
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, 5612 AP The Netherlands
24
Abstract
OBJECTIVE To summarize current research progress on machine learning and venous thromboembolism. METHODS The literature of recent years on risk factors, diagnosis, prevention, and prognosis in machine learning and venous thromboembolism was reviewed. RESULTS Machine learning is central to the future of biomedical research, personalized medicine, and computer-aided diagnosis, and will significantly promote the development of biomedical research and healthcare. However, many medical professionals are not familiar with it. In this review, we introduce several machine learning algorithms commonly used in medicine, discuss the application of machine learning to venous thromboembolism, and outline the challenges and opportunities of machine learning in medicine. CONCLUSION The incidence of venous thromboembolism is high, and its diagnostic measures are diverse; with machine learning as a research tool, dedicated research combining venous thromboembolism and machine learning needs to be strengthened.
Affiliation(s)
- Shirong Zou
- West China Hospital of Medicine, West China Hospital Operation Room/West China School of Nursing, Sichuan University, Chengdu, China
- Zhoupeng Wu
- Department of Vascular Surgery, West China Hospital, Sichuan University, Chengdu, China
25
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes, or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes, or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI, 0.879-1.00) and an overall specificity of 94.4% (95% CI, 0.933-0.953). The majority of false-positive software detections (5.6%) were for glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
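The reported sensitivity and specificity follow directly from confusion-matrix counts that can be reconstructed from the abstract's figures (2056 eyes, 149 flagged by the software, 35 confirmed abnormal, none missed); a quick sanity check:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reconstructed from the abstract (an inference, not stated verbatim):
# 35 true positives, 0 false negatives (nothing missed),
# 149 - 35 = 114 false positives, 2056 - 149 = 1907 true negatives.
sens, spec = sensitivity_specificity(tp=35, fn=0, tn=2056 - 149, fp=149 - 35)
```

This reproduces the reported 100% sensitivity and 94.4% specificity, the trade-off one expects from a screening tool tuned to miss nothing at the cost of some false alarms.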
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France.
26
Ramakrishnan MS, Kovach JL, Wykoff CC, Berrocal AM, Modi YS. American Society of Retina Specialists Clinical Practice Guidelines on Multimodal Imaging for Retinal Disease. JOURNAL OF VITREORETINAL DISEASES 2024; 8:234-246. [PMID: 38770073 PMCID: PMC11102716 DOI: 10.1177/24741264241237012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
Purpose: Advancements in retinal imaging have augmented our understanding of the pathology and structure-function relationships of retinal disease. No single diagnostic test is sufficient; rather, diagnostic and management strategies increasingly involve the synthesis of multiple imaging modalities. Methods: This literature review and editorial offer practical clinical guidelines for how the retina specialist can use multimodal imaging to manage retinal conditions. Results: Various imaging modalities offer information on different aspects of retinal structure and function. For example, optical coherence tomography (OCT) and B-scan ultrasonography can provide insights into the microstructural anatomy; fluorescein angiography (FA), indocyanine green angiography (ICGA), and OCT angiography (OCTA) can reveal vascular integrity and perfusion status; and near-infrared reflectance and fundus autofluorescence (FAF) can characterize molecular components within tissues. Managing retinal vascular diseases often includes fundus photography, OCT, OCTA, and FA to evaluate for macular edema, retinal ischemia, and the secondary complications of neovascularization (NV). OCT and FAF play a key role in diagnosing and treating maculopathies. FA, OCTA, and ICGA can help identify macular NV, posterior uveitis, and choroidal venous insufficiency, which guides treatment strategies. Finally, OCT and B-scan ultrasonography can help with preoperative planning and prognostication in vitreoretinal surgical conditions. Conclusions: Today, the retina specialist has access to numerous retinal imaging modalities that can augment the clinical examination to help diagnose and manage retinal conditions. Understanding the capabilities and limitations of each modality is critical to maximizing its clinical utility.
Affiliation(s)
- Meera S. Ramakrishnan
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, NY, USA
- Department of Ophthalmology, New York University Langone Medical Center, New York, NY, USA
- Jaclyn L. Kovach
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Charlie C. Wykoff
- Retina Consultants of Houston, Blanton Eye Institute, Houston Methodist Hospital, Weill Cornell Medical College, Houston, TX, USA
- Audina M. Berrocal
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Yasha S. Modi
- Department of Ophthalmology, New York University Langone Medical Center, New York, NY, USA
27
Marra KV, Chen JS, Robles-Holmes HK, Miller J, Wei G, Aguilar E, Ideguchi Y, Ly KB, Prenner S, Erdogmus D, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Development of a Semi-automated Computer-based Tool for the Quantification of Vascular Tortuosity in the Murine Retina. OPHTHALMOLOGY SCIENCE 2024; 4:100439. [PMID: 38361912 PMCID: PMC10867761 DOI: 10.1016/j.xops.2023.100439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 10/10/2023] [Accepted: 11/27/2023] [Indexed: 02/17/2024]
Abstract
Purpose The murine oxygen-induced retinopathy (OIR) model is one of the most widely used animal models of ischemic retinopathy, mimicking hallmark pathophysiology of initial vaso-obliteration (VO) resulting in ischemia that drives neovascularization (NV). In addition to NV and VO, human ischemic retinopathies, including retinopathy of prematurity (ROP), are characterized by increased vascular tortuosity. Vascular tortuosity is an indicator of disease severity, need to treat, and treatment response in ROP. Current literature investigating novel therapeutics in the OIR model often report their effects on NV and VO, and measurements of vascular tortuosity are less commonly performed. No standardized quantification of vascular tortuosity exists to date despite this metric's relevance to human disease. This proof-of-concept study aimed to apply a previously published semi-automated computer-based image analysis approach (iROP-Assist) to develop a new tool to quantify vascular tortuosity in mouse models. Design Experimental study. Subjects C57BL/6J mice subjected to the OIR model. Methods In a pilot study, vasculature was manually segmented on flat-mount images of OIR and normoxic (NOX) mice retinas and segmentations were analyzed with iROP-Assist to quantify vascular tortuosity metrics. In a large cohort of age-matched (postnatal day 12 [P12], P17, P25) NOX and OIR mice retinas, NV, VO, and vascular tortuosity were quantified and compared. In a third experiment, vascular tortuosity in OIR mice retinas was quantified on P17 following intravitreal injection with anti-VEGF (aflibercept) or Immunoglobulin G isotype control on P12. Main Outcome Measures Vascular tortuosity. Results Cumulative tortuosity index was the best metric produced by iROP-Assist for discriminating between OIR mice and NOX controls. Increased vascular tortuosity correlated with disease activity in OIR. Treatment of OIR mice with aflibercept rescued vascular tortuosity. 
Conclusions Vascular tortuosity is a quantifiable feature of the OIR model that correlates with disease severity and may be quickly and accurately quantified using the iROP-Assist algorithm. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
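The abstract does not define the cumulative tortuosity index computed by iROP-Assist, but tortuosity metrics of this family are commonly built on the ratio of a vessel centerline's arc length to its chord length. A minimal sketch of that standard ratio, under that assumption (the function name and toy coordinates are illustrative, not taken from the paper):

```python
import math

def arc_chord_tortuosity(points):
    """Tortuosity of a vessel centerline as arc length / chord length.

    `points` is a list of (x, y) coordinates sampled along the segmented
    vessel. A perfectly straight vessel scores 1.0; curvier vessels score
    higher.
    """
    arc = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

# A straight segment has tortuosity exactly 1.0; any bend raises it.
straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 0), (1, 1)]
```

A per-image index could then aggregate this ratio over all segmented vessel branches, which is one plausible reading of "cumulative" here.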
Affiliation(s)
- Kyle V. Marra
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- School of Medicine, University of California San Diego, San Diego, California
- Jimmy S. Chen
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
- Joseph Miller
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
- Guoqin Wei
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- Edith Aguilar
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- Yoichiro Ideguchi
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- Kristine B. Ly
- College of Optometry, Pacific University, Forest Grove, Oregon
- Sofia Prenner
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
- Deniz Erdogmus
- Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
- Napoleone Ferrara
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- Eric Nudleman
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
28
Padhi TR, Bhunia S, Das T, Nayak S, Jalan M, Rath S, Barik B, Ali H, Rani PK, Routray D, Jalali S. Outcome of real-time telescreening for retinopathy of prematurity using videoconferencing in a community setting in Eastern India. Indian J Ophthalmol 2024; 72:697-703. [PMID: 38389241 PMCID: PMC11168531 DOI: 10.4103/ijo.ijo_2024_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 11/01/2023] [Accepted: 11/06/2023] [Indexed: 02/24/2024] Open
Abstract
PURPOSE To evaluate the feasibility and outcome of a real-time retinopathy of prematurity (ROP) telescreening strategy using videoconferencing in a community setting in India. METHOD In a prospective study, trained allied ophthalmic personnel obtained fundus images in the presence of the parents and local childcare providers. An ROP specialist located at a tertiary center analyzed the images and counseled the parents in real time using videoconferencing software. A subset of babies was also examined at the bedside with indirect ophthalmoscopy by an ROP-trained ophthalmologist. The data were analyzed using descriptive statistics, sensitivity, specificity, positive and negative predictive values, and the correlation coefficient. RESULTS Over 9 months, we examined 576 babies (1152 eyes) in six rural districts of India. The parents accepted the model, recognizing that a remotely located specialist was evaluating all images in real time. The strategy saved ROP specialists 477 h of travel time (47.7 working days) and parents 47,406 h (1975.25 days), along with the associated travel costs. In a subgroup analysis (100 babies, 200 eyes), the technology had high sensitivity (97.2%) and negative predictive value (92.7%) and showed substantial agreement (k = 0.708) with bedside indirect ophthalmoscopy by ROP specialists for the detection of treatment-warranting ROP. The strategy also helped train the participants. CONCLUSION Real-time ROP telescreening using videoconferencing is sensitive enough to detect treatment-warranting ROP and saves skilled workforce and time. The real-time audiovisual connection allows optimal supervision of imaging, provides excellent training opportunities, and connects ophthalmologists directly with the parents.
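The sensitivity, negative predictive value, and kappa figures reported above are standard functions of a 2×2 confusion table. A sketch of how such figures are typically derived (function names and the toy counts are illustrative, not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Toy table: 35 true positives, 1 false negative, 5 false positives, 59 true negatives.
metrics = diagnostic_metrics(tp=35, fp=5, fn=1, tn=59)
```

By convention, kappa around 0.61-0.80 is read as "substantial agreement", which is how the study labels its k = 0.708.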
Affiliation(s)
- Tapas R Padhi
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Souvik Bhunia
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Taraprasad Das
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
- Sameer Nayak
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Manav Jalan
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Suryasnata Rath
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Biswajeet Barik
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
- Hasnat Ali
- Department of Biostatistics, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Padmaja Kumari Rani
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
- Dipanwita Routray
- Department of Community Medicine, District Medical College Hospital, Keonjhar, Odisha, India
- Subhadra Jalali
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
29
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. [PMID: 38654344 PMCID: PMC11036694 DOI: 10.1186/s40942-024-00554-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
30
Coyner AS, Murickan T, Oh MA, Young BK, Ostmo SR, Singh P, Chan RVP, Moshfeghi DM, Shah PK, Venkatapathy N, Chiang MF, Kalpathy-Cramer J, Campbell JP. Multinational External Validation of Autonomous Retinopathy of Prematurity Screening. JAMA Ophthalmol 2024; 142:327-335. [PMID: 38451496 PMCID: PMC10921347 DOI: 10.1001/jamaophthalmol.2024.0045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Accepted: 12/15/2023] [Indexed: 03/08/2024]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening. Objective To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP. Design, Setting, and Participants This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021, and data were analyzed from July to December 2023. Exposures An imaging processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine. Main Outcomes and Measures The area under the receiver operating characteristics curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye examination and patient levels. Results The prevalence of mtmROP and type 1 ROP were 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. 
At the cross-sectional examination level, mtmROP detection had high sensitivity (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis. Conclusions and Relevance Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.
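The examination-level AUROC reported above has a simple probabilistic reading: the chance that a randomly chosen positive examination receives a higher model score than a randomly chosen negative one. A brute-force sketch of that definition (the scores are made-up values, not outputs of the study's model):

```python
def auroc(pos_scores, neg_scores):
    """AUROC as P(score of a positive > score of a negative); ties count 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated toy scores give an AUROC of 1.0.
perfect = auroc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1])
```

Production code would use an O(n log n) rank-based formulation, but the pairwise version makes the interpretation of values like 0.985 explicit.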
Affiliation(s)
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
- Tom Murickan
- Casey Eye Institute, Oregon Health & Science University, Portland
- Minn A. Oh
- Casey Eye Institute, Oregon Health & Science University, Portland
- Susan R. Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
- Praveer Singh
- Ophthalmology, University of Colorado School of Medicine, Aurora
- R. V. Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago
- Darius M. Moshfeghi
- Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Parag K. Shah
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- National Library of Medicine, National Institutes of Health, Bethesda, Maryland
31
Sharafi SM, Ebrahimiadib N, Roohipourmoallai R, Farahani AD, Fooladi MI, Khalili Pour E. Automated diagnosis of plus disease in retinopathy of prematurity using quantification of vessels characteristics. Sci Rep 2024; 14:6375. [PMID: 38493272 PMCID: PMC10944526 DOI: 10.1038/s41598-024-57072-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 03/14/2024] [Indexed: 03/18/2024] Open
Abstract
Plus disease is characterized by abnormal alterations of the retinal vasculature in prematurely born neonates, and its diagnosis has been shown to be subjective and qualitative. The use of quantitative methods and computer-based image analysis to make the diagnosis of Plus disease more objective is extensively established in the literature. This study presents a computer-based image analysis method that automatically distinguishes Plus from non-Plus images. The proposed methodology quantitatively analyzes the vascular characteristics associated with Plus disease, thereby aiding physicians in making informed judgments. A collection of 76 posterior retinal images from a diverse group of infants screened for Retinopathy of Prematurity (ROP) was obtained. The reference standard diagnosis was defined as the majority of the labels assigned by three ROP experts over two separate sessions. Retinal vessels were segmented using a semi-automatic methodology. Computer algorithms were developed to compute the tortuosity, dilation, and density of vessels in various retinal regions as candidate discriminative characteristics, and a set of selected features was provided to a classifier to distinguish Plus from non-Plus images. This study included 76 infants (49 [64.5%] boys) with a mean birth weight of 1305 ± 427 g and mean gestational age of 29.3 ± 3 weeks. Inter-expert agreement on the diagnosis of Plus disease averaged 79% (standard deviation 5.3%), and intra-expert agreement averaged 85% (standard deviation 3%). The average tortuosity of the five most tortuous vessels was significantly higher in Plus images than in non-Plus images (p ≤ 0.0001). Point-based curvature values were also significantly higher in Plus images (p ≤ 0.0001). The maximum vessel diameter within a region extending 5 disc diameters from the optic disc border (5DD) was significantly greater in Plus images (p ≤ 0.0001), as was vessel density (p ≤ 0.0001). The classifier's accuracy in distinguishing Plus from non-Plus images, estimated by tenfold cross-validation, was 0.86 ± 0.01, higher than the diagnostic accuracy of one of the three experts relative to the reference standard. The implemented algorithm detected Plus disease in retinopathy of prematurity with accuracy comparable to expert diagnosis. Objective analysis of vessel characteristics makes a quantitative assessment of disease progression possible, and this automated system could enhance physicians' ability to diagnose Plus disease, contributing to ROP management through the integration of traditional ophthalmoscopy and image-based telemedicine.
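The 0.86 ± 0.01 accuracy above was estimated with tenfold cross-validation. The mechanics of that estimate can be sketched as follows (the fold splitter, the toy data, and the fixed-threshold "classifier" are all illustrative assumptions, not the study's pipeline):

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous, near-equal folds of index lists."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(X, y, fit, predict, k=10):
    """Mean held-out accuracy over k train/test splits."""
    accs = []
    for test_idx in kfold_indices(len(X), k):
        held_out = set(test_idx)
        train = [(X[i], y[i]) for i in range(len(X)) if i not in held_out]
        model = fit(train)  # train only on the k-1 remaining folds
        correct = sum(predict(model, X[i]) == y[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / len(accs)

# Toy task: classify whether a number is >= 10, with a fixed-threshold "model".
X = list(range(20))
y = [int(x >= 10) for x in X]
acc = cross_val_accuracy(X, y, fit=lambda train: None,
                         predict=lambda model, x: int(x >= 10), k=10)
```

In practice one would also shuffle or stratify the folds; the contiguous split is kept here only for brevity.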
Affiliation(s)
- Sayed Mehran Sharafi
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
- Nazanin Ebrahimiadib
- Ophthalmology Department, College of Medicine, University of Florida, Gainesville, FL, USA
- Ramak Roohipourmoallai
- Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Afsar Dastjani Farahani
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
- Marjan Imani Fooladi
- Clinical Pediatric Ophthalmology Department, UPMC, Children's Hospital of Pittsburgh, Pittsburgh, PA, USA
- Elias Khalili Pour
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
32
Liu R, Li X, Liu Y, Du L, Zhu Y, Wu L, Hu B. A high-speed microscopy system based on deep learning to detect yeast-like fungi cells in blood. Bioanalysis 2024; 16:289-303. [PMID: 38334080 DOI: 10.4155/bio-2023-0193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2024] Open
Abstract
Background: Blood-invasive fungal infections can cause the death of patients, while diagnosis of fungal infections is challenging. Methods: A high-speed microscopy detection system was constructed that included a microfluidic system, a microscope connected to a high-speed camera and a deep learning analysis section. Results: For training data, the sensitivity and specificity of the convolutional neural network model were 93.5% (92.7-94.2%) and 99.5% (99.1-99.5%), respectively. For validating data, the sensitivity and specificity were 81.3% (80.0-82.5%) and 99.4% (99.2-99.6%), respectively. Cryptococcal cells were found in 22.07% of blood samples. Conclusion: This high-speed microscopy system can analyze fungal pathogens in blood samples rapidly with high sensitivity and specificity and can help dramatically accelerate the diagnosis of fungal infectious diseases.
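The sensitivity and specificity above are quoted with confidence intervals; for a binomial proportion such as sensitivity, the Wilson score interval is a common choice (the abstract does not state which method was used, so this is a sketch of one standard option, with invented counts):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    `z` = 1.96 corresponds to a 95% interval.
    """
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# e.g. 93 of 100 fungal cells detected -> an interval around 0.93
low, high = wilson_interval(93, 100)
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] even for proportions near the boundaries, which matters for specificities like 99.5%.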
Affiliation(s)
- Ruiqi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Xiaojie Li
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
- Yingyi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Lijun Du
- Department of Clinical Laboratory, Huadu District People's Hospital of Guangzhou, Guangdong, China
- Yingzhu Zhu
- Guangzhou Waterrock Gene Technology, Guangdong, China
- Lichuan Wu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Bo Hu
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
33
Demirbaş KC, Yıldız M, Saygılı S, Canpolat N, Kasapçopur Ö. Artificial Intelligence in Pediatrics: Learning to Walk Together. Turk Arch Pediatr 2024; 59:121-130. [PMID: 38454219 PMCID: PMC11059951 DOI: 10.5152/turkarchpediatr.2024.24002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Accepted: 02/02/2024] [Indexed: 03/09/2024]
Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being described, along with gene editing and robotics, as part of the Fourth Industrial Revolution. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not merely an additional tool but a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are indeed witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for tasks ranging from the simple to the complex: diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, certain risks and threats accompany this advancement, including biases that may contribute to health disparities, as well as inaccuracies. Therefore, it is crucial to recognize and address the technical, ethical, and legal challenges and to explore the benefits in both clinical and research fields.
Affiliation(s)
- Kaan Can Demirbaş
- İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
- Mehmet Yıldız
- Department of Pediatric Rheumatology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
- Seha Saygılı
- Department of Pediatric Nephrology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
- Nur Canpolat
- Department of Pediatric Nephrology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
- Özgür Kasapçopur
- Department of Pediatric Rheumatology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
34
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. [PMID: 38161085 DOI: 10.1016/j.oooo.2023.10.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 10/02/2023] [Accepted: 10/04/2023] [Indexed: 01/03/2024]
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns of systematic error in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis nested in a previous trial in which a CNN model performed automated classification of elementary lesions from clinical images of oral lesions. The CNN's classification errors formed the dataset for this study. A total of 116 real outputs diverged from the estimated outputs, representing 7.6% of the total images analyzed by the CNN. RESULTS The discrepancies between the real and estimated outputs were associated with problems of image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS Qualitative analysis of the errors in automated classification of clinical images confirmed the impact of image quality and identified the strong impact of the data augmentation process. Knowing which factors models weigh when making decisions can increase confidence in the high classification potential of CNNs.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Juliana Romanini
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
- Janete Dias Almeida
- Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
- Jonas de Almeida Rodrigues
- Department of Surgery and Orthopaedics, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil; Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
35
Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024; 30:341-353. [PMID: 37585566 DOI: 10.1089/tmj.2023.0041] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/18/2023] Open
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in this technological spectrum, of special interest for primary health care services. Fundus imaging with smartphones has improved and democratized the teaching of fundoscopy and, in particular, contributes greatly to screening for diseases with high rates of blindness. Smartphone eye examination is essentially a cheap and safe method, and thus supports public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: We surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date limit. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard. Results: Few databases have complete metadata providing demographic data, and few contain sufficient images involving current or new therapies. These databases contain images captured using different systems and formats, and information is often excluded without essential detail on the reasons for exclusion, further distancing them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results.
Conclusions: The high level of agreement between conventional and smartphone methods constitutes a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and benefiting public health policies, smartphone eye examination can make safe, high-quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
- Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
36
Ong KTI, Kwon T, Jang H, Kim M, Lee CS, Byeon SH, Kim SS, Yeo J, Choi EY. Multitask Deep Learning for Joint Detection of Necrotizing Viral and Noninfectious Retinitis From Common Blood and Serology Test Data. Invest Ophthalmol Vis Sci 2024; 65:5. [PMID: 38306107 PMCID: PMC10851173 DOI: 10.1167/iovs.65.2.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 01/09/2024] [Indexed: 02/03/2024] Open
Abstract
Purpose Necrotizing viral retinitis is a serious eye infection that requires immediate treatment to prevent permanent vision loss. Uncertain clinical suspicion can result in delayed diagnosis, inappropriate administration of corticosteroids, or repeated intraocular sampling. To quickly and accurately distinguish between viral and noninfectious retinitis, we aimed to develop deep learning (DL) models using only noninvasive blood test data. Methods This cross-sectional study trained DL models using common blood and serology test data from 3080 patients (noninfectious uveitis of the posterior segment [NIU-PS] = 2858, acute retinal necrosis [ARN] = 66, cytomegalovirus [CMV] retinitis = 156). Following the development of separate base DL models for ARN and CMV retinitis, multitask learning (MTL) was employed to enable simultaneous discrimination. Advanced MTL models incorporating adversarial training were used to enhance DL feature extraction from the small, imbalanced data. We evaluated model performance, disease-specific important features, and the causal relationship between DL features and detection results. Results The presented models all achieved excellent detection performance, with the adversarial MTL model achieving the highest areas under the receiver operating characteristic curve (0.932 for ARN and 0.982 for CMV retinitis). Significant features for ARN detection included varicella-zoster virus (VZV) immunoglobulin M (IgM), herpes simplex virus immunoglobulin G, and neutrophil count, while for CMV retinitis they encompassed VZV IgM, CMV IgM, and lymphocyte count. The adversarial MTL model exhibited substantial changes in detection outcomes when the key features were contaminated, indicating stronger causality between DL features and detection results. Conclusions The adversarial MTL model, using blood test data, may serve as a reliable adjunct for the expedited simultaneous diagnosis of ARN, CMV retinitis, and NIU-PS in real clinical settings.
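Multitask learning of the kind described typically shares one feature encoder across tasks while giving each disease its own output head, with the per-task losses summed into a single objective. A toy forward pass under those assumptions (random weights and labels, plain Python; this is not the study's architecture, which additionally uses adversarial training):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(preds, labels):
    """Binary cross-entropy, clipped away from log(0)."""
    eps = 1e-7
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for p, y in zip(preds, labels)) / len(preds)

def linear(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# Toy blood-test features: 8 patients x 5 lab values, random labels.
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(8)]
y_arn = [random.randint(0, 1) for _ in range(8)]  # task 1: ARN vs. not
y_cmv = [random.randint(0, 1) for _ in range(8)]  # task 2: CMV retinitis vs. not

W_shared = [[random.gauss(0, 1) for _ in range(5)] for _ in range(4)]
w_arn = [random.gauss(0, 1) for _ in range(4)]    # task-specific head 1
w_cmv = [random.gauss(0, 1) for _ in range(4)]    # task-specific head 2

# Shared encoder (one tanh layer), then one sigmoid head per task.
H = [[math.tanh(linear(x, w_row)) for w_row in W_shared] for x in X]
p_arn = [sigmoid(linear(h, w_arn)) for h in H]
p_cmv = [sigmoid(linear(h, w_cmv)) for h in H]

# Multitask objective: the per-task losses are simply summed.
loss = bce(p_arn, y_arn) + bce(p_cmv, y_cmv)
```

Gradient descent on this summed loss updates the shared encoder with signal from both tasks, which is the mechanism that lets a small ARN cohort borrow strength from the larger CMV and NIU-PS data.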
Affiliation(s)
- Kai Tzu-iunn Ong
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Taeyoon Kwon
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Harok Jang
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Min Kim
- Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Christopher Seungkyu Lee
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Suk Ho Byeon
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sung Soo Kim
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jinyoung Yeo
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
- Eun Young Choi
- Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
37
Li X, Owen LA, Taylor KD, Ostmo S, Chen YDI, Coyner AS, Sonmez K, Hartnett ME, Guo X, Ipp E, Roll K, Genter P, Chan RVP, DeAngelis MM, Chiang MF, Campbell JP, Rotter JI. Genome-wide association identifies novel ROP risk loci in a multiethnic cohort. Commun Biol 2024; 7:107. [PMID: 38233474 PMCID: PMC10794688 DOI: 10.1038/s42003-023-05743-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2023] [Accepted: 12/26/2023] [Indexed: 01/19/2024] Open
Abstract
We conducted a genome-wide association study (GWAS) in a multiethnic cohort of 920 at-risk infants for retinopathy of prematurity (ROP), a major cause of childhood blindness, identifying 1 locus at the genome-wide significance level (p < 5 × 10⁻⁸) and 9 with significance of p < 5 × 10⁻⁶ for ROP ≥ stage 3. The most significant locus, rs2058019, reached genome-wide significance within the full multiethnic cohort (p = 4.96 × 10⁻⁹), with Hispanic and European ancestry infants driving the association. The lead single nucleotide polymorphism (SNP) falls in an intronic region within the Glioma-associated oncogene family zinc finger 3 (GLI3) gene. The relevance of GLI3 and other top-associated genes to human ocular disease was substantiated through in-silico extension analyses, genetic risk score analysis, and expression profiling in human donor eye tissues. Thus, we identify a novel locus at GLI3 with relevance to retinal biology, supporting genetic susceptibilities for ROP risk with possible variability by race and ethnicity.
Affiliation(s)
- Xiaohui Li
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Leah A Owen
  - Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Population Health Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Obstetrics and Gynecology, University of Utah, Salt Lake City, UT, USA
  - Department of Ophthalmology, University at Buffalo the State University of New York, Buffalo, NY, USA
- Kent D Taylor
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Susan Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Yii-Der Ida Chen
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Aaron S Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Kemal Sonmez
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Xiuqing Guo
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Eli Ipp
  - Division of Endocrinology and Metabolism, Department of Medicine, The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, Torrance, CA, USA
- Kathryn Roll
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
- Pauline Genter
  - Division of Endocrinology and Metabolism, Department of Medicine, The Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, Torrance, CA, USA
- R V Paul Chan
  - Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Margaret M DeAngelis
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
  - Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Population Health Sciences, University of Utah, Salt Lake City, UT, USA
  - Department of Ophthalmology, University at Buffalo the State University of New York, Buffalo, NY, USA
  - Department of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
  - Department of Neuroscience, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
  - Department of Genetics, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo/State University of New York (SUNY), Buffalo, NY, USA
- Michael F Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, MD, USA
  - National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- J Peter Campbell
  - Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Jerome I Rotter
  - Institute for Translational Genomics and Population Sciences, The Lundquist Institute for Biomedical Innovation; Department of Pediatrics, Harbor-UCLA Medical Center, Torrance, CA, USA
38
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607 PMCID: PMC10784504 DOI: 10.1038/s41746-023-00991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 12/11/2023] [Indexed: 01/13/2024] Open
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aims to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group), and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0% ~ 86.9%), 72.9% (95% CI, 70.3% ~ 75.6%), and 85.5% (95% CI, 83.5% ~ 87.4%) in the test group, control group, and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% ~ 14.9%) with statistical significance (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2% ~ 100.0%) and comparable specificities (90.8% ~ 98.7%) compared with the control group (sensitivities, 50.0% ~ 100.0%; specificities, 96.7% ~ 99.8%). The DLS group presented performance similar to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% ~ 100.0%; specificity, 89.0% ~ 98.0%). The proposed DLS provided a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, particularly by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
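The confidence intervals for diagnostic consistency can be approximated with a normal (Wald) interval for a proportion; a rough stdlib sketch (the exact sample size and interval method used by the authors may differ, so this only approximately reproduces the reported 83.0% ~ 86.9%):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 84.9% diagnostic consistency over 1493 annotated images (test group)
lo, hi = wald_ci(0.849, 1493)
print(f"{lo:.3f} ~ {hi:.3f}")  # 0.831 ~ 0.867
```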
Affiliation(s)
- Bing Li
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu
  - Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao
  - Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu
  - Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao
  - Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He
  - School of Information Science and Technology, North China University of Technology, Beijing, China
  - Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang
  - Visionary Intelligence Ltd., Beijing, China
- Xirong Li
  - MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
  - Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
  - Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
39
Nguyen TTP, Young BK, Coyner A, Ostmo S, Chan RVP, Kalpathy-Cramer J, Chiang MF, Campbell JP. Discrepancies in Diagnosis of Treatment-Requiring Retinopathy of Prematurity. Ophthalmol Retina 2024; 8:88-91. [PMID: 37689182 PMCID: PMC10841666 DOI: 10.1016/j.oret.2023.09.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 08/21/2023] [Accepted: 09/01/2023] [Indexed: 09/11/2023]
Abstract
Fifty-two percent of treated eyes with retinopathy of prematurity in a multicenter cohort did not require intervention per evaluation by an independent reading center. An artificial intelligence system detected worse vascular severity in the group designated as treatment-requiring by the reading center.
Affiliation(s)
- Thanh-Tin P Nguyen
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Benjamin K Young
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron Coyner
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
  - Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R V Paul Chan
  - Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
  - National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- J Peter Campbell
  - Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
40
Sullivan BA, Beam K, Vesoulis ZA, Aziz KB, Husain AN, Knake LA, Moreira AG, Hooven TA, Weiss EM, Carr NR, El-Ferzli GT, Patel RM, Simek KA, Hernandez AJ, Barry JS, McAdams RM. Transforming neonatal care with artificial intelligence: challenges, ethical consideration, and opportunities. J Perinatol 2024; 44:1-11. [PMID: 38097685 PMCID: PMC10872325 DOI: 10.1038/s41372-023-01848-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 11/21/2023] [Accepted: 11/30/2023] [Indexed: 12/17/2023]
Abstract
Artificial intelligence (AI) offers tremendous potential to transform neonatology through improved diagnostics, personalized treatments, and earlier prevention of complications. However, there are many challenges to address before AI is ready for clinical practice. This review defines key AI concepts and discusses the ethical considerations and implicit biases associated with AI. Next, we review examples from the literature of AI already being explored in neonatology research and suggest future potential applications of AI. Examples discussed in this article include predicting outcomes such as sepsis, optimizing oxygen therapy, and image analysis to detect brain injury and retinopathy of prematurity. Realizing AI's potential necessitates collaboration between diverse stakeholders across the entire process of incorporating AI tools in the NICU to address testability, usability, bias, and transparency. With multi-center and multi-disciplinary collaboration, AI holds tremendous potential to transform the future of neonatology.
Affiliation(s)
- Brynne A Sullivan
  - Division of Neonatology, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
- Kristyn Beam
  - Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Zachary A Vesoulis
  - Division of Newborn Medicine, Department of Pediatrics, Washington University in St. Louis, St. Louis, MO, USA
- Khyzer B Aziz
  - Division of Neonatology, Department of Pediatrics, Johns Hopkins University, Baltimore, MD, USA
- Ameena N Husain
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Lindsey A Knake
  - Division of Neonatology, Department of Pediatrics, University of Iowa, Iowa City, IA, USA
- Alvaro G Moreira
  - Division of Neonatology, Department of Pediatrics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Thomas A Hooven
  - Division of Newborn Medicine, Department of Pediatrics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Elliott M Weiss
  - Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
  - Treuman Katz Center for Pediatric Bioethics and Palliative Care, Seattle Children's Research Institute, Seattle, WA, USA
- Nicholas R Carr
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- George T El-Ferzli
  - Division of Neonatology, Department of Pediatrics, Ohio State University, Nationwide Children's Hospital, Columbus, OH, USA
- Ravi M Patel
  - Division of Neonatology, Department of Pediatrics, Emory University School of Medicine and Children's Healthcare of Atlanta, Atlanta, GA, USA
- Kelsey A Simek
  - Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Antonio J Hernandez
  - Division of Neonatology, Department of Pediatrics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- James S Barry
  - Division of Neonatology, Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Ryan M McAdams
  - Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
41
Yang X, Huang K, Yang D, Zhao W, Zhou X. Biomedical Big Data Technologies, Applications, and Challenges for Precision Medicine: A Review. GLOBAL CHALLENGES (HOBOKEN, NJ) 2024; 8:2300163. [PMID: 38223896 PMCID: PMC10784210 DOI: 10.1002/gch2.202300163] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Revised: 09/20/2023] [Indexed: 01/16/2024]
Abstract
The explosive growth of biomedical Big Data presents both significant opportunities and challenges in the realm of knowledge discovery and translational applications within precision medicine. Efficient management, analysis, and interpretation of big data can pave the way for groundbreaking advancements in precision medicine. However, the unprecedented strides in the automated collection of large-scale molecular and clinical data have also introduced formidable challenges in terms of data analysis and interpretation, necessitating the development of novel computational approaches. Some potential challenges include the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues. This overview article focuses on the recent progress and breakthroughs in the application of big data within precision medicine. Key aspects are summarized, including content, data sources, technologies, tools, challenges, and existing gaps. Nine fields are discussed: data warehouse and data management; electronic medical records; biomedical imaging informatics; artificial intelligence-aided surgical design and surgery optimization; omics data; health monitoring data; knowledge graphs; public health informatics; and security and privacy.
Affiliation(s)
- Xue Yang
  - Department of Pancreatic Surgery and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Kexin Huang
  - Department of Pancreatic Surgery and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Dewei Yang
  - College of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400000, China
- Weiling Zhao
  - Center for Systems Medicine, School of Biomedical Informatics, UTHealth at Houston, Houston, TX 77030, USA
- Xiaobo Zhou
  - Center for Systems Medicine, School of Biomedical Informatics, UTHealth at Houston, Houston, TX 77030, USA
42
Chen JS, Marra KV, Robles-Holmes HK, Ly KB, Miller J, Wei G, Aguilar E, Bucher F, Ideguchi Y, Coyner AS, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy. OPHTHALMOLOGY SCIENCE 2024; 4:100338. [PMID: 37869029 PMCID: PMC10585474 DOI: 10.1016/j.xops.2023.100338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 05/01/2023] [Accepted: 05/19/2023] [Indexed: 10/24/2023]
Abstract
Objective To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images from oxygen-induced retinopathy (OIR) and demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity. Design Development and validation of a GAN. Subjects Three datasets containing 1084, 50, and 20 flat-mount mouse retina images, with various stains and ages at sacrifice, acquired from previously published manuscripts. Methods Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and an external test set, respectively. GAN-generated and manual vessel segmentations were then used as an input into a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control. Main Outcome Measures Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance). Results The mean Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out test set and external test set, respectively.
The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by GAN-generated and manual vessel segmentations. Conclusions GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as CTI, and have the potential to accelerate research in treatments for ischemic retinopathies. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
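The Dice coefficient used above to score segmentation agreement is simple to state; a minimal stdlib sketch, using small hypothetical flattened binary masks rather than real vessel maps:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Hypothetical flattened vessel masks (1 = vessel pixel)
gan_mask    = [1, 1, 0, 1, 0, 0, 1, 0]
manual_mask = [1, 0, 0, 1, 0, 1, 1, 0]
print(dice(gan_mask, manual_mask))  # 0.75
```

In practice the masks would be full-resolution 2D arrays, but the formula is identical after flattening.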
Affiliation(s)
- Jimmy S. Chen
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kyle V. Marra
  - Molecular Medicine, the Scripps Research Institute, San Diego, California
  - School of Medicine, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kristine B. Ly
  - College of Optometry, Pacific University, Forest Grove, Oregon
- Joseph Miller
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Guoqin Wei
  - Molecular Medicine, the Scripps Research Institute, San Diego, California
- Edith Aguilar
  - Molecular Medicine, the Scripps Research Institute, San Diego, California
- Felicitas Bucher
  - Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Yoichi Ideguchi
  - Molecular Medicine, the Scripps Research Institute, San Diego, California
- Aaron S. Coyner
  - Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Napoleone Ferrara
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- J. Peter Campbell
  - Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander
  - Molecular Medicine, the Scripps Research Institute, San Diego, California
- Eric Nudleman
  - Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
43
Soleimani M, Esmaili K, Rahdar A, Aminizadeh M, Cheraqpour K, Tabatabaei SA, Mirshahi R, Bibak Z, Mohammadi SF, Koganti R, Yousefi S, Djalilian AR. From the diagnosis of infectious keratitis to discriminating fungal subtypes; a deep learning-based study. Sci Rep 2023; 13:22200. [PMID: 38097753 PMCID: PMC10721811 DOI: 10.1038/s41598-023-49635-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Accepted: 12/10/2023] [Indexed: 12/17/2023] Open
Abstract
Infectious keratitis (IK) is a major cause of corneal opacity. IK can be caused by a variety of microorganisms. Typically, fungal ulcers carry the worst prognosis. Fungal cases can be subdivided into filamentous fungi and yeasts, which show fundamental differences. Delays in diagnosis or initiation of treatment increase the risk of ocular complications. Currently, the diagnosis of IK is mainly based on slit-lamp examination and corneal scrapings. Notably, these diagnostic methods have their drawbacks, including experience-dependency, tissue damage, and time consumption. Artificial intelligence (AI) is designed to mimic and enhance human decision-making. An increasing number of studies have utilized AI in the diagnosis of IK. In this paper, we propose to use AI to diagnose IK (model 1), differentiate between bacterial keratitis and fungal keratitis (model 2), and discriminate the filamentous type from the yeast type of fungal cases (model 3). Overall, 9329 slit-lamp photographs gathered from 977 patients were enrolled in the study. The models exhibited remarkable accuracy, with model 1 achieving 99.3%, model 2 at 84%, and model 3 reaching 77.5%. In conclusion, our study offers valuable support in the early identification of potential fungal and bacterial keratitis cases and helps enable timely management.
Affiliation(s)
- Mohammad Soleimani
  - Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Kosar Esmaili
  - Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Amir Rahdar
  - Department of Telecommunication, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
- Mehdi Aminizadeh
  - Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Kasra Cheraqpour
  - Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Ali Tabatabaei
  - Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Reza Mirshahi
  - Eye Research Center, The Five Senses Health Institute, Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Zahra Bibak
  - Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Farzad Mohammadi
  - Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Raghuram Koganti
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Siamak Yousefi
  - Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
  - Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
- Ali R Djalilian
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
  - Cornea Service, Stem Cell Therapy and Corneal Tissue Engineering Laboratory, Illinois Eye and Ear Infirmary, 1855 W. Taylor Street, M/C 648, Chicago, IL, 60612, USA
44
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. [PMID: 38053188 PMCID: PMC10699065 DOI: 10.1186/s40942-023-00502-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2023] [Accepted: 10/02/2023] [Indexed: 12/07/2023] Open
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power and associated computer-aided diagnosis, and medical imaging developments have created an environment where telemedicine is more accessible and capable than ever before, particularly in the field of ophthalmology. Ever-increasing global demand for ophthalmic services due to population growth and ageing together with insufficient supply of ophthalmologists requires new models of healthcare provision integrating telemedicine to meet present day challenges, with the recent COVID-19 pandemic providing the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present and future application of telemedicine within the field of ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology, its role in screening, community and hospital management of retinal disease, patient and clinician attitudes, and barriers to its adoption.
Affiliation(s)
- Jonathan Than
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Peng Y Sim
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Danson Muttuvelu
  - Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
  - MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
- Daniel Ferraz
  - D'Or Institute for Research and Education (IDOR), São Paulo, Brazil
  - Institute of Ophthalmology, University College London, London, UK
- Victor Koh
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Swan Kang
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Josef Huemer
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
  - Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria
45
Hanif A, Prajna NV, Lalitha P, NaPier E, Parker M, Steinkamp P, Keenan JD, Campbell JP, Song X, Redd TK. Assessing the Impact of Image Quality on Deep Learning Classification of Infectious Keratitis. OPHTHALMOLOGY SCIENCE 2023; 3:100331. [PMID: 37920421 PMCID: PMC10618822 DOI: 10.1016/j.xops.2023.100331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Revised: 04/13/2023] [Accepted: 05/08/2023] [Indexed: 11/04/2023]
Abstract
Objective To investigate the impact of corneal photograph quality on convolutional neural network (CNN) predictions. Design A CNN trained to classify bacterial and fungal keratitis was evaluated using photographs of ulcers labeled according to 5 corneal image quality parameters: eccentric gaze direction, abnormal eyelid position, over/under-exposure, inadequate focus, and malpositioned light reflection. Participants All eligible subjects with culture- and stain-proven bacterial and/or fungal ulcers presenting to Aravind Eye Hospital in Madurai, India, between January 1, 2021 and December 31, 2021. Methods Convolutional neural network classification performance was compared for each quality parameter, and gradient class activation heatmaps were generated to visualize the regions of highest influence on CNN predictions. Main Outcome Measures Areas under the receiver operating characteristic and precision-recall curves were calculated to quantify model performance. Bootstrapped confidence intervals were used for statistical comparisons. Logistic loss was calculated to measure individual prediction accuracy. Results The individual presence of either a light reflection or eyelids obscuring the corneal surface was associated with significantly higher CNN performance. No other quality parameter significantly influenced CNN performance. Qualitative review of gradient class activation heatmaps generally revealed the infiltrate as having the highest diagnostic relevance. Conclusions The CNN demonstrated expert-level performance regardless of image quality. Future studies may investigate the use of smartphone cameras and image sets with greater variance in image quality to further explore the influence of these parameters on model performance. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
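The logistic loss mentioned above scores each individual prediction; a minimal stdlib sketch with hypothetical labels and predicted probabilities (not data from the study):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean logistic (log) loss; lower is better, and a single
    confident wrong prediction is penalized heavily."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Hypothetical: 1 = fungal, 0 = bacterial; p = P(fungal)
print(round(log_loss([1, 0, 1], [0.9, 0.2, 0.6]), 3))  # 0.28
```

Because the loss is computed per image, it lets the authors compare prediction quality across individual photographs rather than only across whole test sets.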
Affiliation(s)
- Adam Hanif
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Erin NaPier
- John A. Burns School of Medicine, University of Hawai'i, Honolulu, Hawaii
- Maria Parker
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Peter Steinkamp
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Jeremy D. Keenan
- Francis I. Proctor Foundation, University of California, San Francisco, San Francisco, California
- J. Peter Campbell
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Xubo Song
- Department of Medical Informatics and Clinical Epidemiology and Program of Computer Science and Electrical Engineering, Oregon Health & Science University, Portland, Oregon
- Travis K. Redd
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon

46
Xu X, Jia Q, Yuan H, Qiu H, Dong Y, Xie W, Yao Z, Zhang J, Nie Z, Li X, Shi Y, Zou JY, Huang M, Zhuang J. A clinically applicable AI system for diagnosis of congenital heart diseases based on computed tomography images. Med Image Anal 2023; 90:102953. [PMID: 37734140 DOI: 10.1016/j.media.2023.102953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 08/22/2023] [Accepted: 09/01/2023] [Indexed: 09/23/2023]
Abstract
Congenital heart disease (CHD) is the most common type of birth defect. Without timely detection and treatment, approximately one-third of children with CHD die in infancy. However, because of the heart's complicated structure, early diagnosis of CHD and its subtypes is challenging, even for experienced radiologists. Here, we present an artificial intelligence (AI) system that matches the performance of human experts on the critical task of classifying 17 categories of CHD. We collected the first large CT dataset of its kind from three different CT machines, comprising more than 3750 CHD patients over 14 years. Experimental results demonstrate that the system achieves diagnostic accuracy (86.03%) comparable with that of junior cardiovascular radiologists (86.27%) at a World Health Organization-appointed research and cooperation center in China on most types of CHD, and obtains higher sensitivity (82.91%) than junior cardiovascular radiologists (76.18%). The accuracy of our AI system combined with senior radiologists (97.20%) is comparable to that of junior plus senior radiologists (97.16%), the current clinical routine. Our AI system can further provide 3D visualizations of the heart to senior radiologists for interpretation and flexible review, to surgeons for precise intuition of heart structures, and to clinicians for more accurate outcome prediction. We demonstrate the potential of our model to be integrated into current clinical practice to improve the diagnosis of CHD globally, especially in regions where experienced radiologists are scarce.
Affiliation(s)
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Yuhao Dong
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zhiqiang Nie
- Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region
- Yiyu Shi
- Computer Science and Engineering, University of Notre Dame, IN, 46656, USA
- James Y Zou
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China

47
Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. [PMID: 38012349 PMCID: PMC10682088 DOI: 10.1038/s41746-023-00941-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 10/05/2023] [Indexed: 11/29/2023] Open
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent advances in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences, and has significantly changed conventional clinical practice in medicine. Although some subfields of medicine, such as pediatrics, have been relatively slow to receive its benefits, related research in pediatrics has now accumulated to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology. Following the PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, describe the methodologies, including algorithmic developments, and outline the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus for AI in neonatology have been survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss their respective strengths and limitations. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
Affiliation(s)
- Elif Keles
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Ulas Bagci
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Northwestern University, Department of Biomedical Engineering, Chicago, IL, USA
- Department of Electrical and Computer Engineering, Chicago, IL, USA

48
Vandevenne MM, Favuzza E, Veta M, Lucenteforte E, Berendschot TT, Mencucci R, Nuijts RM, Virgili G, Dickman MM. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev 2023; 11:CD014911. [PMID: 37965960 PMCID: PMC10646985 DOI: 10.1002/14651858.cd014911.pub2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2023]
Abstract
BACKGROUND Keratoconus remains difficult to diagnose, especially in the early stages. It is a progressive disorder of the cornea that starts at a young age. Diagnosis is based on clinical examination and corneal imaging; though in the early stages, when there are no clinical signs, diagnosis depends on the interpretation of corneal imaging (e.g. topography and tomography) by trained cornea specialists. Using artificial intelligence (AI) to analyse the corneal images and detect cases of keratoconus could help prevent visual acuity loss and even corneal transplantation. However, a missed diagnosis in people seeking refractive surgery could lead to weakening of the cornea and keratoconus-like ectasia. There is a need for a reliable overview of the accuracy of AI for detecting keratoconus and the applicability of this automated method to the clinical setting.
OBJECTIVES To assess the diagnostic accuracy of artificial intelligence (AI) algorithms for detecting keratoconus in people presenting with refractive errors, especially those whose vision can no longer be fully corrected with glasses, those seeking corneal refractive surgery, and those suspected of having keratoconus. AI could help ophthalmologists, optometrists, and other eye care professionals to make decisions on referral to cornea specialists.
Secondary objectives To assess the following potential causes of heterogeneity in diagnostic performance across studies:
- Different AI algorithms (e.g. neural networks, decision trees, support vector machines)
- Index test methodology (preprocessing techniques, core AI method, and postprocessing techniques)
- Sources of input to train algorithms (topography and tomography images from Placido disc system, Scheimpflug system, slit-scanning system, or optical coherence tomography (OCT); number of training and testing cases/images; label/endpoint variable used for training)
- Study setting
- Study design
- Ethnicity, or geographic area as its proxy
- Different index test positivity criteria provided by the topography or tomography device
- Reference standard (topography or tomography; one or two cornea specialists)
- Definition of keratoconus
- Mean age of participants
- Recruitment of participants
- Severity of keratoconus (clinically manifest or subclinical)
SEARCH METHODS We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), Ovid MEDLINE, Ovid Embase, OpenGrey, the ISRCTN registry, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). There were no date or language restrictions in the electronic searches. We last searched the electronic databases on 29 November 2022.
SELECTION CRITERIA We included cross-sectional and diagnostic case-control studies that investigated AI for the diagnosis of keratoconus using topography, tomography, or both. We included studies that diagnosed manifest keratoconus, subclinical keratoconus, or both. The reference standard was the interpretation of topography or tomography images by at least two cornea specialists.
DATA COLLECTION AND ANALYSIS Two review authors independently extracted the study data and assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When an article contained multiple AI algorithms, we selected the algorithm with the highest Youden's index. We assessed the certainty of evidence using the GRADE approach.
MAIN RESULTS We included 63 studies, published between 1994 and 2022, that developed and investigated the accuracy of AI for the diagnosis of keratoconus. There were three different units of analysis in the studies: eyes, participants, and images. Forty-four studies analysed 23,771 eyes, four studies analysed 3843 participants, and 15 studies analysed 38,832 images. Fifty-four articles evaluated the detection of manifest keratoconus, defined as a cornea that showed any clinical sign of keratoconus. The accuracy of AI seems almost perfect, with a summary sensitivity of 98.6% (95% confidence interval (CI) 97.6% to 99.1%) and a summary specificity of 98.3% (95% CI 97.4% to 98.9%). However, accuracy varied across studies and the certainty of the evidence was low. Twenty-eight articles evaluated the detection of subclinical keratoconus, although the definition of subclinical varied. We grouped subclinical keratoconus, forme fruste, and very asymmetrical eyes together. The tests showed good accuracy, with a summary sensitivity of 90.0% (95% CI 84.5% to 93.8%) and a summary specificity of 95.5% (95% CI 91.9% to 97.5%). However, the certainty of the evidence was very low for sensitivity and low for specificity. In both groups, we graded most studies at high risk of bias, with high applicability concerns, in the domain of patient selection, since most were case-control studies. Moreover, we graded the certainty of evidence as low to very low due to selection bias, inconsistency, and imprecision. We could not explain the heterogeneity between the studies. The sensitivity analyses based on study design, AI algorithm, imaging technique (topography versus tomography), and data source (parameters versus images) showed no differences in the results.
AUTHORS' CONCLUSIONS AI appears to be a promising triage tool in ophthalmologic practice for diagnosing keratoconus. Test accuracy was very high for manifest keratoconus and slightly lower for subclinical keratoconus, indicating a higher chance of missing a diagnosis in people without clinical signs. This could lead to progression of keratoconus or an erroneous indication for refractive surgery, which would worsen the disease. We are unable to draw clear and reliable conclusions due to the high risk of bias, the unexplained heterogeneity of the results, and high applicability concerns, all of which reduced our confidence in the evidence. Greater standardization in future research would increase the quality of studies and improve comparability between studies.
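When an article reported several AI algorithms, the review selected the one with the highest Youden's index (J = sensitivity + specificity − 1). That selection rule can be sketched as follows; this is our own illustrative code, not the review authors', and the function name and signature are assumptions.

```python
import numpy as np

def youden_index(labels, scores, thresholds):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.

    labels: 0/1 reference-standard diagnoses; scores: algorithm outputs;
    a case is called positive when its score >= threshold.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    best_j, best_t = -1.0, None
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)   # true positive rate
        spec = tn / (tn + fp)   # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

Ranking competing algorithms by their best J, rather than by accuracy, avoids rewarding models that trade sensitivity for specificity (or vice versa) on imbalanced case-control samples.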
Affiliation(s)
- Magali MS Vandevenne
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Eleonora Favuzza
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Mitko Veta
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Ersilia Lucenteforte
- Department of Statistics, Computer Science and Applications «G. Parenti», University of Florence, Florence, Italy
- Tos TJM Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Rita Mencucci
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Rudy MMA Nuijts
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Gianni Virgili
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Queen's University Belfast, Belfast, UK
- Mor M Dickman
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands

49
Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892 DOI: 10.1016/j.preteyeres.2023.101208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 08/16/2023] [Accepted: 08/18/2023] [Indexed: 08/25/2023]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening remains limited by inter-examiner variability across screening modalities, the absence of local ROP screening protocols in some settings, a paucity of resources, and the increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detection of ROP and of its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) is also explored, enabling evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers that could reduce the need for invasive procedures and enhance diagnostic accuracy and treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic and imaging biomarkers with AI in ROP screening, where the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA

50
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visual impairment in children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions require specialised clinicians to interpret multimodal retinal imaging, so diagnosis and intervention are potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve the efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK