1
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024;13:2125-2149. [PMID: 38913289] [PMCID: PMC11246322] [DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographs. We highlight the use of various AI algorithms, including deep learning (DL) models, in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI-based interpretation of retinal images, benchmarked against clinical data and expert physicians, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). The substantial volume of clinical and imaging data generated by this research supports the incorporation of AI and DL into automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland.
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Tien Y Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
2
Shu Q, Pang J, Liu Z, Liang X, Chen M, Tao Z, Liu Q, Guo Y, Yang X, Ding J, Chen R, Wang S, Li W, Zhai G, Xu J, Li L. Artificial Intelligence for Early Detection of Pediatric Eye Diseases Using Mobile Photos. JAMA Netw Open 2024;7:e2425124. [PMID: 39106068] [DOI: 10.1001/jamanetworkopen.2024.25124]
Abstract
IMPORTANCE Identifying pediatric eye diseases at an early stage is a worldwide issue. Traditional screening procedures depend on hospitals and ophthalmologists, which are expensive and time-consuming. Using artificial intelligence (AI) to assess children's eye conditions from mobile photographs could enable convenient, early identification of eye disorders in a home setting. OBJECTIVE To develop an AI model to identify myopia, strabismus, and ptosis using mobile photographs. DESIGN, SETTING, AND PARTICIPANTS This cross-sectional study was conducted at the Department of Ophthalmology of Shanghai Ninth People's Hospital from October 1, 2022, to September 30, 2023, and included children who were diagnosed with myopia, strabismus, or ptosis. MAIN OUTCOMES AND MEASURES A deep learning-based model was developed to identify myopia, strabismus, and ptosis. Model performance was assessed using sensitivity, specificity, accuracy, area under the curve (AUC), positive predictive values (PPV), negative predictive values (NPV), positive likelihood ratios (P-LR), negative likelihood ratios (N-LR), and the F1-score. GradCAM++ was used to assess, visually and analytically, the contribution of each image region to the model's output. Sex and age subgroup analyses were performed to validate the model's generalizability. RESULTS A total of 1419 images obtained from 476 patients (225 female [47.27%]; 299 [62.82%] aged between 6 and 12 years) were used to build the model. Among them, 946 monocular images were used to identify myopia and ptosis, and 473 binocular images were used to identify strabismus. The model demonstrated good sensitivity in detecting myopia (0.84 [95% CI, 0.82-0.87]), strabismus (0.73 [95% CI, 0.70-0.77]), and ptosis (0.85 [95% CI, 0.82-0.87]). In the sex subgroup analysis, the model performed comparably in female and male children; performance did differ among age subgroups. CONCLUSIONS AND RELEVANCE In this cross-sectional study, the AI model demonstrated strong performance in accurately identifying myopia, strabismus, and ptosis using only smartphone images. These results suggest that such a model could facilitate the early detection of pediatric eye diseases in a convenient manner at home.
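The battery of metrics reported in this abstract (sensitivity, specificity, accuracy, PPV, NPV, likelihood ratios, F1-score) all derive from the same 2×2 confusion matrix. As a reminder of the definitions, a minimal Python sketch; the counts below are invented for illustration and are not the study's data:

```python
# Diagnostic metrics from a 2x2 confusion matrix (illustrative counts only).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)               # sensitivity (recall)
    spec = tn / (tn + fp)               # specificity
    ppv = tp / (tp + fp)                # positive predictive value (precision)
    npv = tn / (tn + fn)                # negative predictive value
    plr = sens / (1 - spec)             # positive likelihood ratio
    nlr = (1 - sens) / spec             # negative likelihood ratio
    f1 = 2 * ppv * sens / (ppv + sens)  # harmonic mean of PPV and sensitivity
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "plr": plr, "nlr": nlr, "f1": f1, "accuracy": acc}

# Hypothetical screening result: 84 true positives, 10 false positives,
# 16 false negatives, 90 true negatives.
m = diagnostic_metrics(tp=84, fp=10, fn=16, tn=90)
```

Confidence intervals such as the 0.84 [95% CI, 0.82-0.87] quoted above are then attached to these point estimates, e.g. by Wilson or bootstrap methods.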
Affiliation(s)
- Qin Shu
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Jiali Pang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Zijia Liu
- Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyi Liang
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Moxin Chen
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Zhuoran Tao
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Qianwen Liu
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Yonglin Guo
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Xuefeng Yang
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Jinru Ding
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Ruiyao Chen
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Sujing Wang
- Department of Epidemiology and Biostatistics, School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wenjing Li
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Guangtao Zhai
- Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jie Xu
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Lin Li
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
3
Shi XH, Ju L, Dong L, Zhang RH, Shao L, Yan YN, Wang YX, Fu XF, Chen YZ, Ge ZY, Wei WB. Deep Learning Models for the Screening of Cognitive Impairment Using Multimodal Fundus Images. Ophthalmol Retina 2024;8:666-677. [PMID: 38280426] [DOI: 10.1016/j.oret.2024.01.019]
Abstract
OBJECTIVE We aimed to develop a deep learning system capable of identifying subjects with cognitive impairment quickly and easily based on multimodal ocular images. DESIGN Cross-sectional study. SUBJECTS Participants of the Beijing Eye Study 2011 and patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. METHODS We trained and validated a deep learning algorithm to assess cognitive impairment using retrospectively collected data from the Beijing Eye Study 2011. Cognitive impairment was defined as a Mini-Mental State Examination score < 24. Based on fundus photographs and OCT images, we developed 5 models using the following image sets: macula-centered fundus photographs, optic disc-centered fundus photographs, fundus photographs of both fields, OCT images, and fundus photographs of both fields with OCT (multimodal). Model performance was evaluated and compared in an external validation data set collected from patients attending Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. MAIN OUTCOME MEASURES Area under the curve (AUC). RESULTS A total of 9424 retinal photographs and 4712 OCT images were used to develop the model. The external validation sets from each center included 1180 fundus photographs and 590 OCT images. Model comparison revealed that the multimodal model performed best, achieving an AUC of 0.820 in the internal validation set, 0.786 in external validation set 1, and 0.784 in external validation set 2. We evaluated the performance of the multimodal model in different sexes and age groups; there were no significant differences. Heatmap analysis showed that, to identify participants with cognitive impairment, the multimodal model used signals around the optic disc in fundus photographs and the retina and choroid around the macular and optic disc regions in OCT images. CONCLUSIONS Fundus photographs and OCT can provide valuable information on cognitive function. Multimodal models provide richer information than single-mode models. Deep learning algorithms based on multimodal retinal images may be capable of screening for cognitive impairment. This technique has potential value for broader implementation in community-based screening or clinic settings. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
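The AUC used as the main outcome measure here has a simple rank interpretation: it equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the normalized Mann-Whitney U statistic), with ties counted as one half. A brute-force Python sketch with invented scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as 0.5.

    O(n*m) pairwise comparison; fine for an illustration, though
    production code would sort and rank instead."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented model scores: three impaired (positive) and three healthy subjects.
a = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2])
```

An AUC of 0.820, as reported for the multimodal model, means a randomly chosen impaired participant outscores a randomly chosen unimpaired one 82% of the time.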
Affiliation(s)
- Xu Han Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lie Ju
- Beijing Airdoc Technology Co., Ltd., Beijing, China; Augmented Intelligence and Multimodal Analytics (AIM) for Health Lab, Faculty of Information Technology, Monash University, Clayton, Australia; Faculty of Engineering, Monash University, Clayton, Australia
- Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Rui Heng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Shao
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yan Ni Yan
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ya Xing Wang
- Beijing Ophthalmology and Visual Science Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China
- Xue Fei Fu
- Beijing Airdoc Technology Co., Ltd., Beijing, China
- Zong Yuan Ge
- Beijing Airdoc Technology Co., Ltd., Beijing, China; Augmented Intelligence and Multimodal Analytics (AIM) for Health Lab, Faculty of Information Technology, Monash University, Clayton, Australia; Faculty of Engineering, Monash University, Clayton, Australia
- Wen Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
4
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024;13:1841-1855. [PMID: 38734807] [PMCID: PMC11178755] [DOI: 10.1007/s40123-024-00958-3]
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
5
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024;8:633-645. [PMID: 38280425] [DOI: 10.1016/j.oret.2024.01.018]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield fundus photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape management of surgical VR diseases in the very near future and ultimately to improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California.
6
Yao Y, Wang Q, Yang J, Yan Y, Wei W. Prevalence and risk factors of retinal vein occlusion in individuals with diabetes: The Kailuan Eye Study. Diab Vasc Dis Res 2024;21:14791641241271899. [PMID: 39105547] [DOI: 10.1177/14791641241271899]
Abstract
PURPOSE To analyze the prevalence of retinal vein occlusion (RVO) in patients with and without diabetes in this population and to compare the associated factors. METHODS The community-based Kailuan Eye Study included 14,440 participants (9835 male, 4605 female) with a mean age of 54.0 ± 13.3 years (range, 20-110 years). Participants underwent systemic and ophthalmologic examination, and RVO was diagnosed on fundus photographs. RESULTS After matching for age and gender, we included 2767 patients with diabetes and 2767 without. The prevalence of RVO among patients with and without diabetes was 1.5% and 0.8%, respectively, and was higher in patients with diabetes in all age groups. Multifactorial regression analysis showed that only fasting blood glucose levels differed significantly between patients with RVO with or without diabetes. The occurrence of RVO in the group with diabetes was mainly associated with higher fasting glucose and systolic blood pressure; in the group without diabetes, RVO was mainly associated with higher diastolic blood pressure, higher body mass index, and lower low-density lipoprotein cholesterol levels. CONCLUSION We found that patients with diabetes have an increased risk of RVO. In addition to blood pressure control, we recommend educating patients with diabetes about RVO to help prevent its occurrence.
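The headline comparison in this abstract (RVO prevalence of 1.5% with diabetes vs. 0.8% without, in matched groups of 2767) can be turned into an odds ratio from the implied 2×2 counts. A short sketch; the case counts (41 and 22) are back-calculated approximations from the reported percentages, not figures reported by the study:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a / b) / (c / d)

n = 2767                     # matched group size from the abstract
rvo_dm, rvo_no = 41, 22      # approximate RVO counts implied by 1.5% and 0.8%

prev_dm = rvo_dm / n         # prevalence in the diabetes group
prev_no = rvo_no / n         # prevalence in the non-diabetes group
or_rvo = odds_ratio(rvo_dm, n - rvo_dm, rvo_no, n - rvo_no)
```

Under these assumed counts the odds ratio is roughly 1.9, consistent with the abstract's conclusion that diabetes is associated with increased risk of RVO.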
Affiliation(s)
- Yao Yao
- Beijing Tongren Eye Center, Beijing Ophthalmology and Visual Science Key Lab, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Qian Wang
- Beijing Tongren Eye Center, Beijing Ophthalmology and Visual Science Key Lab, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Jingyan Yang
- Beijing Tongren Eye Center, Beijing Ophthalmology and Visual Science Key Lab, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Yanni Yan
- Beijing Tongren Eye Center, Beijing Ophthalmology and Visual Science Key Lab, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Ophthalmology and Visual Science Key Lab, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Capital Medical University, Beijing Tongren Hospital, Beijing, China
7
Li J, Li J, Guo C, Chen Q, Liu G, Li L, Luo X, Wei H. Multicentric intelligent cardiotocography signal interpretation using deep semi-supervised domain adaptation via minimax entropy and domain invariance. Comput Methods Programs Biomed 2024;249:108145. [PMID: 38582038] [DOI: 10.1016/j.cmpb.2024.108145]
Abstract
BACKGROUND AND OBJECTIVE Obstetricians use cardiotocography (CTG), the continuous recording of fetal heart rate and uterine contraction, to assess fetal health status. Deep learning models for intelligent fetal monitoring trained on extensively labeled and identically distributed CTG records have achieved excellent performance. However, creation of these training sets requires excessive time and specialist labor for the collection and annotation of CTG signals. Previous research has demonstrated that multicenter studies can improve model performance; however, models trained on cross-domain data may not generalize well to target domains because of variance in distribution among datasets. Hence, this paper conducted a multicenter study with Deep Semi-Supervised Domain Adaptation (DSSDA) for intelligent interpretation of antenatal CTG signals. This approach helps to align cross-domain distributions and transfer knowledge from a label-rich source domain to a label-scarce target domain. METHODS We proposed a DSSDA framework that integrates Minimax Entropy and Domain Invariance (DSSDA-MMEDI) to reduce inter-domain gaps and thus achieve domain invariance. The networks were developed using GoogLeNet to extract features from CTG signals, with fully connected and softmax layers for classification. We designed a Dynamic Gradient-driven strategy based on Mutual Information (DGMI) to unify the losses from Minimax Entropy (MME), Domain Invariance (DI), and supervised cross-entropy during iterative learning. RESULTS We validated our DSSDA model on two datasets collected from collaborating healthcare institutions and mobile terminals as the source and target domains, containing 16,355 and 3,351 CTG signals, respectively. Compared to deep learning networks without DSSDA, DSSDA-MMEDI significantly improved sensitivity and F1-score by over 6% and outperformed other state-of-the-art DSSDA approaches for CTG signal interpretation. Ablation studies were performed to determine the unique contribution of each component of our DSSDA mechanism. CONCLUSIONS The proposed DSSDA-MMEDI is feasible and effective for alignment of cross-domain data and automated interpretation of multicentric antenatal CTG signals with minimal annotation cost.
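The minimax entropy (MME) component named in the title operates on the Shannon entropy of the model's softmax predictions over unlabeled target-domain data: the classifier is updated to maximize this entropy while the feature extractor is updated to minimize it (the sign flip is typically implemented with a gradient-reversal layer). A NumPy sketch of the entropy term alone, with invented logits:

```python
import numpy as np

def prediction_entropy(logits):
    """Mean Shannon entropy of softmax predictions over a batch.

    In MME this quantity is maximized w.r.t. the classifier and
    minimized w.r.t. the feature extractor on unlabeled target data."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Uniform logits give maximal entropy log(K); a confidently peaked
# prediction gives near-zero entropy.
uniform = prediction_entropy(np.zeros((4, 3)))        # = log(3)
peaked = prediction_entropy(np.array([[10.0, 0.0, 0.0]]))
```

High entropy means the classifier is uncertain on target samples; driving the feature extractor to reduce it pulls target features toward the class prototypes learned on the source domain.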
Affiliation(s)
- Jialu Li
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China
- Jun Li
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China; College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, China
- Chenshuo Guo
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China
- Qinqun Chen
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China
- Guiqing Liu
- The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, 510405, China
- Li Li
- Guangzhou Sunray Medical Apparatus Co. Ltd, Guangzhou, 510620, China; Tianhe District People's Hospital, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China
- Xiaomu Luo
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China
- Hang Wei
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China; Intelligent Chinese Medicine Research Institute, Guangzhou University of Chinese Medicine, Guangzhou, 510006, China.
8
Usmani E, Bacchi S, Zhang H, Guymer C, Kraczkowska A, Qinfeng Shi J, Gilhotra J, Chan WO. Prediction of vitreomacular traction syndrome outcomes with deep learning: A pilot study. Eur J Ophthalmol 2024:11206721241258253. [PMID: 38809664] [DOI: 10.1177/11206721241258253]
Abstract
PURPOSE To investigate the potential of an optical coherence tomography (OCT)-based deep learning (DL) model in the prediction of vitreomacular traction (VMT) syndrome outcomes. DESIGN A single-centre retrospective review. METHODS Records of consecutive adult patients attending the Royal Adelaide Hospital vitreoretinal clinic with evidence of spontaneous VMT were reviewed from January 2019 until May 2022. Patients with causes of cystoid macular oedema or secondary causes of VMT were excluded. OCT scans and outcome data obtained from patient records were used to train, test, and then validate the models. RESULTS Ninety-five patient files were identified from the OCT records (SPECTRALIS system; Heidelberg Engineering, Heidelberg, Germany). Approximately 25% of the patients spontaneously improved, 48% remained stable, and 27% had progression of their disease. The final longitudinal model predicted 'improved' or 'stable' disease with a positive predictive value of 0.72 and 0.79, respectively; the accuracy of the model was greater than 50%. CONCLUSIONS Deep learning models may be utilised in real-world settings to predict outcomes of VMT. This approach requires further investigation, as it may improve patient outcomes by aiding ophthalmologists in cross-checking management decisions and reducing the need for unnecessary interventions or delays.
Affiliation(s)
- Eiman Usmani
- Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia
- Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia
- Stephen Bacchi
- Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia
- Hao Zhang
- AMI Fusion Technology, University of Adelaide, Adelaide, Australia
- Chelsea Guymer
- Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia
- Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia
- Amber Kraczkowska
- Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia
- Javen Qinfeng Shi
- Institute of Machine Learning, University of Adelaide, Adelaide, Australia
- Jagjit Gilhotra
- Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia
- Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia
- Weng Onn Chan
- Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia
- Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia
Collapse
|
9
|
Musleh AM, AlRyalat SA, Abid MN, Salem Y, Hamila HM, Sallam AB. Diagnostic accuracy of artificial intelligence in detecting retinitis pigmentosa: A systematic review and meta-analysis. Surv Ophthalmol 2024; 69:411-417. [PMID: 38042377 DOI: 10.1016/j.survophthal.2023.11.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 11/20/2023] [Accepted: 11/27/2023] [Indexed: 12/04/2023]
Abstract
Retinitis pigmentosa (RP) is often undetected in its early stages. Artificial intelligence (AI) has emerged as a promising tool in medical diagnostics. Therefore, we conducted a systematic review and meta-analysis to evaluate the diagnostic accuracy of AI in detecting RP using various ophthalmic images. We conducted a systematic search on PubMed, Scopus, and Web of Science databases on December 31, 2022. We included studies in the English language that used any ophthalmic imaging modality, such as OCT or fundus photography, used any AI technologies, had at least an expert in ophthalmology as a reference standard, and proposed an AI algorithm able to distinguish between images with and without retinitis pigmentosa features. We considered the sensitivity, specificity, and area under the curve (AUC) as the main measures of accuracy. We included 14 studies in the qualitative analysis and 10 studies in the quantitative analysis. In total, the studies included in the meta-analysis dealt with 920,162 images. Overall, AI showed an excellent performance in detecting RP, with pooled sensitivity and specificity of 0.985 [95%CI: 0.948-0.996] and 0.993 [95%CI: 0.982-0.997], respectively. The area under the receiver operating characteristic (AUROC), using a random-effect model, was calculated to be 0.999 [95%CI: 0.998-1.000; P < 0.001]. The Zhou and Dendukuri I² test revealed a low level of heterogeneity between the studies, with [I2 = 19.94%] for sensitivity and [I2 = 21.07%] for specificity. The bivariate I² [20.33%] also suggested a low degree of heterogeneity. We found evidence supporting the accuracy of AI in the detection of RP, and the low heterogeneity between the studies lends further confidence to this finding.
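The pooled sensitivity and specificity above come from a bivariate random-effects model; as a rough illustration of the underlying idea only, a minimal fixed-effect pooling of logit-transformed sensitivities might look like this (per-study values and sample sizes are hypothetical, not taken from the review):

```python
import math

def pool_logit(props, ns):
    """Inverse-variance (fixed-effect) pooling of proportions on the logit
    scale -- a simplified stand-in for the bivariate random-effects model
    typically used in diagnostic meta-analyses."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))  # delta-method variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)     # weight = inverse variance
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform

# Hypothetical per-study sensitivities and sample sizes
sens = [0.97, 0.99, 0.985]
n = [200, 500, 350]
pooled = pool_logit(sens, n)
```

A real analysis would also model the sensitivity-specificity correlation and between-study variance, which this sketch omits.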
Collapse
Affiliation(s)
| | - Saif Aldeen AlRyalat
- Department of Ophthalmology, The University of Jordan, Amman, Jordan; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, USA.
| | - Mohammad Naim Abid
- Marka Specialty Hospital, Amman, Jordan; Valley Retina Institute, P.A., McAllen, TX, USA
| | - Yahia Salem
- Faculty of Medicine, The University of Jordan, Amman, Jordan
| | | | - Ahmed B Sallam
- Harvey and Bernice Jones Eye Institute at the University of Arkansas for Medical Sciences (UAMS), Little Rock, AR, USA
| |
Collapse
|
10
|
Zhou Y, Peng S, Wang H, Cai X, Wang Q. Review of Personalized Medicine and Pharmacogenomics of Anti-Cancer Compounds and Natural Products. Genes (Basel) 2024; 15:468. [PMID: 38674402 PMCID: PMC11049652 DOI: 10.3390/genes15040468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 05/11/2023] [Accepted: 05/13/2023] [Indexed: 04/28/2024] Open
Abstract
In recent years, the FDA has approved numerous anti-cancer drugs that are mutation-based for clinical use. These drugs have improved the precision of treatment and reduced adverse effects and side effects. Personalized therapy is a prominent and hot topic of current medicine and also represents the future direction of development. With the continuous advancements in gene sequencing and high-throughput screening, research and development strategies for personalized clinical drugs have developed rapidly. This review elaborates on recent personalized treatment strategies, which include artificial intelligence, multi-omics analysis, chemical proteomics, and computation-aided drug design. These technologies rely on the molecular classification of diseases, the global signaling network within organisms, and new models for all targets, which significantly support the development of personalized medicine. Meanwhile, we summarize chemical drugs, such as lorlatinib and osimertinib, as well as natural products, that deliver personalized therapeutic effects based on genetic mutations. This review also highlights potential challenges in interpreting genetic mutations and combining drugs, while providing new ideas for the development of personalized medicine and pharmacogenomics in cancer study.
Collapse
Affiliation(s)
- Yalan Zhou
- Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; (Y.Z.); (S.P.); (H.W.)
| | - Siqi Peng
- Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; (Y.Z.); (S.P.); (H.W.)
| | - Huizhen Wang
- Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; (Y.Z.); (S.P.); (H.W.)
| | - Xinyin Cai
- Shanghai R&D Centre for Standardization of Chinese Medicines, Shanghai 202103, China
| | - Qingzhong Wang
- Institute of Chinese Materia Medica, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; (Y.Z.); (S.P.); (H.W.)
| |
Collapse
|
11
|
Zhang R, Dong L, Fu X, Hua L, Zhou W, Li H, Wu H, Yu C, Li Y, Shi X, Ou Y, Zhang B, Wang B, Ma Z, Luo Y, Yang M, Chang X, Wang Z, Wei W. Trends in the Prevalence of Common Retinal and Optic Nerve Diseases in China: An Artificial Intelligence Based National Screening. Transl Vis Sci Technol 2024; 13:28. [PMID: 38648051 PMCID: PMC11044835 DOI: 10.1167/tvst.13.4.28] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Accepted: 03/07/2024] [Indexed: 04/25/2024] Open
Abstract
Purpose Retinal and optic nerve diseases have become the primary cause of irreversible vision loss and blindness. However, there is still a lack of thorough evaluation regarding their prevalence in China. Methods This artificial intelligence-based national screening study applied a previously developed deep learning algorithm, named the Retinal Artificial Intelligence Diagnosis System (RAIDS). De-identified personal medical records from January 2019 to December 2021 were extracted from 65 examination centers in 19 provinces of China. Crude prevalence and age-sex-adjusted prevalence were calculated by mapping to the standard population in the seventh national census. Results In 2021, adjusted referral possible glaucoma (63.29, 95% confidence interval [CI] = 57.12-68.90 cases per 1000), epiretinal macular membrane (21.84, 95% CI = 15.64-29.22), age-related macular degeneration (13.93, 95% CI = 11.09-17.17), and diabetic retinopathy (11.33, 95% CI = 8.89-13.77) ranked the highest among 10 diseases. Female participants had significantly higher adjusted prevalence of pathologic myopia, yet a lower adjusted prevalence of diabetic retinopathy, referral possible glaucoma, and hypertensive retinopathy than male participants. From 2019 to 2021, the adjusted prevalence of retinal vein occlusion (0.99, 95% CI = 0.73-1.26 to 1.88, 95% CI = 1.42-2.44), macular hole (0.59, 95% CI = 0.41-0.82 to 1.12, 95% CI = 0.76-1.51), and hypertensive retinopathy (0.53, 95% CI = 0.40-0.67 to 0.77, 95% CI = 0.60-0.95) significantly increased. The prevalence of diabetic retinopathy in participants under 50 years old also significantly increased. Conclusions Retinal and optic nerve diseases are an important public health concern in China. Further well-conceived epidemiological studies are required to validate the observed increased prevalence of diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, and macular hole nationwide.
Translational Relevance This artificial intelligence system can be a potential tool to monitor the prevalence of major retinal and optic nerve diseases over a wide geographic area.
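The age-sex-adjusted prevalences above are obtained by direct standardization against census strata. A minimal sketch of that weighting (stratum prevalences and census weights below are hypothetical, not the study's figures):

```python
def direct_standardized_prevalence(stratum_prev, standard_counts):
    """Direct standardization: weight stratum-specific prevalences by a
    standard population's stratum sizes (e.g., census age-sex strata)."""
    total = sum(standard_counts)
    return sum(p * n for p, n in zip(stratum_prev, standard_counts)) / total

# Hypothetical prevalences per 1000 in three age strata, with census weights
prev = [5.0, 12.0, 40.0]
weights = [300, 500, 200]
adj = direct_standardized_prevalence(prev, weights)  # per 1000, standardized
```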
Collapse
Affiliation(s)
- Ruiheng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuefei Fu
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Lin Hua
- School of Biomedical Engineering, Capital Medical University, Beijing, China
| | - Wenda Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Heyan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Haotian Wu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Chuyao Yu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yitong Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuhan Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yangjie Ou
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Bing Zhang
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Zhiqiang Ma
- iKang Guobin Healthcare Group Co., Ltd, Beijing, China
| | - Yuan Luo
- iKang Guobin Healthcare Group Co., Ltd, Beijing, China
| | - Meng Yang
- iKang Guobin Healthcare Group Co., Ltd, Beijing, China
| | | | - Zhaohui Wang
- iKang Guobin Healthcare Group Co., Ltd, Beijing, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
12
|
Lou B, Hu YL, Jiang ZH. Predictive Value of Combined HbA1c and Neutrophil-to-Lymphocyte Ratio for Diabetic Peripheral Neuropathy in Type 2 Diabetes. Med Sci Monit 2024; 30:e942509. [PMID: 38561932 PMCID: PMC10998473 DOI: 10.12659/msm.942509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Accepted: 01/24/2024] [Indexed: 04/04/2024] Open
Abstract
BACKGROUND Diabetic peripheral neuropathy (DPN) is a prevalent complication affecting over 60% of type 2 diabetes patients. Early diagnosis is challenging, leading to irreversible impacts on quality of life. This study explores the predictive value of combining HbA1c and Neutrophil-to-Lymphocyte Ratio (NLR) for early DPN detection. MATERIAL AND METHODS An observational study was conducted at the First People's Hospital of Linping District, Hangzhou, spanning from May 2019 to July 2020. Data on sex, age, and biochemical measurements were collected from electronic medical records and analyzed. Employing multivariate logistic regression analysis, we sought to identify the factors influencing the development of DPN. To assess the predictive value of individual and combined testing for DPN, a receiver operating characteristic (ROC) curve was plotted. The data analysis was executed using R software (Version: 4.1.0). RESULTS The univariate and multivariate logistic regression analyses identified the level of glycated hemoglobin (HbA1c) (OR=1.94, 95% CI: 1.27-3.14) and neutrophil-to-lymphocyte ratio (NLR) (OR=4.60, 95% CI: 1.15-22.62, P=0.04) as significant risk factors for the development of DPN. Receiver operating characteristic (ROC) curve analysis demonstrated that HbA1c, NLR, and their combined detection exhibited high sensitivity in predicting the development of DPN (71.60%, 90.00%, and 97.2%, respectively), with moderate specificity (63.8%, 45.00%, and 50.00%, respectively). The area under the curve (AUC) for these predictors was 0.703, 0.661, and 0.733, respectively. CONCLUSIONS HbA1c and NLR emerge as noteworthy risk indicators associated with the manifestation of DPN in patients with type 2 diabetes. The combined detection of HbA1c and NLR exhibits a heightened predictive value for the development of DPN.
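The AUCs reported above summarize ROC curves; empirically, an AUC equals the probability that a randomly chosen case outranks a randomly chosen control on the risk score (the Mann-Whitney interpretation). A minimal sketch, with hypothetical combined risk scores rather than the study's data:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """Empirical ROC AUC: fraction of (positive, negative) pairs in which
    the positive case has the higher score, counting ties as 0.5.
    Equivalent to the normalized Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical combined risk scores (e.g., a logistic combination of HbA1c and NLR)
dpn = [0.81, 0.66, 0.74, 0.58]      # patients who developed DPN
no_dpn = [0.42, 0.55, 0.61, 0.37]   # patients who did not
auc = auc_mann_whitney(dpn, no_dpn)
```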
Collapse
|
13
|
Kalaw FGP, Cavichini M, Zhang J, Wen B, Lin AC, Heinke A, Nguyen T, An C, Bartsch DUG, Cheng L, Freeman WR. Ultra-wide field and new wide field composite retinal image registration with AI-enabled pipeline and 3D distortion correction algorithm. Eye (Lond) 2024; 38:1189-1195. [PMID: 38114568 PMCID: PMC11009222 DOI: 10.1038/s41433-023-02868-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 11/07/2023] [Accepted: 11/22/2023] [Indexed: 12/21/2023] Open
Abstract
PURPOSE This study aimed to compare a new Artificial Intelligence (AI) method to conventional mathematical warping in accurately overlaying peripheral retinal vessels from two different imaging devices: confocal scanning laser ophthalmoscope (cSLO) wide-field images and SLO ultra-wide field images. METHODS Images were captured using the Heidelberg Spectralis 55-degree field-of-view and Optos ultra-wide field. The conventional mathematical warping was performed using Random Sample Consensus-Sample and Consensus sets (RANSAC-SC). This was compared to an AI alignment algorithm based on a one-way forward registration procedure consisting of full Convolutional Neural Networks (CNNs) with Outlier Rejection (OR CNN), as well as an iterative 3D camera pose optimization process (OR CNN + Distortion Correction [DC]). Images were provided in a checkerboard pattern, and peripheral vessels were graded in four quadrants based on alignment to the adjacent box. RESULTS A total of 660 boxes were analysed from 55 eyes. Dice scores were compared between the three methods (RANSAC-SC/OR CNN/OR CNN + DC): 0.3341/0.4665/0.4784 for fold 1-2 and 0.3315/0.4494/0.4596 for fold 2-1 in composite images. The images composed using the OR CNN + DC have a median rating of 4 (out of 5) versus 2 using RANSAC-SC. The odds of getting a higher grading level are 4.8 times higher using our OR CNN + DC than RANSAC-SC (p < 0.0001). CONCLUSION Peripheral retinal vessel alignment performed better using our AI algorithm than RANSAC-SC. This may help improve co-localization of retinal anatomy and pathology.
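The Dice scores used above to compare alignment methods measure overlap between binary vessel masks, 2|A∩B| / (|A|+|B|). A minimal sketch on toy flattened masks (illustrative only, not the study's images):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Toy flattened vessel masks: 2 overlapping pixels, 3 foreground pixels each
a = [1, 1, 0, 1, 0, 0]
b = [1, 0, 0, 1, 1, 0]
score = dice(a, b)  # 2*2 / (3+3) = 0.666...
```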
Collapse
Affiliation(s)
- Fritz Gerald P Kalaw
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Melina Cavichini
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Junkang Zhang
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Bo Wen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Andrew C Lin
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Anna Heinke
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Truong Nguyen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Cheolhong An
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | | | - Lingyun Cheng
- Jacobs Retina Center, University of California, San Diego, CA, USA
| | - William R Freeman
- Jacobs Retina Center, University of California, San Diego, CA, USA.
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA.
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA.
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA.
| |
Collapse
|
14
|
Parmar UPS, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Artificial Intelligence (AI) for Early Diagnosis of Retinal Diseases. MEDICINA (KAUNAS, LITHUANIA) 2024; 60:527. [PMID: 38674173 PMCID: PMC11052176 DOI: 10.3390/medicina60040527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Revised: 03/12/2024] [Accepted: 03/21/2024] [Indexed: 04/28/2024]
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in the field of ophthalmology, revolutionizing disease diagnosis and management. This paper provides a comprehensive overview of AI applications in various retinal diseases, highlighting its potential to enhance screening efficiency, facilitate early diagnosis, and improve patient outcomes. Herein, we elucidate the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, underscoring the significance of AI-driven solutions in addressing the complexity and variability of retinal diseases. Furthermore, we delve into the specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best vitelliform macular dystrophy, and sickle cell retinopathy. We focus on the current landscape of AI technologies, including various AI models, their performance metrics, and clinical implications. Furthermore, we aim to address challenges and pitfalls associated with the integration of AI in clinical practice, including the "black box phenomenon", biases in data representation, and limitations in comprehensive patient assessment. In conclusion, this review emphasizes the collaborative role of AI alongside healthcare professionals, advocating for a synergistic approach to healthcare delivery. It highlights the importance of leveraging AI to augment, rather than replace, human expertise, thereby maximizing its potential to revolutionize healthcare delivery, mitigate healthcare disparities, and improve patient outcomes in the evolving landscape of medicine.
Collapse
Affiliation(s)
| | - Pier Luigi Surico
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
| | - Rohan Bir Singh
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
| | - Francesco Romano
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
| | - Carlo Salati
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
| | - Leopoldo Spadea
- Eye Clinic, Policlinico Umberto I, “Sapienza” University of Rome, 00142 Rome, Italy
| | - Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
| | - Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
| | - Tommaso Mori
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Department of Ophthalmology, University of California San Diego, La Jolla, CA 92122, USA
| | - Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
| |
Collapse
|
15
|
Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024; 12:e52073. [PMID: 38506918 PMCID: PMC10993141 DOI: 10.2196/52073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 10/12/2023] [Accepted: 01/30/2024] [Indexed: 03/21/2024] Open
Abstract
BACKGROUND Generative artificial intelligence tools and applications (GenAI) are being increasingly used in health care. Physicians, specialists, and other providers have started primarily using GenAI as an aid or tool to gather knowledge, provide information, train, or generate suggestive dialogue between physicians and patients or between physicians and patients' families or friends. However, unless the use of GenAI is oriented to be helpful in clinical service encounters that can improve the accuracy of diagnosis, treatment, and patient outcomes, the expected potential will not be achieved. As adoption continues, it is essential to validate the effectiveness of the infusion of GenAI as an intelligent technology in service encounters to understand the gap in actual clinical service use of GenAI. OBJECTIVE This study synthesizes preliminary evidence on how GenAI assists, guides, and automates clinical service rendering and encounters in health care. The review scope was limited to articles published in peer-reviewed medical journals. METHODS We screened and selected 0.38% (161/42,459) of articles published between January 1, 2020, and May 31, 2023, identified from PubMed. We followed the protocols outlined in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select highly relevant studies with at least 1 element on clinical use, evaluation, and validation to provide evidence of GenAI use in clinical services. The articles were classified based on their relevance to clinical service functions or activities using the descriptive and analytical information presented in the articles. RESULTS Of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%) in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%). The literature synthesis in this study suggests that GenAI is mainly used for diagnostic processes, improvement of diagnosis accuracy, and screening and diagnostic purposes using knowledge access. Although this solves the problem of knowledge access and may improve diagnostic accuracy, it is not yet oriented toward the higher value creation expected in health care. CONCLUSIONS GenAI currently informs, rather than assists or automates, clinical service functions in health care. There is potential in clinical services, but it has yet to be actualized for GenAI. More clinical service-level evidence that GenAI is used to streamline some functions, or provides more automated help than only information retrieval, is needed. To transform health care as purported, more studies are needed in which GenAI applications automate and guide human-performed services, keeping pace with the optimism that forward-thinking health care organizations will take advantage of GenAI.
Collapse
Affiliation(s)
- Dobin Yim
- Loyola University, Maryland, MD, United States
| | - Jiban Khuntia
- University of Colorado Denver, Denver, CO, United States
| | | | - Arlen Meyers
- University of Colorado Denver, Denver, CO, United States
| |
Collapse
|
16
|
Xu Y, Jiang Z, Ting DSW, Kow AWC, Bello F, Car J, Tham YC, Wong TY. Medical education and physician training in the era of artificial intelligence. Singapore Med J 2024; 65:159-166. [PMID: 38527300 PMCID: PMC11060639 DOI: 10.4103/singaporemedj.smj-2023-203] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2023] [Accepted: 02/08/2024] [Indexed: 03/27/2024]
Abstract
ABSTRACT With the rise of generative artificial intelligence (AI) and AI-powered chatbots, the landscape of medicine and healthcare is on the brink of significant transformation. This perspective delves into the prospective influence of AI on medical education, residency training and the continuing education of attending physicians or consultants. We begin by highlighting the constraints of the current education model: limited faculty, uniformity amidst burgeoning medical knowledge, and the limitations of 'traditional' linear knowledge acquisition. We introduce 'AI-assisted' and 'AI-integrated' paradigms for medical education and physician training, targeting a more universal, accessible, high-quality and interconnected educational journey. We differentiate between essential knowledge for all physicians, specialised insights for clinician-scientists and mastery-level proficiency for clinician-computer scientists. With the transformative potential of AI in healthcare and service delivery, it is poised to reshape the pedagogy of medical education and residency training.
Collapse
Affiliation(s)
- Yueyuan Xu
- Tsinghua Medicine, School of Medicine, Tsinghua University, Beijing, China
| | - Zehua Jiang
- Tsinghua Medicine, School of Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
| | - Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Byers Eye Institute, Stanford University, Palo Alto, CA, USA
| | - Alfred Wei Chieh Kow
- Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Fernando Bello
- Technology Enhanced Learning and Innovation Department, Duke-NUS Medical School, National University of Singapore, Singapore
| | - Josip Car
- Centre for Population Health Sciences, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Eye Academic Clinical Program, Duke-NUS Medical School, Singapore
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tien Yin Wong
- Tsinghua Medicine, School of Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| |
Collapse
|
17
|
Liu Y, Xie H, Zhao X, Tang J, Yu Z, Wu Z, Tian R, Chen Y, Chen M, Ntentakis DP, Du Y, Chen T, Hu Y, Zhang S, Lei B, Zhang G. Automated detection of nine infantile fundus diseases and conditions in retinal images using a deep learning system. EPMA J 2024; 15:39-51. [PMID: 38463622 PMCID: PMC10923762 DOI: 10.1007/s13167-024-00350-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 01/21/2024] [Indexed: 03/12/2024]
Abstract
Purpose We developed an Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions to satisfy urgent needs of ophthalmologists. Methods We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. The IRIDS employed a five-fold cross-validation approach to generate the classification results. Results Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC) with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. In comparison with ophthalmologists, IRIDS showed promising results, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities.
Conclusions IRIDS accurately identifies nine infantile fundus diseases and conditions. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thus preventing severe complications. IRIDS serves as an example of artificial intelligence integration into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM / 3PM) in the treatment of infantile fundus diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
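IRIDS is evaluated with Cohen's kappa alongside accuracy and AUC. As a quick illustration of that agreement metric (the toy labels below are invented for the sketch, not IRIDS data), kappa can be computed from paired label sequences:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    # Observed agreement minus chance agreement, normalized by (1 - chance).
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    expected = sum(true_counts[c] * pred_counts.get(c, 0)
                   for c in true_counts) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy multi-class labels (hypothetical, for illustration only)
y_true = ["ROP", "RB", "normal", "normal", "RP", "ROP"]
y_pred = ["ROP", "RB", "normal", "ROP", "RP", "ROP"]
print(round(cohens_kappa(y_true, y_pred), 3))  # -> 0.769
```

A kappa near 0.94, as reported for IRIDS, indicates agreement far beyond what class frequencies alone would produce by chance.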
Affiliation(s)
- Yaling Liu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Xinyu Zhao
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Jiannan Tang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhen Yu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhenquan Wu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Ruyin Tian
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Yi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Miaohong Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Dimitrios P. Ntentakis
- Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA USA
| | - Yueshanyi Du
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Tingyi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Yarou Hu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Sifan Zhang
- Guizhou Medical University, Guiyang, Guizhou China
- Southern University of Science and Technology School of Medicine, Shenzhen, China
| | - Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
18
Gu C, Wang Y, Jiang Y, Xu F, Wang S, Liu R, Yuan W, Abudureyimu N, Wang Y, Lu Y, Li X, Wu T, Dong L, Chen Y, Wang B, Zhang Y, Wei WB, Qiu Q, Zheng Z, Liu D, Chen J. Application of artificial intelligence system for screening multiple fundus diseases in Chinese primary healthcare settings: a real-world, multicentre and cross-sectional study of 4795 cases. Br J Ophthalmol 2024; 108:424-431. [PMID: 36878715 PMCID: PMC10894824 DOI: 10.1136/bjo-2022-322940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 02/19/2023] [Indexed: 03/08/2023]
Abstract
BACKGROUND/AIMS This study evaluates the performance of the Airdoc retinal artificial intelligence system (ARAS) for detecting multiple fundus diseases in real-world scenarios in primary healthcare settings and investigates the fundus disease spectrum based on ARAS. METHODS This real-world, multicentre, cross-sectional study was conducted in Shanghai and Xinjiang, China. Six primary healthcare settings were included in this study. Colour fundus photographs were taken and graded by ARAS and retinal specialists. The performance of ARAS was described by its accuracy, sensitivity, specificity and positive and negative predictive values. The spectrum of fundus diseases in primary healthcare settings was also investigated. RESULTS A total of 4795 participants were included. The median age was 57.0 (IQR 39.0-66.0) years, and 3175 (66.2%) participants were female. The accuracy, specificity and negative predictive value of ARAS for detecting normal fundus and 14 retinal abnormalities were high, whereas the sensitivity and positive predictive value varied in detecting different abnormalities. The proportion of retinal drusen, pathological myopia and glaucomatous optic neuropathy was significantly higher in Shanghai than in Xinjiang. Moreover, the percentages of referable diabetic retinopathy, retinal vein occlusion and macular oedema in middle-aged and elderly people in Xinjiang were significantly higher than in Shanghai. CONCLUSION This study demonstrated the dependability of ARAS for detecting multiple retinal diseases in primary healthcare settings. Implementing the AI-assisted fundus disease screening system in primary healthcare settings might be beneficial in reducing regional disparities in medical resources. However, the ARAS algorithm must be improved to achieve better performance. TRIAL REGISTRATION NUMBER NCT04592068.
Affiliation(s)
- Chufeng Gu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
| | - Yujie Wang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
| | - Yan Jiang
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Feiping Xu
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Shasha Wang
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Rui Liu
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Wen Yuan
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Nurbiyimu Abudureyimu
- Department of Ophthalmology, Bachu County Traditional Chinese Medicine Hospital of Kashgar, Xinjiang, China
| | - Ying Wang
- Department of Ophthalmology, Bachu Country People's Hospital of Kashgar, Xinjiang, China
| | - Yulan Lu
- Department of Ophthalmology, Linfen Community Health Service Center of Jing'an District, Shanghai, China
| | - Xiaolong Li
- Department of Ophthalmology, Pengpu New Village Community Health Service Center of Jing'an District, Shanghai, China
| | - Tao Wu
- Department of Ophthalmology, Pengpu Town Community Health Service Center of Jing'an District, Shanghai, China
| | - Li Dong
- Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd, Beijing, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd, Beijing, China
| | | | - Wen Bin Wei
- Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
| | - Qinghua Qiu
- Department of Ophthalmology, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Zhi Zheng
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
| | - Deng Liu
- Bachu Country People's Hospital of Kashgar, Xinjiang, China
- Shanghai No. 3 Rehabilitation Hospital, Shanghai, China
| | - Jili Chen
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
19
Skevas C, de Olaguer NP, Lleó A, Thiwa D, Schroeter U, Lopes IV, Mautone L, Linke SJ, Spitzer MS, Yap D, Xiao D. Implementing and evaluating a fully functional AI-enabled model for chronic eye disease screening in a real clinical environment. BMC Ophthalmol 2024; 24:51. [PMID: 38302908 PMCID: PMC10832120 DOI: 10.1186/s12886-024-03306-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 01/16/2024] [Indexed: 02/03/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to increase the affordability and accessibility of eye disease screening, especially with the recent approval of AI-based diabetic retinopathy (DR) screening programs in several countries. METHODS This study investigated the performance, feasibility, and user experience of a seamless hardware and software solution for screening chronic eye diseases in a real-world clinical environment in Germany. The solution integrated AI grading for DR, age-related macular degeneration (AMD), and glaucoma, along with specialist auditing and patient referral decision. The study comprised several components: (1) evaluating the entire system solution from recruitment to eye image capture and AI grading for DR, AMD, and glaucoma; (2) comparing specialist's grading results with AI grading results; (3) gathering user feedback on the solution. RESULTS A total of 231 patients were recruited, and their consent forms were obtained. The sensitivity, specificity, and area under the curve for DR grading were 100.00%, 80.10%, and 90.00%, respectively. For AMD grading, the values were 90.91%, 78.79%, and 85.00%, and for glaucoma grading, the values were 93.26%, 76.76%, and 85.00%. The analysis of all false positive cases across the three diseases and their comparison with the final referral decisions revealed that only 17 patients were falsely referred among the 231 patients. The efficacy analysis of the system demonstrated the effectiveness of the AI grading process in the study's testing environment. Clinical staff involved in using the system provided positive feedback on the disease screening process, particularly praising the seamless workflow from patient registration to image transmission and obtaining the final result. Results from a questionnaire completed by 12 participants indicated that most found the system easy, quick, and highly satisfactory. 
The study also revealed room for improvement in the AMD model, suggesting the need to enhance its training data. Furthermore, the performance of the glaucoma model grading could be improved by incorporating additional measures such as intraocular pressure. CONCLUSIONS The implementation of the AI-based approach for screening three chronic eye diseases proved effective in real-world settings, earning positive feedback on the usability of the integrated platform from both the screening staff and auditors. The auditing function has proven valuable for obtaining efficient second opinions from experts, pointing to its potential for enhancing remote screening capabilities. TRIAL REGISTRATION Institutional Review Board of the Hamburg Medical Chamber (Ethik-Kommission der Ärztekammer Hamburg): 2021-10574-BO-ff.
Affiliation(s)
- Christos Skevas
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
| | | | - Albert Lleó
- TeleMedC GmbH, Raboisen 32, 20095, Hamburg, Germany
| | - David Thiwa
- Department of Otorhinolaryngology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
| | - Ulrike Schroeter
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
| | - Inês Valente Lopes
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany.
| | - Luca Mautone
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
| | - Stephan J Linke
- Zentrum Sehestaerke, Martinistraße 64, 20251, Hamburg, Germany
| | - Martin Stephan Spitzer
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
| | - Daniel Yap
- TeleMedC Pty Ltd, 61 Ubi Avenue 1, #06-11 UBPoint, Singapore, 40894, Singapore
| | - Di Xiao
- TeleMedC Pty Ltd, Brisbane Technology Park, Level 2, 1 Westlink Court, Darra, QLD 4076, Australia
20
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607 PMCID: PMC10784504 DOI: 10.1038/s41746-023-00991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 12/11/2023] [Indexed: 01/13/2024] Open
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aims to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0% ~ 86.9%), 72.9% (95% CI, 70.3% ~ 75.6%) and 85.5% (95% CI, 83.5% ~ 87.4%) in the test group, control group and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% ~ 14.9%), a statistically significant gain (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2% ~ 100.0%) and comparable specificities (90.8% ~ 98.7%) compared with the control group (sensitivities, 50.0% ~ 100.0%; specificities, 96.7% ~ 99.8%). The DLS group presented similar performance to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% ~ 100.0%; specificity, 89.0% ~ 98.0%). The proposed DLS provided a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, especially by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
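The abstract quotes a 95% CI next to each consistency figure. A minimal sketch of how such an interval is commonly obtained for a proportion (normal-approximation method; the trial's exact CI method is not stated here, and using the 1493 images as the denominator is an assumption):

```python
import math

def approx_ci95(p, n, z=1.96):
    # Normal-approximation (Wald) 95% CI for a proportion p observed on n items.
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = approx_ci95(0.849, 1493)  # test-group consistency over 1493 images
print(f"{lo:.1%} ~ {hi:.1%}")      # close to the reported 83.0% ~ 86.9%
```

Exact or bootstrap intervals differ slightly in the tails, which likely accounts for the small gap between this sketch and the published bounds.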
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Ming Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Fang Lu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Jingxue Ma
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Yuhua Hao
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Xiaorong Li
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Bojie Hu
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Lijun Shen
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Jianbo Mao
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Xixi He
- School of Information Science and Technology, North China University of Technology, Beijing, China
- Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
| | - Hao Wang
- Visionary Intelligence Ltd., Beijing, China
| | | | - Xirong Li
- MoE Key Lab of DEKE, Renmin University of China, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China.
21
Tsai MC, Yen HH, Tsai HY, Huang YK, Luo YS, Kornelius E, Sung WW, Lin CC, Tseng MH, Wang CC. Artificial intelligence system for the detection of Barrett's esophagus. World J Gastroenterol 2023; 29:6198-6207. [PMID: 38186865 PMCID: PMC10768395 DOI: 10.3748/wjg.v29.i48.6198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Revised: 11/13/2023] [Accepted: 12/12/2023] [Indexed: 12/27/2023] Open
Abstract
BACKGROUND Barrett's esophagus (BE), which has increased in prevalence worldwide, is a precursor for esophageal adenocarcinoma. Although there is a gap in the detection rates between endoscopic BE and histological BE in current research, we trained our artificial intelligence (AI) system with images of endoscopic BE and tested the system with images of histological BE. AIM To assess whether an AI system can aid in the detection of BE in our setting. METHODS Endoscopic narrow-band imaging (NBI) was collected from Chung Shan Medical University Hospital and Changhua Christian Hospital, resulting in 724 cases, with 86 patients having pathological results. Three senior endoscopists, who were instructing physicians of the Digestive Endoscopy Society of Taiwan, independently annotated the images in the development set to determine whether each image was classified as an endoscopic BE. The test set consisted of 160 endoscopic images of 86 cases with histological results. RESULTS Six pre-trained models were compared, and EfficientNetV2B2 (accuracy [ACC]: 0.8) was selected as the backbone architecture for further evaluation due to better ACC results. In the final test, the AI system correctly identified 66 of 70 cases of BE and 85 of 90 cases without BE, resulting in an ACC of 94.37%. CONCLUSION Our AI system, which was trained by NBI of endoscopic BE, can adequately predict endoscopic images of histological BE. The ACC, sensitivity, and specificity are 94.37%, 94.29%, and 94.44%, respectively.
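The final-test counts in the abstract (66 of 70 BE cases, 85 of 90 non-BE cases) are enough to re-derive the headline metrics; a short sketch:

```python
tp, fn = 66, 70 - 66   # BE cases: correctly identified vs missed
tn, fp = 85, 90 - 85   # non-BE cases: correctly identified vs false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 151/160, ~94.4%
sensitivity = tp / (tp + fn)                 # 66/70,  ~94.3%
specificity = tn / (tn + fp)                 # 85/90,  ~94.4%
print(accuracy, sensitivity, specificity)
```

These reproduce the reported 94.37% / 94.29% / 94.44% (the paper truncates rather than rounds the accuracy).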
Affiliation(s)
- Ming-Chang Tsai
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
| | - Hsu-Heng Yen
- Division of Gastroenterology, Changhua Christian Hospital, Changhua 500, Taiwan
- Artificial Intelligence Development Center, Changhua Christian Hospital, Changhua 500, Taiwan
- Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung 400, Taiwan
| | - Hui-Yu Tsai
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
| | - Yu-Kai Huang
- Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
| | - Yu-Sin Luo
- Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
| | - Edy Kornelius
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Department of Endocrinology and Metabolism, Chung-Shan Medical University Hospital, Taichung 402, Taiwan
| | - Wen-Wei Sung
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Department of Urology, Chung Shan Medical University Hospital, Taichung 402, Taiwan
| | - Chun-Che Lin
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
| | - Ming-Hseng Tseng
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Information Technology Office, Chung Shan Medical University Hospital, Taichung 402, Taiwan
| | - Chi-Chih Wang
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
22
Xie J, Zhong W, Yang R, Wang L, Zhen X. Discriminative fusion of moments-aligned latent representation of multimodality medical data. Phys Med Biol 2023; 69:015015. [PMID: 38052076 DOI: 10.1088/1361-6560/ad1271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 12/05/2023] [Indexed: 12/07/2023]
Abstract
Fusion of multimodal medical data provides multifaceted, disease-relevant information for diagnosis or prognosis prediction modeling. Traditional fusion strategies such as feature concatenation often fail to learn hidden complementary and discriminative manifestations from high-dimensional multimodal data. To this end, we proposed a methodology for the integration of multimodality medical data by matching their moments in a latent space, where the hidden, shared information of multimodal data is gradually learned by optimization with multiple feature-collinearity and correlation constraints. We first obtained the multimodal hidden representations by learning mappings between the original domain and the shared latent space. Within this shared space, we utilized several relational regularizations, including data attribute preservation, feature collinearity and feature-task correlation, to encourage learning of the underlying associations inherent in multimodal data. The fused multimodal latent features were finally fed to a logistic regression classifier for diagnostic prediction. Extensive evaluations on three independent clinical datasets have demonstrated the effectiveness of the proposed method in fusing multimodal data for medical prediction modeling.
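The core idea of matching moments in a shared latent space can be loosely sketched as penalizing differences in per-feature means and variances between two modalities' latent batches. This is a simplified stand-in, not the paper's objective (which couples moment matching with collinearity and correlation regularizations), and all names and data below are illustrative:

```python
def moment_distance(xs, ys):
    # Squared distance between first and second moments (per-feature mean
    # and variance) of two feature batches; minimizing it pulls the two
    # modalities' latent distributions toward each other.
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    dims = range(len(xs[0]))
    cols_x = [[row[d] for row in xs] for d in dims]
    cols_y = [[row[d] for row in ys] for d in dims]
    d1 = sum((mean(cx) - mean(cy)) ** 2 for cx, cy in zip(cols_x, cols_y))
    d2 = sum((var(cx) - var(cy)) ** 2 for cx, cy in zip(cols_x, cols_y))
    return d1 + d2

# Two toy 2-D "latent" batches from different modalities
a = [[0.0, 1.0], [2.0, 3.0]]
b = [[1.0, 1.0], [1.0, 3.0]]
print(moment_distance(a, b))  # -> 1.0 (means match; one variance differs)
```

In practice such a term is added to the task loss and minimized jointly with the encoders that map each modality into the shared space.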
Affiliation(s)
- Jincheng Xie
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
| | - Weixiong Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
| | - Ruimeng Yang
- Department of Radiology, the Second Affiliated Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, 510180, People's Republic of China
| | - Linjing Wang
- Radiotherapy Center, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, Guangdong 510095, People's Republic of China
| | - Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
23
Liu L, Li M, Lin D, Yun D, Lin Z, Zhao L, Pang J, Li L, Wu Y, Shang Y, Lin H, Wu X. Protocol to analyze fundus images for multidimensional quality grading and real-time guidance using deep learning techniques. STAR Protoc 2023; 4:102565. [PMID: 37733597 PMCID: PMC10519839 DOI: 10.1016/j.xpro.2023.102565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Revised: 08/09/2023] [Accepted: 08/18/2023] [Indexed: 09/23/2023] Open
Abstract
Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.1.
Affiliation(s)
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
24
Zhang Y, Li Y, Liu J, Wang J, Li H, Zhang J, Yu X. Performances of artificial intelligence in detecting pathologic myopia: a systematic review and meta-analysis. Eye (Lond) 2023; 37:3565-3573. [PMID: 37117783 PMCID: PMC10141825 DOI: 10.1038/s41433-023-02551-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2023] [Revised: 03/30/2023] [Accepted: 04/18/2023] [Indexed: 04/30/2023] Open
Abstract
BACKGROUND/OBJECTIVE Pathologic myopia (PM) is a major cause of severe visual impairment and blindness, and current applications of artificial intelligence (AI) have covered the diagnosis and classification of PM. This meta-analysis and systematic review aimed to evaluate the overall performance of AI-based models in detecting PM and related complications. METHODS We searched PubMed, Scopus, Embase, Web of Science and IEEE Xplore for eligible studies before Dec 20, 2022. The methodological quality of included studies was evaluated using the Quality Assessment for Diagnostic Accuracy Studies (QUADAS-2). We calculated the pooled sensitivity (SEN), specificity (SPE) and the summary area under the curve (AUC) using a random effects model, to evaluate the performance of AI in the detection of PM based on fundus or optical coherence tomography (OCT) images. RESULTS 22 studies were included in the systematic review, and 14 of them were included in the quantitative analysis. Of all included studies, SEN and SPE ranged from 80.0% to 98.7% and from 79.5% to 100.0% for PM detection, respectively. For the detection of PM, the summary AUC was 0.99 (95% confidence interval (CI) 0.97 to 0.99), and the pooled SEN and SPE were 0.95 (95% CI 0.92 to 0.96) and 0.97 (95% CI: 0.94 to 0.98), respectively. For the detection of PM-related choroid neovascularization (CNV), the summary AUC was 0.99 (95% CI: 0.97 to 0.99). CONCLUSION Our review demonstrated the excellent performance of current AI algorithms in detecting PM and related complications based on fundus and OCT images.
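For intuition on pooling per-study sensitivities or specificities, a common building block is inverse-variance weighting on the logit scale. The sketch below is a simplified fixed-effect version (the review itself used a random-effects model, which additionally estimates between-study heterogeneity), and the study values are made up:

```python
import math

def pool_logit(props, ns):
    # Inverse-variance pooling of proportions on the logit scale,
    # then back-transform to a proportion.
    logits, weights = [], []
    for p, n in zip(props, ns):
        logits.append(math.log(p / (1 - p)))
        var = 1 / (n * p * (1 - p))  # delta-method variance of the logit
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))

# Hypothetical per-study sensitivities and sample sizes (not from the review)
print(pool_logit([0.90, 0.95, 0.85], [100, 200, 150]))
```

Random-effects pooling replaces each weight 1/var with 1/(var + tau^2), where tau^2 captures between-study variance, which widens the resulting interval.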
Affiliation(s)
- Yue Zhang
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Graduate School of Peking Union Medical College, Beijing, China
| | - Yilin Li
- Center for Statistical Sciences, Peking University, Beijing, China
| | - Jing Liu
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Graduate School of Peking Union Medical College, Beijing, China
| | - Jianing Wang
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
| | - Hui Li
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
| | - Jinrong Zhang
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
| | - Xiaobing Yu
- Department of Ophthalmology, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China.
- Graduate School of Peking Union Medical College, Beijing, China.
| |
|
25
|
Lin YT, Zhou Q, Tan J, Tao Y. Multimodal and multi-omics-based deep learning model for screening of optic neuropathy. Heliyon 2023; 9:e22244. [PMID: 38046141 PMCID: PMC10686864 DOI: 10.1016/j.heliyon.2023.e22244]
Abstract
Purpose To examine the use of multimodal data and multi-omics strategies for optic nerve disease screening. Methods This was a single-center retrospective study. A deep learning model was created from fundus photographs and infrared reflectance (IR) images of patients with diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis. Patients seen at the Ophthalmology Department of the First Affiliated Hospital of Nanchang University in Jiangxi Province from November 2019 to April 2023 were included in this study. The data were analyzed in single-modal and multimodal modes using the traditional omics, Resnet101, and fusion models, and the accuracy and area under the curve (AUC) of each model were compared. Results A total of 312 images (fundus and infrared fundus photographs) were collected from 156 patients. With multimodal data, the accuracies of the traditional omics, Resnet101, and fusion models on the training set were 0.97, 0.98, and 0.99, respectively; on the test set, the corresponding accuracies were 0.72, 0.87, and 0.88. We compared single-modal and multimodal states by applying the data to the different groups in the learning model. In the traditional omics model, the macro-average AUCs of the features extracted from fundus photographs, IR images, and multimodal data were 0.94, 0.90, and 0.96, respectively. When the same data were processed in the Resnet101 model, the macro-average AUC was 0.97 in each case. With multimodal data, the macro-average AUCs of the traditional omics, Resnet101, and fusion models were 0.96, 0.97, and 0.99, respectively. Conclusion A deep learning model based on multimodal data and multi-omics strategies can improve the accuracy of screening and diagnosing diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis.
Affiliation(s)
- Ye-ting Lin
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
| | - Qiong Zhou
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
| | - Jian Tan
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
| | - Yulin Tao
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
| |
|
26
|
Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ 2023; 23:852. [PMID: 37946176 PMCID: PMC10637014 DOI: 10.1186/s12909-023-04700-8]
Abstract
Medical AI has transformed modern medicine and created a new environment for future doctors. However, medical education has failed to keep pace with these advances, and it is essential to provide systematic education on medical AI to current medical undergraduate and postgraduate students. To address this issue, our study utilized the Unified Theory of Acceptance and Use of Technology model to identify key factors that influence the acceptance and intention to use medical AI. We collected data from 1,243 undergraduate and postgraduate students from 13 universities and 33 hospitals, and 54.3% reported prior experience using medical AI. Our findings indicated that medical postgraduate students have a higher level of awareness in using medical AI than undergraduate students. The intention to use medical AI is positively associated with factors such as performance expectancy, habit, hedonic motivation, and trust. Therefore, future medical education should prioritize promoting students' performance in training, and courses should be designed to be both easy to learn and engaging, ensuring that students are equipped with the necessary skills to succeed in their future medical careers.
Affiliation(s)
- Qianying Li
- Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China
| | - Yunhao Qin
- Department of Orthopedics, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University, Shanghai, China.
| |
|
27
|
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107 PMCID: PMC10587822 DOI: 10.1001/jamaophthalmol.2023.4650]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). 
Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
| | - Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Pengzhi Zhu
- Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
| | - Yuzhe Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
| | - Hongxin Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
| | - Changming Hu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
| | - Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| |
|
28
|
Li L, Lin D, Lin Z, Li M, Lian Z, Zhao L, Wu X, Liu L, Liu J, Wei X, Luo M, Zeng D, Yan A, Iao WC, Shang Y, Xu F, Xiang W, He M, Fu Z, Wang X, Deng Y, Fan X, Ye Z, Wei M, Zhang J, Liu B, Li J, Ding X, Lin H. DeepQuality improves infant retinopathy screening. NPJ Digit Med 2023; 6:192. [PMID: 37845275 PMCID: PMC10579317 DOI: 10.1038/s41746-023-00943-3]
Abstract
Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Image quality issues are particularly prominent in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity with area under the curve (AUC) values ranging from 0.933 to 0.995. It can also comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% of them had varying degrees of quality defects, and large variations were observed among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, the performance of retinopathy of prematurity (ROP) diagnosis of clinicians was significantly improved. Moreover, the integration of DeepQuality and AI diagnostic models can effectively improve the model performance for detecting ROP. This study may be an important reference for the future development of other image-based intelligent disease screening systems.
Affiliation(s)
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Zhangkai Lian
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Jiali Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaoyue Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingjie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Danqi Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Anqi Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wei Xiang
- Department of Clinical Laboratory Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Muchen He
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhe Fu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xueyu Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yaru Deng
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinyan Fan
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhijun Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Meirong Wei
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Jianping Zhang
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Baohai Liu
- Department of Ophthalmology, Maternal and Children's Hospital, Linyi, Shandong, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
|
29
|
Khosravi P, Huck NA, Shahraki K, Hunter SC, Danza CN, Kim SY, Forbes BJ, Dai S, Levin AV, Binenbaum G, Chang PD, Suh DW. Deep Learning Approach for Differentiating Etiologies of Pediatric Retinal Hemorrhages: A Multicenter Study. Int J Mol Sci 2023; 24:15105. [PMID: 37894785 PMCID: PMC10606803 DOI: 10.3390/ijms242015105]
Abstract
Retinal hemorrhages in pediatric patients can be a diagnostic challenge for ophthalmologists. These hemorrhages can occur due to various underlying etiologies, including abusive head trauma, accidental trauma, and medical conditions. Accurate identification of the etiology is crucial for appropriate management and legal considerations. In recent years, deep learning techniques have shown promise in assisting healthcare professionals in making more accurate and timely diagnoses of a variety of disorders. We explore the potential of deep learning approaches for differentiating etiologies of pediatric retinal hemorrhages. Our study, which spanned multiple centers, analyzed 898 images, resulting in a final dataset of 597 retinal hemorrhage fundus photos categorized into medical (49.9%) and trauma (50.1%) etiologies. Deep learning models, specifically those based on ResNet and transformer architectures, were applied; FastViT-SA12, a hybrid transformer model, achieved the highest accuracy (90.55%) and area under the receiver operating characteristic curve (AUC) of 90.55%, while ResNet18 secured the highest sensitivity value (96.77%) on an independent test dataset. The study highlighted areas for optimization in artificial intelligence (AI) models specifically for pediatric retinal hemorrhages. While AI proves valuable in diagnosing these hemorrhages, the expertise of medical professionals remains irreplaceable. Collaborative efforts between AI specialists and pediatric ophthalmologists are crucial to fully harness AI's potential in diagnosing etiologies of pediatric retinal hemorrhages.
Affiliation(s)
- Pooya Khosravi
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA;
| | - Nolan A. Huck
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
| | - Kourosh Shahraki
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
| | - Stephen C. Hunter
- School of Medicine, University of California, 900 University Ave, Riverside, CA 92521, USA;
| | - Clifford Neil Danza
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
| | - So Young Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan 31151, Chungcheongnam-do, Republic of Korea;
| | - Brian J. Forbes
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA; (B.J.F.); (G.B.)
| | - Shuan Dai
- Department of Ophthalmology, Queensland Children’s Hospital, South Brisbane, QLD 4101, Australia;
| | - Alex V. Levin
- Department of Ophthalmology, Flaum Eye Institute, Golisano Children’s Hospital, Rochester, NY 14642, USA;
| | - Gil Binenbaum
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA; (B.J.F.); (G.B.)
| | - Peter D. Chang
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA;
- Department of Radiological Sciences, School of Medicine, University of California, Irvine, CA 92697, USA
| | - Donny W. Suh
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
| |
|
30
|
Aw KL, Suepiantham S, Rodriguez A, Bruce A, Borooah S, Cackett P. Patients' Perception of Robot-Driven Technology in the Management of Retinal Diseases. Ophthalmol Ther 2023; 12:2529-2536. [PMID: 37369908 PMCID: PMC10442043 DOI: 10.1007/s40123-023-00762-5]
Abstract
INTRODUCTION Robots and other artificial intelligence-driven technologies are increasingly being applied in the management of retinal disease and have the potential to meet the growing demand for retinal care. However, patients' attitudes towards the use of robots in ophthalmology are currently poorly understood. This study investigates patients' attitudes towards robot-led management of retinal disease. METHODS Paper questionnaires were distributed to 177 patients attending for intravitreal treatment (IVT) at the Princess Alexandra Eye Pavilion between 1 October 2022 and 31 January 2023. The questionnaire collected information on age, sex, diagnosis and postcode, and patients responded to questions about their attitudes towards robot-led diagnosis, treatment decisions and IVT injections. Responses were collected using a 5-category Likert scale and analysed using ordinal logistic regression with adjustments for age, sex and deprivation status. RESULTS Patients from affluent socioeconomic backgrounds were significantly (p < 0.001) more accepting of robots diagnosing and deciding on treatment, although only 26 patients (14.7%) were accepting overall. Furthermore, a larger proportion of patients would accept robots if the robot made fewer mistakes than doctors, reduced waiting or appointment times, or was able to communicate well and show empathy; the same association with socioeconomic background remained (p < 0.001). Lastly, 116 patients (65.5%) would not be happy for IVT injections to be performed by a robot; this was more likely if the patient was female (p = 0.04) or from a more deprived socioeconomic background (p < 0.001). CONCLUSION Attitudes towards robot involvement in the diagnosis and management of retinal disease are significantly associated with socioeconomic background and sex.
Additional studies are required to further investigate these determinants of robot receptiveness to ensure acceptance and compliance with treatment with these new technologies.
Affiliation(s)
- Kah Long Aw
- Princess Alexandra Eye Pavilion, Edinburgh, Scotland.
- University of Edinburgh, Edinburgh, Scotland.
| | | | | | - Alison Bruce
- Princess Alexandra Eye Pavilion, Edinburgh, Scotland
| | | | - Peter Cackett
- Princess Alexandra Eye Pavilion, Edinburgh, Scotland
- University of Edinburgh, Edinburgh, Scotland
| |
|
31
|
Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. [PMID: 37724288 PMCID: PMC10475636 DOI: 10.18240/ijo.2023.09.05]
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed by machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. For DR, AI has mainly been used in diagnosis, grading, and lesion recognition and segmentation, with good results in both research and application. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges in AI research.
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
32
Seth I, Lim B, Xie Y, Hunter-Smith DJ, Rozen WM. Exploring the role of artificial intelligence chatbot on the management of scaphoid fractures. J Hand Surg Eur Vol 2023; 48:814-818. [PMID: 37177798 DOI: 10.1177/17531934231169858] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Affiliation(s)
- Ishith Seth
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3004, Australia
| | - Bryan Lim
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3004, Australia
- Faculty of Medicine, Monash University, Melbourne, Victoria, 3002, Australia
| | - Yi Xie
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3004, Australia
| | - David J Hunter-Smith
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3004, Australia
| | - Warren M Rozen
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3004, Australia
33
Cleland CR, Rwiza J, Evans JR, Gordon I, MacLeod D, Burton MJ, Bascaran C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: a scoping review. BMJ Open Diabetes Res Care 2023; 11:e003424. [PMID: 37532460 PMCID: PMC10401245 DOI: 10.1136/bmjdrc-2023-003424] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 07/11/2023] [Indexed: 08/04/2023] Open
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally. There is growing evidence to support the use of artificial intelligence (AI) in diabetic eye care, particularly for screening populations at risk of sight loss from DR in low-income and middle-income countries (LMICs), where resources are most stretched. However, implementation into clinical practice remains limited. We conducted a scoping review to identify which AI tools have been used for DR in LMICs and to report their performance and relevant characteristics. In total, 81 articles were included. The reported sensitivities and specificities were generally high, providing evidence to support use in clinical practice. However, the majority of studies focused on sensitivity and specificity only, and there was limited information on cost, regulatory approvals, and whether the use of AI improved health outcomes. Further research that goes beyond reporting sensitivities and specificities is needed prior to wider implementation.
Affiliation(s)
- Charles R Cleland
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
| | - Justus Rwiza
- Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
| | - Jennifer R Evans
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
| | - Iris Gordon
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
| | - David MacLeod
- Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
| | - Matthew J Burton
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Covadonga Bascaran
- International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
34
Shi XH, Dong L, Zhang RH, Zhou DJ, Ling SG, Shao L, Yan YN, Wang YX, Wei WB. Relationships between quantitative retinal microvascular characteristics and cognitive function based on automated artificial intelligence measurements. Front Cell Dev Biol 2023; 11:1174984. [PMID: 37416799 PMCID: PMC10322221 DOI: 10.3389/fcell.2023.1174984] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 06/09/2023] [Indexed: 07/08/2023] Open
Abstract
Introduction: The purpose of this study was to assess the relationship between retinal vascular characteristics and cognitive function using artificial intelligence techniques to obtain fully automated quantitative measurements of retinal vascular morphological parameters. Methods: A deep learning-based semantic segmentation network, ResNet101-UNet, was used to construct a vascular segmentation model for fully automated quantitative measurement of retinal vascular parameters on fundus photographs. Retinal photographs centered on the optic disc of 3107 participants (aged 50-93 years) from the Beijing Eye Study 2011, a population-based cross-sectional study, were analyzed. The main parameters included the retinal vascular branching angle, vascular fractal dimension, vascular diameter, vascular tortuosity, and vascular density. Cognitive function was assessed using the Mini-Mental State Examination (MMSE). Results: The mean MMSE score was 26.34 ± 3.64 (median: 27; range: 2-30). Among the participants, 414 (13.3%) were classified as having cognitive impairment (MMSE score < 24): 296 (9.5%) with mild cognitive impairment (MMSE: 19-23), 98 (3.2%) with moderate cognitive impairment (MMSE: 10-18), and 20 (0.6%) with severe cognitive impairment (MMSE < 10). Compared with the normal cognitive function group, the average retinal venular diameter was significantly larger (p = 0.013), and the retinal vascular fractal dimension and vascular density were significantly smaller (both p < 0.001), in the mild cognitive impairment group. The retinal arteriole-to-venular ratio (p = 0.003) and vascular fractal dimension (p = 0.033) were significantly decreased in the severe cognitive impairment group compared with the mild cognitive impairment group.
In the multivariate analysis, better cognition (i.e., a higher MMSE score) was significantly associated with higher retinal vascular fractal dimension (b = 0.134, p = 0.043) and higher retinal vascular density (b = 0.152, p = 0.023) after adjustment for age, best corrected visual acuity (BCVA, logMAR), and education level. Discussion: In conclusion, our findings, derived from an artificial intelligence-based fully automated retinal vascular parameter measurement method, showed that several retinal vascular morphological parameters were correlated with cognitive impairment. Decreased retinal vascular fractal dimension and vascular density may serve as candidate biomarkers for early identification of cognitive impairment, while the observed reduction in the retinal arteriole-to-venular ratio occurs in the late stages of cognitive impairment.
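The vascular fractal dimension discussed above is commonly estimated by box counting on a binarized vessel map. The study's exact pipeline is not described here, so the following is a generic, self-contained sketch of the box-counting estimate: count the boxes of side s that contain any vessel pixel, then take the slope of log N(s) against log(1/s).

```python
import math

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image (list of lists of 0/1)
    as the least-squares slope of log N(s) versus log(1/s), where N(s) is the
    number of s-by-s boxes containing at least one foreground pixel."""
    h, w = len(mask), len(mask[0])
    xs, ys = [], []
    for s in sizes:
        count = 0
        for by in range(0, h, s):
            for bx in range(0, w, s):
                if any(mask[y][x]
                       for y in range(by, min(by + s, h))
                       for x in range(bx, min(bx + s, w))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # ordinary least-squares slope of (xs, ys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity checks: a straight "vessel" has dimension ~1; a filled region ~2.
line = [[1 if x == y else 0 for x in range(64)] for y in range(64)]
solid = [[1] * 64 for _ in range(64)]
print(round(box_counting_dimension(line), 2))   # ~1.0
print(round(box_counting_dimension(solid), 2))  # ~2.0
```

Real retinal vasculature typically yields dimensions between these two extremes (around 1.4-1.7 on monofractal analyses), which is why a lower value can indicate a sparser vascular tree.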
Affiliation(s)
- Xu Han Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Rui Heng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Deng Ji Zhou
- EVision Technology (Beijing) Co., Ltd., Beijing, China
| | | | - Lei Shao
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yan Ni Yan
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ya Xing Wang
- Beijing Ophthalmology and Visual Science Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China
| | - Wen Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
35
Wang Z, Li Z, Li K, Mu S, Zhou X, Di Y. Performance of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis of prospective studies. Front Endocrinol (Lausanne) 2023; 14:1197783. [PMID: 37383397 PMCID: PMC10296189 DOI: 10.3389/fendo.2023.1197783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 05/23/2023] [Indexed: 06/30/2023] Open
Abstract
Aims To systematically evaluate the diagnostic value of artificial intelligence (AI) models for various types of diabetic retinopathy (DR) in prospective studies from the previous five years, and to explore the factors affecting their diagnostic performance. Materials and methods The Cochrane Library, Embase, Web of Science, PubMed, and IEEE databases were searched for prospective studies on AI models for the diagnosis of DR published from January 2017 to December 2022. QUADAS-2 was used to evaluate the risk of bias in the included studies. Meta-analysis was performed using Meta-DiSc and STATA 14.0 software to calculate the combined sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio for various types of DR. Diagnostic odds ratios, summary receiver operating characteristic (SROC) plots, coupled forest plots, and subgroup analyses were performed according to DR category, patient source, region of study, and quality of literature, image, and algorithm. Results Twenty-one studies were included. Meta-analysis showed that the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, area under the curve, Cochrane Q index, and diagnostic odds ratio of the AI models for the diagnosis of DR were 0.880 (0.875-0.884), 0.912 (0.99-0.913), 13.021 (10.738-15.789), 0.083 (0.061-0.112), 0.9798, 0.9388, and 206.80 (124.82-342.63), respectively. DR category, patient source, region of study, sample size, and quality of literature, image, and algorithm may affect the diagnostic efficiency of AI for DR. Conclusion AI models have clear diagnostic value for DR, but their performance is influenced by many factors that deserve further study. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023389687.
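The pooled estimates above were produced with Meta-DiSc and STATA. As a toy illustration of how the underlying quantities relate, the per-study sensitivity, specificity, likelihood ratios, and diagnostic odds ratio can all be derived from a single 2x2 table; actual meta-analytic pooling additionally weights and combines studies (e.g., with random-effects models). The counts below are invented for the example.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, LR+, LR-, and diagnostic odds ratio
    from one 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    dor = lr_pos / lr_neg        # algebraically equals (tp*tn)/(fp*fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical single-study table, chosen to echo the pooled estimates above.
sens, spec, lr_pos, lr_neg, dor = diagnostic_metrics(tp=88, fp=9, fn=12, tn=91)
print(f"sens={sens:.3f} spec={spec:.3f} LR+={lr_pos:.2f} LR-={lr_neg:.3f} DOR={dor:.1f}")
```

Note how a high DOR (here about 74) emerges whenever LR+ is large and LR- is small, which is why the pooled DOR of 206.80 reported above implies strong discrimination.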
36
Land MR, Patel PA, Bui T, Jiao C, Ali A, Ibnamasud S, Patel PN, Sheth V. Examining the Role of Telemedicine in Diabetic Retinopathy. J Clin Med 2023; 12:jcm12103537. [PMID: 37240642 DOI: 10.3390/jcm12103537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 04/21/2023] [Accepted: 05/16/2023] [Indexed: 05/28/2023] Open
Abstract
With the increasing prevalence of diabetic retinopathy (DR), screening is of the utmost importance to prevent vision loss for patients and reduce financial costs for the healthcare system. Unfortunately, it appears that the capacity of optometrists and ophthalmologists to adequately perform in-person screenings of DR will be insufficient within the coming years. Telemedicine offers the opportunity to expand access to screening while reducing the economic and temporal burden associated with current in-person protocols. The present literature review summarizes the latest developments in telemedicine for DR screening, considerations for stakeholders, barriers to implementation, and future directions in this area. As the role of telemedicine in DR screening continues to expand, further work will be necessary to continually optimize practices and improve long-term patient outcomes.
Collapse
Affiliation(s)
- Matthew R Land
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
| | - Parth A Patel
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
| | - Tommy Bui
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
| | - Cheng Jiao
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
| | - Arsalan Ali
- Burnett School of Medicine, Texas Christian University, Fort Worth, TX 76129, USA
| | - Shadman Ibnamasud
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
| | - Prem N Patel
- Department of Ophthalmology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
| | - Veeral Sheth
- Department of Ophthalmology, University Retina and Macula Associates, Oak Forest, IL 60452, USA
37
Hubbard DC, Cox P, Redd TK. Assistive applications of artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:261-266. [PMID: 36728651 PMCID: PMC10065924 DOI: 10.1097/icu.0000000000000939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
PURPOSE OF REVIEW Assistive (nonautonomous) artificial intelligence (AI) models, designed to support rather than function independently of clinicians, have received increasing attention in medicine. This review highlights several recent developments in these models over the past year and their ophthalmic implications. RECENT FINDINGS AI models with a diverse range of applications in ophthalmology have been reported in the literature over the past year. Many of these systems have reported high performance in the detection, classification, prognostication, and/or monitoring of retinal, glaucomatous, anterior segment, and other ocular pathologies. SUMMARY Over the past year, AI developments have emerged with implications for ophthalmic surgical training, refractive outcomes after cataract surgery, therapeutic monitoring of disease, disease classification, and prognostication. Many of these recently developed models have obtained encouraging results and have the potential to serve as powerful clinical decision-making tools, pending further external validation and evaluation of their generalizability.
Affiliation(s)
- Donald C Hubbard
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Parker Cox
- Spencer Fox Eccles School of Medicine, University of Utah, Salt Lake City, Utah, USA
| | - Travis K Redd
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
38
Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep Learning for the Detection of Multiple Fundus Diseases Using Ultra-widefield Images. Ophthalmol Ther 2023; 12:895-907. [PMID: 36565376 PMCID: PMC10011259 DOI: 10.1007/s40123-022-00627-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 11/27/2022] [Indexed: 12/25/2022] Open
Abstract
INTRODUCTION To design and evaluate a deep learning model based on ultra-widefield images (UWFIs) that can detect several common fundus diseases. METHODS Based on 4574 UWFIs, a deep learning model was trained and validated to identify normal fundus and eight common fundus diseases, namely referable diabetic retinopathy, retinal vein occlusion, pathologic myopia, retinal detachment, retinitis pigmentosa, age-related macular degeneration, vitreous opacity, and optic neuropathy. The model was tested on three test sets with 465, 979, and 525 images, respectively. The performance of three deep learning networks, EfficientNet-B7, DenseNet, and ResNet-101, was evaluated on the internal test set. Additionally, we compared the performance of the deep learning model with that of doctors at a tertiary referral hospital. RESULTS Compared to the other two deep learning models, EfficientNet-B7 achieved the best performance. The areas under the receiver operating characteristic curves of the EfficientNet-B7 model ranged from 0.9708 (0.8772, 0.9849) to 1.0000 (1.0000, 1.0000) on the internal test set, from 0.9683 (0.8829, 0.9770) to 1.0000 (0.9975, 1.0000) on external test set A, and from 0.8919 (0.7150, 0.9055) to 0.9977 (0.9165, 1.0000) on external test set B. On a data set of 100 images, the total accuracy of the deep learning model was 93.00%; the average accuracies of three ophthalmologists who had been working for 2 years and three ophthalmologists who had been working in fundus imaging for more than 5 years were 88.00% and 94.00%, respectively. CONCLUSION High performance was achieved on all three test sets using our UWFI multidisease classification model, with a small sample size and fast model inference. The performance of the artificial intelligence model was comparable to that of a physician with 2-5 years of experience in fundus diseases at a tertiary referral hospital. The model is expected to be used as an effective aid for fundus disease screening.
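The areas under the curve reported above summarize ranking performance. As a self-contained illustration (not the authors' evaluation code), the empirical AUC for one disease-versus-normal task follows directly from its probabilistic definition: the chance that a randomly chosen diseased image is scored above a randomly chosen normal one, counting ties as half.

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: P(score_pos > score_neg) + 0.5 * P(tie),
    computed over all (positive, negative) score pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Toy scores: one diseased image (0.6) is out-ranked by a normal one (0.7).
print(auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.1]))  # 8/9 ≈ 0.889
```

This O(n*m) pairwise form is fine for illustration; production code normally uses the equivalent rank-based (Mann-Whitney) computation.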
Affiliation(s)
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
| | - Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
| | - Lizhang Xu
- Wuhan Aiyanbang Technology Co., Ltd, Wuhan, 430073, China
| | - Chang Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
| | - Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
| | - Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
| | - Huijuan Luo
- The People's Hospital of Yidu, Yidu, 443300, China
| | - Yu Su
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
| | - Jian Zheng
- School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Zhiqing Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
| | - Zhen Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
| | - Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
| | - Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
39
Ji Y, Ji Y, Liu Y, Zhao Y, Zhang L. Research progress on diagnosing retinal vascular diseases based on artificial intelligence and fundus images. Front Cell Dev Biol 2023; 11:1168327. [PMID: 37056999 PMCID: PMC10086262 DOI: 10.3389/fcell.2023.1168327] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023] Open
Abstract
As the only blood vessels in the body that can be observed directly, retinal vessels undergo pathological changes that are related to the metabolic state of the whole body and of many organ systems, and these changes seriously affect patients' vision and quality of life. Timely diagnosis and treatment are key to improving visual prognosis. In recent years, with the rapid development of artificial intelligence, its application in ophthalmology has become increasingly extensive and in-depth, especially in the field of retinal vascular diseases. Research results based on artificial intelligence and fundus images are remarkable and offer great promise for early diagnosis and treatment. This paper reviews recent research progress on artificial intelligence in retinal vascular diseases (including diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, retinopathy of prematurity, and age-related macular degeneration). The limitations and challenges of the research process are also discussed.
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Yun Ji
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
| | - Yunfang Liu
- Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China
| | - Ying Zhao
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- *Correspondence: Liya Zhang; Ying Zhao
| | - Liya Zhang
- Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China
- *Correspondence: Liya Zhang; Ying Zhao
40
Chen X, Xue Y, Wu X, Zhong Y, Rao H, Luo H, Weng Z. Deep Learning-Based System for Disease Screening and Pathologic Region Detection From Optical Coherence Tomography Images. Transl Vis Sci Technol 2023; 12:29. [PMID: 36716039 PMCID: PMC9896901 DOI: 10.1167/tvst.12.1.29] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023] Open
Abstract
Purpose This study was designed to apply deep learning models to retinal disease screening and lesion detection based on optical coherence tomography (OCT) images. Methods We collected 37,138 OCT images from 775 patients, which were labelled by ophthalmologists. Multiple deep learning models, including ResNet50 and YOLOv3, were developed to identify the types and locations of diseases or lesions in the images. Results The models were evaluated using a patient-based independent holdout set. For binary classification of OCT images with or without lesions, the accuracy was 98.5%, sensitivity was 98.7%, specificity was 98.4%, and the F1 score was 97.7%. For multiclass multilabel disease classification, the models were able to detect vitreomacular traction syndrome and age-related macular degeneration, both with an accuracy of more than 99%, sensitivity of more than 98%, specificity of more than 98%, and an F1 score of more than 97%. For lesion location detection, the recalls for different lesion types ranged from 87.0% (epiretinal membrane) to 98.2% (macular pucker). Conclusions Deep learning-based models have the potential to aid retinal disease screening, classification, and diagnosis with excellent performance, and may serve as useful references for ophthalmologists. Translational Relevance The deep learning-based models are capable of identifying and predicting different eye diseases and lesions from OCT images and may have potential clinical application in assisting ophthalmologists with fast and accurate retinal disease screening.
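The lesion-location recalls reported above depend on a box-overlap matching criterion between predicted and ground-truth boxes. The paper's exact matching rule is not given here, so this is a generic sketch using intersection-over-union (IoU) with the conventional 0.5 threshold; box coordinates and the threshold are assumptions for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def recall_at_iou(gt_boxes, pred_boxes, thresh=0.5):
    """Fraction of ground-truth lesions matched by any prediction with IoU >= thresh."""
    hit = sum(1 for g in gt_boxes if any(iou(g, p) >= thresh for p in pred_boxes))
    return hit / len(gt_boxes)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))                # 1/7 ≈ 0.143
print(recall_at_iou([(0, 0, 2, 2)], [(0, 0, 2, 2)]))  # 1.0
```

Detector-style evaluation (as with YOLOv3 outputs) then sweeps confidence thresholds on top of this matching rule.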
Affiliation(s)
- Xiaoming Chen
- College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
| | - Ying Xue
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
| | - Xiaoyan Wu
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
| | - Yi Zhong
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
- College of Biological Science and Engineering, Fuzhou University, Fujian Province, China
| | - Huiying Rao
- Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
| | - Heng Luo
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
- College of Biological Science and Engineering, Fuzhou University, Fujian Province, China
- MetaNovas Biotech Inc., Foster City, CA, USA
| | - Zuquan Weng
- The Centre for Big Data Research in Burns and Trauma, College of Mathematics and Computer Science, Fuzhou University, Fujian Province, China
- College of Biological Science and Engineering, Fuzhou University, Fujian Province, China
41
Wang S, Ji Y, Bai W, Ji Y, Li J, Yao Y, Zhang Z, Jiang Q, Li K. Advances in artificial intelligence models and algorithms in the field of optometry. Front Cell Dev Biol 2023; 11:1170068. [PMID: 37187617 PMCID: PMC10175695 DOI: 10.3389/fcell.2023.1170068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 04/17/2023] [Indexed: 05/17/2023] Open
Abstract
The rapid development of computer science over the past few decades has led to unprecedented progress in the field of artificial intelligence (AI). AI has been applied widely in ophthalmology, especially in image processing and data analysis, with excellent performance. In recent years, AI has been increasingly applied in optometry with remarkable results. This review summarizes the progress in the application of different AI models and algorithms in optometry (for problems such as myopia, strabismus, amblyopia, keratoconus, and intraocular lenses) and discusses the limitations and challenges associated with their application in this field.
Affiliation(s)
- Suyu Wang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Yuke Ji
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Wen Bai
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Yun Ji
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
| | - Jiajun Li
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Yujia Yao
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Ziran Zhang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Qin Jiang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- *Correspondence: Qin Jiang; Keran Li
| | - Keran Li
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- *Correspondence: Qin Jiang; Keran Li
42
Ji Y, Liu S, Hong X, Lu Y, Wu X, Li K, Li K, Liu Y. Advances in artificial intelligence applications for ocular surface diseases diagnosis. Front Cell Dev Biol 2022; 10:1107689. [PMID: 36605721 PMCID: PMC9808405 DOI: 10.3389/fcell.2022.1107689] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 01/07/2023] Open
Abstract
In recent years, with the rapid development of computer technology, the continual optimization of learning algorithms and architectures, and the establishment of numerous large databases, artificial intelligence (AI) has undergone unprecedented development and application in the field of ophthalmology. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucomatous optic neuropathy. Meanwhile, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium. We also discuss the limitations and challenges of AI in the diagnosis of ocular surface diseases, as well as prospects for the future.
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Yi Lu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Xingyang Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- *Correspondence: Yunfang Liu; Keran Li; Kunke Li

43
Cao J, You K, Zhou J, Xu M, Xu P, Wen L, Wang S, Jin K, Lou L, Wang Y, Ye J. A cascade eye diseases screening system with interpretability and expandability in ultra-wide field fundus images: A multicentre diagnostic accuracy study. EClinicalMedicine 2022; 53:101633. [PMID: 36110868 PMCID: PMC9468501 DOI: 10.1016/j.eclinm.2022.101633] [Received: 06/01/2022] [Revised: 08/08/2022] [Accepted: 08/08/2022] [Indexed: 12/09/2022]
Abstract
BACKGROUND Clinical application of artificial intelligence is limited by a lack of interpretability and expandability in complex clinical settings. We aimed to develop an eye disease screening system with improved interpretability and expandability based on lesion-level dissection, and we tested the clinical expandability and auxiliary ability of the system.
METHODS The four-hierarchical interpretable eye diseases screening system (IEDSS), based on a novel structural pattern named the lesion atlas, was developed to identify 30 eye diseases and conditions using a total of 32,026 ultra-wide field images collected from the Second Affiliated Hospital of Zhejiang University, School of Medicine (SAHZU), the First Affiliated Hospital of University of Science and Technology of China (FAHUSTC), and the Affiliated People's Hospital of Ningbo University (APHNU) in China between November 1, 2016 and February 28, 2022. The performance of IEDSS was compared with that of ophthalmologists and of classic models trained with image-level labels. We further evaluated IEDSS in two external datasets and tested it in a real-world scenario and in an extended dataset with new phenotypes beyond the training categories. Accuracy (ACC), F1 score, and the confusion matrix were calculated to assess the performance of IEDSS.
FINDINGS IEDSS reached average ACCs (aACC) of 0·9781 (95% CI 0·9739-0·9824), 0·9660 (95% CI 0·9591-0·9730), and 0·9709 (95% CI 0·9655-0·9763), and frequency-weighted average F1 scores of 0·9042 (95% CI 0·8957-0·9127), 0·8837 (95% CI 0·8714-0·8960), and 0·8874 (95% CI 0·8772-0·8972) in the datasets of SAHZU, APHNU, and FAHUSTC, respectively. IEDSS reached a higher aACC (0·9781, 95% CI 0·9739-0·9824) than a multi-class image-level model (0·9398, 95% CI 0·9329-0·9467), a classic multi-label image-level model (0·9278, 95% CI 0·9189-0·9366), a novel multi-label image-level model (0·9241, 95% CI 0·9151-0·9331), and a lesion-level model without AdaBoost (0·9381, 95% CI 0·9299-0·9463). In the real-world scenario, the aACC of IEDSS (0·9872, 95% CI 0·9828-0·9915) was higher than that of the senior ophthalmologist (SO) (0·9413, 95% CI 0·9321-0·9504, p = 0·000) and the junior ophthalmologist (JO) (0·8846, 95% CI 0·8722-0·8971, p = 0·000). IEDSS maintained strong performance in the extended dataset (ACC = 0·8560, 95% CI 0·8252-0·8868) compared with the JO (ACC = 0·784, 95% CI 0·7479-0·8201, p = 0·003) and the SO (ACC = 0·8500, 95% CI 0·8187-0·8813, p = 0·789).
INTERPRETATION IEDSS showed excellent and stable performance in identifying common eye conditions, including conditions beyond the training categories. Its transparency and expandability could greatly broaden its range of clinical application and increase its practical clinical value, enhancing the efficiency and reliability of clinical practice, especially in remote areas that lack experienced specialists.
FUNDING National Natural Science Foundation Regional Innovation and Development Joint Fund (U20A20386); Key Research and Development Program of Zhejiang Province (2019C03020); Clinical Medical Research Centre for Eye Diseases of Zhejiang Province (2021E50007).
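The evaluation metrics named in the METHODS of this entry (accuracy, frequency-weighted average F1, confusion matrix) can be sketched in plain Python. This is an illustrative reimplementation under standard definitions, not the authors' code; the class labels and predictions below are invented for the example.

```python
from collections import Counter

def evaluate(y_true, y_pred, labels):
    """Accuracy, frequency-weighted F1, and confusion matrix for multi-class predictions."""
    idx = {c: i for i, c in enumerate(labels)}
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[idx[t]][idx[p]] += 1

    # Accuracy: fraction of samples on the diagonal.
    acc = sum(cm[i][i] for i in range(len(labels))) / len(y_true)

    # Per-class F1, weighted by each class's support (its frequency in y_true).
    support = Counter(y_true)
    weighted_f1 = 0.0
    for c in labels:
        i = idx[c]
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(len(labels))) - tp
        fn = sum(cm[i]) - tp
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        weighted_f1 += f1 * support[c] / len(y_true)
    return acc, weighted_f1, cm

# Toy example (labels are hypothetical, not from the study):
labels = ["DR", "AMD", "normal"]
y_true = ["DR", "DR", "AMD", "normal", "normal", "normal"]
y_pred = ["DR", "AMD", "AMD", "normal", "normal", "DR"]
acc, wf1, cm = evaluate(y_true, y_pred, labels)
```

Frequency weighting means that F1 on a common class (here "normal") moves the average more than F1 on a rare one, which matters for screening datasets with highly imbalanced disease prevalence.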
Affiliation(s)
- Jing Cao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Kun You
- Zhejiang Feitu Medical Imaging Co., Ltd, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Mingyu Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Peifang Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Lei Wen
- The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
- Shengzhan Wang
- The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
- Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Yao Wang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Corresponding author at: No. 1 West Lake Avenue, Hangzhou, Zhejiang Province, China, 310009.

44
Khan NC, Perera C, Dow ER, Chen KM, Mahajan VB, Mruthyunjaya P, Do DV, Leng T, Myung D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:1714. [PMID: 35885619 PMCID: PMC9322827 DOI: 10.3390/diagnostics12071714] [Received: 05/29/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 12/02/2022]
Abstract
While color fundus photographs are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging also contains valuable information about the systemic health of patients. These features can be identified through computer vision techniques, including deep learning (DL) artificial intelligence (AI) models. We aimed to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one using transfer learning with images from ImageNet and another with 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristic curve (AUROC) scores were used to compare model performance. Models using ImageNet transfer learning were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor use (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain-specific models consider transfer learning from more generalized image sets to improve accuracy.
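The AUROC scores this study uses to compare its transfer-learning models can be computed with the pairwise (Mann-Whitney) formulation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, counting ties as half. The sketch below is illustrative only, not the authors' pipeline, and the scores are made up.

```python
def auroc(y_true, scores):
    """Area under the ROC curve via pairwise comparison of positive vs. negative scores.

    y_true: 0/1 class labels; scores: model outputs (higher = more positive).
    O(n_pos * n_neg) - fine for a sketch, not for large evaluation sets.
    """
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    # Each positive/negative pair contributes 1 if ranked correctly, 0.5 on a tie.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for four patients (labels: 1 = feature present):
print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 3 of 4 pairs ranked correctly -> 0.75
```

An AUROC of 0.5 corresponds to chance-level ranking, which is why the paper's 0.78 vs. 0.65 comparison between pretraining strategies is meaningful even though both exceed chance.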
Affiliation(s)
- Nergis C. Khan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Chandrashan Perera
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Department of Ophthalmology, Fremantle Hospital, Perth, WA 6004, Australia
- Eliot R. Dow
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Karen M. Chen
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Vinit B. Mahajan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Diana V. Do
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- David Myung
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA
- VA Palo Alto Health Care System, Palo Alto, CA 94304, USA
- Correspondence: Tel.: +1-650-724-3948

45
Research Progress of Artificial Intelligence Image Analysis in Systemic Disease-Related Ophthalmopathy. Dis Markers 2022; 2022:3406890. [PMID: 35783011 PMCID: PMC9249504 DOI: 10.1155/2022/3406890] [Received: 04/20/2022] [Accepted: 06/09/2022] [Indexed: 11/28/2022]
Abstract
The eye is one of the most important organs of the human body. Eye diseases are closely related to other systemic diseases, and the two influence each other. Numerous systemic diseases lead to distinctive clinical manifestations and complications in the eyes. Typical examples include diabetic retinopathy, hypertensive retinopathy, thyroid-associated ophthalmopathy, neuromyelitis optica, and Behcet's disease. Systemic disease-related ophthalmopathy is usually chronic, and the analysis of imaging markers is helpful for a comprehensive diagnosis of these diseases. Recently, artificial intelligence (AI) technology based on deep learning has developed rapidly, producing numerous achievements and attracting widespread attention. At present, AI technology has made significant progress in research on imaging markers of systemic disease-related ophthalmopathy; however, many limitations and challenges remain. This article reviews the research achievements, limitations, and future prospects of AI image analysis technology in systemic disease-related ophthalmopathy.