1
Qian B, Sheng B, Chen H, Wang X, Li T, Jin Y, Guan Z, Jiang Z, Wu Y, Wang J, Chen T, Guo Z, Chen X, Yang D, Hou J, Feng R, Xiao F, Li Y, El Habib Daho M, Lu L, Ding Y, Liu D, Yang B, Zhu W, Wang Y, Kim H, Nam H, Li H, Wu WC, Wu Q, Dai R, Li H, Ang M, Ting DSW, Cheung CY, Wang X, Cheng CY, Tan GSW, Ohno-Matsui K, Jonas JB, Zheng Y, Tham YC, Wong TY, Wang YX. A Competition for the Diagnosis of Myopic Maculopathy by Artificial Intelligence Algorithms. JAMA Ophthalmol 2024:2824092. [PMID: 39325442 DOI: 10.1001/jamaophthalmol.2024.3707]
Abstract
Importance Myopic maculopathy (MM) is a major cause of vision impairment globally. Artificial intelligence (AI) and deep learning (DL) algorithms for detecting MM from fundus images could potentially improve diagnosis and assist screening in a variety of health care settings. Objectives To evaluate DL algorithms for MM classification and segmentation and compare their performance with that of ophthalmologists. Design, Setting, and Participants The Myopic Maculopathy Analysis Challenge (MMAC) was an international competition to develop automated solutions for 3 tasks: (1) MM classification, (2) segmentation of MM plus lesions, and (3) spherical equivalent (SE) prediction. Participants were provided 3 subdatasets containing 2306, 294, and 2003 fundus images, respectively, with which to build algorithms. A group of 5 ophthalmologists evaluated the same test sets for tasks 1 and 2 to ascertain performance. Results from model ensembles, which combined outcomes from multiple algorithms submitted by MMAC participants, were compared with each individual submitted algorithm. This study was conducted from March 1, 2023, to March 30, 2024, and data were analyzed from January 15, 2024, to March 30, 2024. Exposure DL algorithms submitted as part of the MMAC competition or ophthalmologist interpretation. Main Outcomes and Measures MM classification was evaluated by quadratic-weighted κ (QWK), F1 score, sensitivity, and specificity. MM plus lesions segmentation was evaluated by dice similarity coefficient (DSC), and SE prediction was evaluated by R2 and mean absolute error (MAE). Results The 3 tasks were completed by 7, 4, and 4 teams, respectively. MM classification algorithms achieved a QWK range of 0.866 to 0.901, an F1 score range of 0.675 to 0.781, a sensitivity range of 0.667 to 0.778, and a specificity range of 0.931 to 0.945. 
MM plus lesions segmentation algorithms achieved a DSC range of 0.664 to 0.687 for lacquer cracks (LC), 0.579 to 0.673 for choroidal neovascularization, and 0.768 to 0.841 for Fuchs spot (FS). SE prediction algorithms achieved an R2 range of 0.791 to 0.874 and an MAE range of 0.708 to 0.943. The model ensembles achieved the best performance among all submitted algorithms and outperformed ophthalmologists at MM classification in sensitivity (0.801; 95% CI, 0.764-0.840 vs 0.727; 95% CI, 0.684-0.768; P = .006) and specificity (0.946; 95% CI, 0.939-0.954 vs 0.933; 95% CI, 0.925-0.941; P = .009), LC segmentation (DSC, 0.698; 95% CI, 0.649-0.745 vs DSC, 0.570; 95% CI, 0.515-0.625; P < .001), and FS segmentation (DSC, 0.863; 95% CI, 0.831-0.888 vs DSC, 0.790; 95% CI, 0.742-0.830; P < .001). Conclusions and Relevance In this diagnostic study, 15 AI models for MM classification and segmentation were validated and evaluated on a public dataset made available for the MMAC competition, with some models achieving better diagnostic performance than ophthalmologists.
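For readers unfamiliar with the segmentation metric reported above: the dice similarity coefficient (DSC) between a predicted and a reference lesion mask is 2|A ∩ B| / (|A| + |B|). A minimal pure-Python sketch of the definition follows; the masks are invented toy data, not competition data, and this is not the MMAC evaluation code.

```python
def dice_similarity(pred, truth):
    """DSC = 2*|A ∩ B| / (|A| + |B|) for flat binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2 * intersection / total if total else 1.0

# Hypothetical flattened binary masks standing in for a lesion segmentation.
pred = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_similarity(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why the ensemble's 0.863 for Fuchs spot segmentation reads as strong agreement with the reference masks.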
Affiliation(s)
- Bo Qian
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Ministry of Education Key Laboratory of Artificial Intelligence, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Ministry of Education Key Laboratory of Artificial Intelligence, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Tingyao Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Ministry of Education Key Laboratory of Artificial Intelligence, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yixiao Jin
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Zehua Jiang
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Yilan Wu
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Jinyuan Wang
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Tingli Chen
- Department of Ophthalmology, Shanghai Health and Medical Center, Wuxi, China
- Zhengrui Guo
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiang Chen
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Ministry of Education Key Laboratory of Artificial Intelligence, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Junlin Hou
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Rui Feng
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Fan Xiao
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Yihao Li
- Laboratoire de Traitement de l'Information Médicale UMR 1101, Inserm, Brest, France
- Université de Bretagne Occidentale, Brest, France
- Mostafa El Habib Daho
- Laboratoire de Traitement de l'Information Médicale UMR 1101, Inserm, Brest, France
- Université de Bretagne Occidentale, Brest, France
- Li Lu
- School of Computer Science and Technology, Dongguan University of Technology, Dongguan, China
- Ye Ding
- School of Computer Science and Technology, Dongguan University of Technology, Dongguan, China
- Di Liu
- AIFUTURE Laboratory, Beijing, China
- National Digital Health Center of China Top Think Tanks, Beijing Normal University, Beijing, China
- School of Journalism and Communication, Beijing Normal University, Beijing, China
- Bo Yang
- AIFUTURE Laboratory, Beijing, China
- Wenhui Zhu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe
- Hyeonmin Kim
- Mediwhale, Seoul, South Korea
- Pohang University of Science and Technology, Pohang, South Korea
- Huayu Li
- Department of Electrical and Computer Engineering, University of Arizona, Tucson
- Wei-Chi Wu
- Department of Ophthalmology, Linkou Chang Gung Memorial Hospital, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Huating Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Marcus Ang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Xiaofei Wang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Kyoko Ohno-Matsui
- Department of Ophthalmology and Visual Science, Tokyo Medical and Dental University, Tokyo, Japan
- Jost B Jonas
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Institut Français de Myopie, Rothschild Foundation Hospital, Paris, France
- Yih-Chung Tham
- Center for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Tien Yin Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Zhongshan Ophthalmic Center, Guangzhou, China
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
2
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746 PMCID: PMC11385472 DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 imaged participants was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology increases confidence in the signalling hypotheses and supports using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
3
Yamashita T, Asaoka R, Iwase A, Sakai H, Terasaki H, Sakamoto T, Araie M. Relationship between fundus sex index obtained using color fundus parameters and body height or axial length in the Kumejima population. Jpn J Ophthalmol 2024; 68:586-593. [PMID: 39083146 PMCID: PMC11420305 DOI: 10.1007/s10384-024-01082-2]
Abstract
PURPOSE To investigate the relationship between the fundus sex index obtained from fundus photographs and body height or axial length in the Kumejima population. STUDY DESIGN Prospective cross-sectional observational population study. METHODS Using color fundus photographs obtained from the Kumejima population, 1,653 healthy right eyes with reliable fundus parameter measurements were included in this study. The tessellation fundus index was calculated as R/(R + G + B) using the mean red-green-blue intensities at eight locations around the optic disc and foveal region. The optic disc ovality ratio, papillomacular angle, and retinal vessel angle were quantified as previously described. The masculinity or femininity of the fundus was quantified with machine learning (L2-regularized binomial logistic regression with leave-one-out cross-validation), yielding a predictive value in the range 0-1 defined as the fundus sex index. The relationship between the fundus sex index and body height or axial length was investigated using Spearman's correlation. RESULTS The mean age of the 838 men and 815 women included in this study was 52.8 and 54.0 years, respectively. The correlation coefficient between fundus sex index and body height was -0.40 (p < 0.001) overall, 0.01 (p = 0.89) in men, and -0.04 (p = 0.30) in women; that between fundus sex index and axial length was -0.23 (p < 0.001) overall, -0.12 (p < 0.001) in men, and -0.13 (p < 0.001) in women. CONCLUSION This study shows that more masculine fundi tend to have longer axial lengths within each sex group. However, the fundus sex index was not significantly related to body height in either men or women.
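The tessellation fundus index described above is a simple red-channel ratio computed from mean channel intensities. A one-function sketch of that ratio follows; the intensity values are hypothetical, not study data.

```python
def tessellation_fundus_index(r, g, b):
    """Tessellation fundus index R/(R + G + B) from mean channel intensities."""
    return r / (r + g + b)

# Hypothetical mean RGB intensities at one peripapillary location.
print(tessellation_fundus_index(120.0, 80.0, 40.0))  # 0.5
```

A redder (more tessellated) fundus drives the index toward 1, so the eight per-location values summarize how red the fundus appears around the disc and fovea.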
Affiliation(s)
- Takehiro Yamashita
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
- Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan.
- Makoto Araie
- Department of Ophthalmology, Kanto Central Hospital, Tokyo, Japan
4
Qi Z, Li T, Chen J, Yam JC, Wen Y, Huang G, Zhong H, He M, Zhu D, Dai R, Qian B, Wang J, Qian C, Wang W, Zheng Y, Zhang J, Yi X, Wang Z, Zhang B, Liu C, Cheng T, Yang X, Li J, Pan YT, Ding X, Xiong R, Wang Y, Zhou Y, Feng D, Liu S, Du L, Yang J, Zhu Z, Bi L, Kim J, Tang F, Zhang Y, Zhang X, Zou H, Ang M, Tham CC, Cheung CY, Pang CP, Sheng B, He X, Xu X. A deep learning system for myopia onset prediction and intervention effectiveness evaluation in children. NPJ Digit Med 2024; 7:206. [PMID: 39112566 PMCID: PMC11306751 DOI: 10.1038/s41746-024-01204-7]
Abstract
The increasing prevalence of myopia worldwide presents a significant public health challenge. A key strategy to combat myopia is early detection and prediction in children, as such examination allows for effective intervention using a readily accessible imaging technique. To this end, we introduced DeepMyopia, an artificial intelligence (AI)-enabled decision support system to detect and predict myopia onset and facilitate targeted interventions for children at risk using routine retinal fundus images. Based on a deep learning architecture, DeepMyopia was trained and internally validated on a large cohort of retinal fundus images (n = 1,638,315) and then externally tested on datasets from seven sites in China (n = 22,060). Our results demonstrated the robustness of DeepMyopia, with AUCs of 0.908, 0.813, and 0.810 for 1-, 2-, and 3-year myopia onset prediction on the internal test set, and AUCs of 0.796, 0.808, and 0.767 on the external test set. DeepMyopia also effectively stratified children into low- and high-risk groups (p < 0.001) in both test sets. In an emulated randomized controlled trial (eRCT) on the Shanghai outdoor cohort (n = 3303), DeepMyopia showed effectiveness in myopia prevention compared to a NonCyc-based model, with an adjusted relative reduction (ARR) of -17.8% (95% CI: -29.4%, -6.4%). DeepMyopia-assisted interventions attained quality-adjusted life years (QALYs) of 0.75 (95% CI: 0.53, 1.04) per person and avoided 13.54 blindness years (95% CI: 9.57, 18.83) per 1 million persons compared to a natural lifestyle with no active intervention. Our findings establish DeepMyopia as a reliable and efficient AI-based decision support system for guiding interventions in children.
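The AUCs reported above are areas under the ROC curve, which equal the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one (the Mann-Whitney formulation). A minimal rank-based sketch of that definition follows; the labels and scores are invented, and this is not the authors' evaluation code.

```python
def auc_score(labels, scores):
    """AUC as the probability a positive outranks a negative; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise wins of positives over negatives (bool + 0.5*bool).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical myopia-onset labels and model risk scores.
labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.6, 0.4, 0.6, 0.8]
print(auc_score(labels, scores))  # 5.5/6 ≈ 0.917
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which puts the reported internal 1-year AUC of 0.908 near the top of that scale.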
Affiliation(s)
- Ziyi Qi
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Center of Eye Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jun Chen
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Jason C Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Yang Wen
- Guangdong Provincial Key Laboratory of Intelligent Information Processing, College of Electronic and Information Engineering, Shenzhen University, Shenzhen, China
- Gengyou Huang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hua Zhong
- Department of Ophthalmology, First Affiliated Hospital of Kunming Medical University, Kunming, China
- Mingguang He
- Experimental Ophthalmology, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dan Zhu
- Department of Ophthalmology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Bo Qian
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jingjing Wang
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Chaoxu Qian
- Department of Ophthalmology, First Affiliated Hospital of Kunming Medical University, Kunming, China
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yanfei Zheng
- Department of Ophthalmology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China
- Jian Zhang
- Department of Ophthalmology, Beijing Friendship Hospital Pinggu Campus, Capital Medical University, Beijing, China
- Xianglong Yi
- Department of Ophthalmology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Zheyuan Wang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bo Zhang
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Chunyu Liu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Tianyu Cheng
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Center of Eye Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jun Li
- Affiliated Hospital of Yunnan University, Kunming, China
- Yan-Ting Pan
- Affiliated Hospital of Yunnan University, Kunming, China
- Xiaohu Ding
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ruilin Xiong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yan Wang
- Department of Ophthalmology, The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, Inner Mongolia, China
- Yan Zhou
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Dagan Feng
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Sichen Liu
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Linlin Du
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Jinliuxing Yang
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia
- Lei Bi
- Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jinman Kim
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Fangyao Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Yuzhou Zhang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Xiujuan Zhang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Haidong Zou
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China
- Marcus Ang
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- DUKE-NUS Ophthalmology and Visual Sciences, Singapore, Singapore
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong
- Chi Pui Pang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Kowloon, Hong Kong.
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China.
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Xiangui He
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China.
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Center of Eye Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China.
- Xun Xu
- Department of Clinical Research, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai Vision Health Center & Shanghai Children Myopia Institute, Shanghai, China.
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Center of Eye Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China.
5
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289 PMCID: PMC11246322 DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research in artificial intelligence (AI) for retinal fundus photographic images. We highlighted the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that the use of AI algorithms for the interpretation of retinal images, compared to clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders), and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). There has been a significant amount of clinical and imaging data for this research, leading to the potential incorporation of AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong
  - School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
  - Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
6
Yew SME, Chen Y, Goh JHL, Chen DZ, Chun Jin Tan M, Cheng CY, Teck Chang Koh V, Tham YC. Ocular image-based deep learning for predicting refractive error: A systematic review. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2024; 4:164-172. [PMID: 39114269 PMCID: PMC11305245 DOI: 10.1016/j.aopr.2024.06.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Revised: 06/20/2024] [Accepted: 06/24/2024] [Indexed: 08/10/2024]
Abstract
Background Uncorrected refractive error is a major cause of vision impairment worldwide, and its increasing prevalence necessitates effective screening and management strategies. Meanwhile, deep learning, a subset of artificial intelligence, has significantly advanced ophthalmological diagnostics by automating tasks that previously required extensive clinical expertise. Although recent studies have investigated the use of deep learning models for refractive power detection through various imaging techniques, a comprehensive systematic review on this topic has yet to be done. This review aims to summarise and evaluate the performance of ocular image-based deep learning models in predicting refractive errors. Main text We searched three databases (PubMed, Scopus, Web of Science) up to June 2023, focusing on deep learning applications in detecting refractive error from ocular images. We included studies that reported refractive error outcomes, regardless of publication year. We systematically extracted and evaluated the continuous outcomes (sphere, SE, cylinder) and categorical outcomes (myopia), ground truth measurements, ocular imaging modalities, deep learning models, and performance metrics, adhering to PRISMA guidelines. Nine studies were identified and categorised into three groups: retinal photo-based (n = 5), OCT-based (n = 1), and external ocular photo-based (n = 3). For high myopia prediction, retinal photo-based models achieved AUCs between 0.91 and 0.98, sensitivity between 85.10% and 97.80%, and specificity between 76.40% and 94.50%. For continuous prediction, retinal photo-based models reported MAEs ranging from 0.31 D to 2.19 D and R² between 0.05 and 0.96. The OCT-based model achieved an AUC of 0.79-0.81, sensitivity of 82.30% and 87.20%, and specificity of 61.70%-68.90%. For external ocular photo-based models, the AUC ranged from 0.91 to 0.99, sensitivity from 81.13% to 84.00%, specificity from 74.00% to 86.42%, MAE from 0.07 D to 0.18 D, and accuracy from 81.60% to 96.70%. The reported papers collectively showed promising performance, in particular the retinal photo-based and external eye photo-based DL models. Conclusions The integration of deep learning models and ocular imaging for refractive error detection appears promising. However, their real-world clinical utility in current screening workflows has yet to be evaluated and would require thoughtful consideration in design and implementation.
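The review's two continuous-outcome metrics (MAE in dioptres and R²) are easy to make concrete. Below is a minimal pure-Python sketch; the spherical-equivalent values are invented for illustration and are not from any reviewed study.

```python
def mae(y_true, y_pred):
    """Mean absolute error, here in dioptres (D)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

true_se = [-6.5, -3.0, -0.75, 0.25, 2.0]   # ground-truth SE (D), illustrative
pred_se = [-6.0, -3.4, -0.50, 0.00, 1.8]   # hypothetical model predictions

print(round(mae(true_se, pred_se), 3))  # 0.32
print(round(r2(true_se, pred_se), 3))   # 0.987
```

A low MAE can coexist with a low R² when the refractive range of the cohort is narrow, which is one reason the reviewed studies report both.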
Affiliation(s)
- Samantha Min Er Yew
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yibing Chen
  - School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University, Singapore
- David Ziyou Chen
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore
- Marcus Chun Jin Tan
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore
- Ching-Yu Cheng
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore
- Victor Teck Chang Koh
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Ophthalmology, National University Hospital, Singapore
- Yih-Chung Tham
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore
7
Yii F, Nguyen L, Strang N, Bernabeu MO, Tatham AJ, MacGillivray T, Dhillon B. Factors associated with pathologic myopia onset and progression: A systematic review and meta-analysis. Ophthalmic Physiol Opt 2024; 44:963-976. [PMID: 38563652 DOI: 10.1111/opo.13312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 03/19/2024] [Accepted: 03/20/2024] [Indexed: 04/04/2024]
Abstract
PURPOSE To synthesise evidence across studies on factors associated with pathologic myopia (PM) onset and progression based on the META-analysis for Pathologic Myopia (META-PM) classification framework. METHODS Findings from six longitudinal studies (5-18 years) were narratively synthesised and meta-analysed, using odds ratio (OR) as the common measure of association. All studies adjusted for baseline myopia, age and sex at a minimum. The quality of evidence was rated using the Grades of Recommendation, Assessment, Development and Evaluation framework. RESULTS Five out of six studies were conducted in Asia. There was inconclusive evidence of an independent effect (or lack thereof) of ethnicity and sex on PM onset/progression. The odds of PM onset increased with greater axial length (pooled OR: 2.03; 95% CI: 1.71-2.40; p < 0.001), older age (pooled OR: 1.07; 1.05-1.09; p < 0.001) and more negative spherical equivalent refraction, SER (OR: 0.77; 0.68-0.87; p < 0.001), all of which were supported by an acceptable level of evidence. Fundus tessellation was found to independently increase the odds of PM onset in a population-based study (OR: 3.02; 2.58-3.53; p < 0.001), although this was only supported by weak evidence. There was acceptable evidence that greater axial length (pooled OR: 1.23; 1.09-1.39; p < 0.001), more negative SER (pooled OR: 0.87; 0.83-0.92; p < 0.001) and higher education level (pooled OR: 3.17; 1.36-7.35; p < 0.01) increased the odds of PM progression. Other baseline factors found to be associated with PM progression but currently supported by weak evidence included age (pooled OR: 1.01), severity of myopic maculopathy (OR: 3.61), intraocular pressure (OR: 1.62) and hypertension (OR: 0.21). CONCLUSIONS Most PM risk/prognostic factors are not supported by an adequate evidence base at present (an indication that PM remains understudied). 
Current factors for which an acceptable level of evidence exists (limited in number) are unmodifiable in adults and lack personalised information. More longitudinal studies focusing on uncovering modifiable factors and imaging biomarkers are warranted.
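The pooled ORs above come from inverse-variance weighting of study estimates on the log scale. A minimal fixed-effect sketch follows; the per-study ORs and CIs are invented for illustration, and the meta-analysis itself may have used a random-effects variant.

```python
import math

def pool_odds_ratios(ors_with_ci):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each input is (OR, lower 95% CI, upper 95% CI); the standard error
    on the log scale is recovered as (ln U - ln L) / 3.92."""
    num = den = 0.0
    for or_, lo, hi in ors_with_ci:
        se = (math.log(hi) - math.log(lo)) / 3.92
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)

# Hypothetical per-study ORs for axial length vs PM onset (illustrative only)
studies = [(1.9, 1.5, 2.4), (2.2, 1.7, 2.8)]
print(round(pool_odds_ratios(studies), 2))  # 2.04
```

The pooled estimate necessarily lies between the individual study ORs, pulled toward the study with the tighter confidence interval.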
Affiliation(s)
- Fabian Yii
  - Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Linda Nguyen
  - MRC Human Genetics Unit, Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh, UK
- Niall Strang
  - Department of Vision Sciences, Glasgow Caledonian University, Glasgow, UK
- Miguel O Bernabeu
  - Centre for Medical Informatics, Usher Institute, The University of Edinburgh, Edinburgh, UK
  - The Bayes Centre, The University of Edinburgh, Edinburgh, UK
- Andrew J Tatham
  - Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
  - Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Tom MacGillivray
  - Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
- Baljean Dhillon
  - Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, The University of Edinburgh, Edinburgh, UK
  - Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
8
Yamashita T, Terasaki H, Asaoka R, Iwase A, Sakai H, Sakamoto T, Araie M. Age prediction using fundus parameters of normal eyes from the Kumejima population study. Graefes Arch Clin Exp Ophthalmol 2024:10.1007/s00417-024-06471-4. [PMID: 38819490 DOI: 10.1007/s00417-024-06471-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2024] [Revised: 03/16/2024] [Accepted: 03/22/2024] [Indexed: 06/01/2024] Open
Abstract
PURPOSE Artificial intelligence can predict the age of an individual using color fundus photographs (CFPs). This study aimed to investigate the accuracy of age prediction in the Kumejima study using fundus parameters and to clarify age-related changes in the fundus. METHODS We used nonmydriatic CFPs obtained from the Kumejima population study, including 1,646 right eyes of healthy participants with reliable fundus parameter measurements. The tessellation fundus index was calculated as R/(R + G + B) using the mean value of the red-green-blue intensity in eight locations around the optic disc and foveal region. The optic disc ovality ratio, papillomacular angle, and retinal vessel angle were quantified as previously described. Least absolute shrinkage and selection operator regression with leave-one-out cross-validation was used to predict age. The relationship between the actual and predicted ages was investigated using Pearson's correlation coefficient. RESULTS The mean age of included participants (834 males and 812 females) was 53.4 ± 10.1 years. The mean predicted age based on fundus parameters was 53.4 ± 8.9 years, with a mean absolute error of 3.64 years, and the correlation coefficient between actual and predicted age was 0.88 (p < 0.001). Older patients had greater red and green intensities and weaker blue intensities in the peripapillary area (p < 0.001). CONCLUSIONS Age could be predicted using the CFP parameters, and there were notable age-related changes in the peripapillary color intensity. The age-related changes in the fundus may aid the understanding of the mechanism of fundus diseases such as age-related macular degeneration.
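The tessellation fundus index described above, R/(R + G + B) over a region's mean channel intensities, is a one-liner. A sketch with a hypothetical mean-RGB triple (the values are invented, not from the Kumejima data):

```python
def tessellation_index(mean_rgb):
    """Tessellation fundus index as defined above: R / (R + G + B),
    computed from the mean red/green/blue intensities of a fundus region."""
    r, g, b = mean_rgb
    return r / (r + g + b)

# Hypothetical mean RGB intensities for one peripapillary location
region = (180.0, 90.0, 30.0)
print(round(tessellation_index(region), 3))  # 0.6
```

A perfectly grey region gives 1/3; redder (more tessellated) regions push the index above that.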
Affiliation(s)
- Takehiro Yamashita
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Hiroto Terasaki
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Ryo Asaoka
  - Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
- Taiji Sakamoto
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Makoto Araie
  - Department of Ophthalmology, Kanto Central Hospital, Tokyo, Japan
9
Xie Z, Zhang T, Kim S, Lu J, Zhang W, Lin CH, Wu MR, Davis A, Channa R, Giancardo L, Chen H, Wang S, Chen R, Zhi D. iGWAS: Image-based genome-wide association of self-supervised deep phenotyping of retina fundus images. PLoS Genet 2024; 20:e1011273. [PMID: 38728357 PMCID: PMC11111076 DOI: 10.1371/journal.pgen.1011273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Revised: 05/22/2024] [Accepted: 04/25/2024] [Indexed: 05/12/2024] Open
Abstract
Existing imaging genetics studies have been mostly limited in scope by using imaging-derived phenotypes defined by human experts. Here, leveraging new breakthroughs in self-supervised deep representation learning, we propose a new approach, image-based genome-wide association study (iGWAS), for identifying genetic factors associated with phenotypes discovered from medical images using contrastive learning. Using retinal fundus photos, our model extracts a 128-dimensional vector representing features of the retina as phenotypes. After training the model on 40,000 images from the EyePACS dataset, we generated phenotypes from 130,329 images of 65,629 British White participants in the UK Biobank. We conducted GWAS on these phenotypes and identified 14 loci with genome-wide significance (p<5×10-8 and intersection of hits from left and right eyes). We also did GWAS on the retina color, the average color of the center region of the retinal fundus photos. The GWAS of retina colors identified 34 loci, 7 are overlapping with GWAS of raw image phenotype. Our results establish the feasibility of this new framework of genomic study based on self-supervised phenotyping of medical images.
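The "retina color" phenotype above, the average color of the central region of the photo, can be sketched in pure Python. The crop fraction and the toy image below are assumptions for illustration, not the study's preprocessing.

```python
def center_mean_color(img, frac=0.5):
    """Mean RGB over the central crop of an image given as a nested
    list [row][col] of (r, g, b) tuples; frac is the crop side fraction.
    A toy stand-in for the 'retina color' phenotype described above."""
    h, w = len(img), len(img[0])
    h0, h1 = int(h * (1 - frac) / 2), int(h * (1 + frac) / 2)
    w0, w1 = int(w * (1 - frac) / 2), int(w * (1 + frac) / 2)
    px = [img[y][x] for y in range(h0, h1) for x in range(w0, w1)]
    n = len(px)
    return tuple(sum(p[c] for p in px) / n for c in range(3))

# 4x4 toy "fundus": the central 2x2 patch is redder than the periphery
img = [[(200, 80, 40) if 1 <= y <= 2 and 1 <= x <= 2 else (50, 20, 10)
        for x in range(4)] for y in range(4)]
print(center_mean_color(img))  # (200.0, 80.0, 40.0)
```

Unlike the 128-dimensional contrastive embedding, this hand-crafted phenotype is directly interpretable, which is why the two GWAS are compared in the abstract.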
Affiliation(s)
- Ziqian Xie
  - Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas, United States of America
  - School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
- Tao Zhang
  - Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas, United States of America
- Sangbae Kim
  - Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas, United States of America
- Jiaxiong Lu
  - Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas, United States of America
- Wanheng Zhang
  - School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
- Cheng-Hui Lin
  - Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States of America
- Man-Ru Wu
  - Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States of America
- Alexander Davis
  - Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States of America
- Roomasa Channa
  - Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States of America
- Luca Giancardo
  - School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
- Han Chen
  - School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
  - School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
  - Human Genetics Center, Department of Epidemiology, Human Genetics and Environmental Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
- Sui Wang
  - Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States of America
- Rui Chen
  - Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas, United States of America
- Degui Zhi
  - School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, United States of America
10
Yii F, Bernabeu MO, Dhillon B, Strang N, MacGillivray T. Retinal Changes From Hyperopia to Myopia: Not All Diopters Are Created Equal. Invest Ophthalmol Vis Sci 2024; 65:25. [PMID: 38758640 PMCID: PMC11107950 DOI: 10.1167/iovs.65.5.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Accepted: 04/30/2024] [Indexed: 05/19/2024] Open
Abstract
Purpose To quantitatively characterize retinal changes across different quantiles of refractive error in 34,414 normal eyes of 23,064 healthy adults in the UK Biobank. Methods Twelve optic disc (OD), foveal and vascular parameters were derived from color fundus photographs, correcting for ocular magnification as appropriate. Quantile regression was used to test the independent associations between these parameters and spherical equivalent refraction (SER) across 34 refractive quantiles (high hyperopia to high myopia), controlling for age, sex, and corneal radius. Results More negative SER was nonlinearly associated with greater Euclidean (largely horizontal) OD-fovea distance, larger OD, less circular OD, more obliquely orientated OD (superior pole tilted towards the fovea), brighter fovea, lower vascular complexity, less tortuous vessels, more concave (straightened out towards the fovea) papillomacular arterial/venous arcade and wider central retinal arterioles/venules. In myopia, these parameters varied more strongly with SER as myopia increased. For example, while every standard deviation (SD) decrease in vascular complexity was associated with 0.63 D (right eye: 95% confidence interval [CI], 0.58-0.68) to 0.68 D (left eye: 95% CI, 0.63-0.73) higher myopia in the quantile corresponding to -0.60 D, it was associated with 1.61 D (right eye: 95% CI, 1.40-1.82) to 1.70 D (left eye: 95% CI, 1.56-1.84) higher myopia in the most myopic quantile. OD-fovea angle (degree of vertical separation between OD and fovea) was found to vary linearly with SER, but the magnitude was of little practical importance (less than 0.10 D variation per SD change in angle in almost all refractive quantiles) compared with the changes in OD-fovea distance. Conclusions Several interrelated retinal changes indicative of an increasing (nonconstant) rate of mechanical stretching are evident at the posterior pole as myopia increases.
These changes also suggest that the posterior pole stretches predominantly in the temporal horizontal direction.
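Quantile regression, used above to model SER separately in each refractive quantile, minimises the pinball (check) loss rather than squared error. A minimal sketch of that loss with invented SER values (not UK Biobank data):

```python
def pinball_loss(y_true, y_pred, q):
    """Average pinball (check) loss minimised by quantile regression
    at quantile q; over- and under-prediction are weighted asymmetrically."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        diff = t - p
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)

ser = [-8.0, -4.0, -1.0, 0.5]    # illustrative SER values (D)
fit = [-6.0, -4.0, -2.0, 0.0]    # hypothetical fitted quantile values

print(pinball_loss(ser, fit, 0.5))  # 0.4375 (half the MAE: median fit)
print(pinball_loss(ser, fit, 0.9))
```

At q = 0.5 the loss reduces to half the mean absolute error, so the fit tracks the conditional median; extreme quantiles such as the most myopic one penalise errors on one side far more than the other.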
Affiliation(s)
- Fabian Yii
  - Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, University of Edinburgh, Edinburgh, United Kingdom
- Miguel O. Bernabeu
  - Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, United Kingdom
  - The Bayes Centre, University of Edinburgh, Edinburgh, United Kingdom
- Baljean Dhillon
  - Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, University of Edinburgh, Edinburgh, United Kingdom
  - Princess Alexandra Eye Pavilion, Edinburgh, United Kingdom
- Niall Strang
  - Department of Vision Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
- Tom MacGillivray
  - Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
  - Curle Ophthalmology Laboratory, Institute for Regeneration and Repair, University of Edinburgh, Edinburgh, United Kingdom
11
Arai Y, Takahashi H, Takayama T, Yousefi S, Tampo H, Yamashita T, Hasegawa T, Ohgami T, Sonoda S, Tanaka Y, Inoda S, Sakamoto S, Kawashima H, Yanagi Y. Predicting central choroidal thickness from colour fundus photographs using deep learning. PLoS One 2024; 19:e0301467. [PMID: 38551957 PMCID: PMC10980193 DOI: 10.1371/journal.pone.0301467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
The estimation of central choroidal thickness from colour fundus images can improve disease detection. We developed a deep learning method to estimate central choroidal thickness from colour fundus images at a single institution, using independent datasets from other institutions for validation. A total of 2,548 images from patients who underwent same-day optical coherence tomography examination and colour fundus imaging at the outpatient clinic of Jichi Medical University Hospital were retrospectively analysed. For validation, 393 images from three institutions were used. Patients with signs of subretinal haemorrhage, central serous detachment, retinal pigment epithelial detachment, and/or macular oedema were excluded. All other fundus photographs with a visible pigment epithelium were included. The main outcome measure was the standard deviation of 10-fold cross-validation. Validation was performed using the original algorithm and the algorithm after learning based on images from all institutions. The standard deviation of 10-fold cross-validation was 73 μm. The standard deviation for other institutions was reduced by re-learning. We describe the first application and validation of a deep learning approach for the estimation of central choroidal thickness from fundus images. This algorithm is expected to help graders judge choroidal thickening and thinning.
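The headline outcome above is the standard deviation over 10-fold cross-validation. A sketch of a contiguous (unshuffled) k-fold index split for the 2,548-image dataset size; the authors' actual fold assignment is not described in the abstract, so this is purely illustrative:

```python
def kfold_indices(n, k):
    """Split n sample indices into k contiguous, near-equal folds,
    a toy version of the 10-fold cross-validation described above."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(2548, 10)
print(len(folds), [len(f) for f in folds][:3])  # 10 [255, 255, 255]
```

Each image appears in exactly one held-out fold, so the per-fold errors whose spread the paper reports are computed on disjoint test sets.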
Affiliation(s)
- Yusuke Arai
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Hidenori Takahashi
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Takuya Takayama
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Siamak Yousefi
  - Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, Tennessee, United States of America
  - Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, Tennessee, United States of America
- Hironobu Tampo
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Tetsuya Hasegawa
  - Department of Ophthalmology, Saitama Medical Center, Jichi Medical University, Saitama, Japan
- Tomohiro Ohgami
  - Department of Ophthalmology, Ibaraki Seinan Medical Center, Ibaraki, Japan
- Shozo Sonoda
  - Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Yoshiaki Tanaka
  - Department of Ophthalmology, Saitama Medical Center, Jichi Medical University, Saitama, Japan
- Satoru Inoda
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Shinichi Sakamoto
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Hidetoshi Kawashima
  - Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Yasuo Yanagi
  - Department of Ophthalmology, Yokohama City University, Kanagawa, Japan
  - Medical Retina, Singapore Eye Research Institute, Singapore, Singapore
12
Feng L, Zhang Y, Wei W, Qiu H, Shi M. Applying deep learning to recognize the properties of vitreous opacity in ophthalmic ultrasound images. Eye (Lond) 2024; 38:380-385. [PMID: 37596401 PMCID: PMC10810903 DOI: 10.1038/s41433-023-02705-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 07/20/2023] [Accepted: 08/09/2023] [Indexed: 08/20/2023] Open
Abstract
BACKGROUND To explore the feasibility of artificial intelligence technology based on deep learning to automatically recognize the properties of vitreous opacities in ophthalmic ultrasound images. METHODS A total of 2000 greyscale Doppler ultrasound images containing non-pathological eyes and three typical vitreous opacities, confirmed as physiological vitreous opacity (VO), asteroid hyalosis (AH), and vitreous haemorrhage (VH), were selected and labelled for each lesion type. Five residual network (ResNet) and two GoogLeNet models were trained to recognize vitreous lesions. Seventy-five percent of the images were randomly selected as the training set, and the remaining 25% as the test set. The accuracy and parameter counts were recorded and compared among these seven deep learning (DL) models. The precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC) values for recognizing vitreous lesions were calculated for the most accurate DL model. RESULTS The seven DL models differed significantly in accuracy and parameter count. GoogLeNet Inception V1 achieved the highest accuracy (95.5%) with the fewest parameters (10,315,580) in vitreous lesion recognition. It achieved precision values of 0.94, 0.94, 0.96, and 0.96, recall values of 0.94, 0.93, 0.97, and 0.98, and F1 scores of 0.94, 0.93, 0.96, and 0.97 for normal, VO, AH, and VH recognition, respectively. The AUC values for these four vitreous lesion types were 0.99, 1.0, 0.99, and 0.99, respectively. CONCLUSIONS GoogLeNet Inception V1 has shown promising results in ophthalmic ultrasound image recognition. With increasing ultrasound image data, a wide variety of clinically relevant information on eye diseases could be detected automatically by artificial intelligence technology based on deep learning.
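The per-class precision, recall, and F1 values reported above follow directly from confusion counts. A sketch with hypothetical counts for one class (not the paper's actual confusion matrix):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall, and F1 from confusion counts,
    the metrics reported above for each vitreous lesion type."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one lesion class (illustrative only)
p, r, f = precision_recall_f1(tp=94, fp=6, fn=6)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.94 0.94 0.94
```

F1 is the harmonic mean of precision and recall, so it equals both only when they coincide, as in this toy case.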
Affiliation(s)
- Li Feng
  - Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Wei Wei
  - Hebei Eye Hospital, Xingtai, China
- Hui Qiu
  - Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Mingyu Shi
  - Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
13
Choi JY, Kim H, Kim JK, Lee IS, Ryu IH, Kim JS, Yoo TK. Deep learning prediction of steep and flat corneal curvature using fundus photography in post-COVID telemedicine era. Med Biol Eng Comput 2024; 62:449-463. [PMID: 37889431 DOI: 10.1007/s11517-023-02952-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 10/14/2023] [Indexed: 10/28/2023]
Abstract
Recently, fundus photography (FP) is being increasingly used. Corneal curvature is an essential factor in refractive errors and is associated with several pathological corneal conditions. As FP-based examination systems have already been widely distributed, it would be helpful for telemedicine to extract information such as corneal curvature using FP. This study aims to develop a deep learning model based on FP for corneal curvature prediction by categorizing corneas into steep, regular, and flat groups. The EfficientNetB0 architecture with transfer learning was used to learn FP patterns to predict flat, regular, and steep corneas. In validation, the model achieved a multiclass accuracy of 0.727, a Matthews correlation coefficient of 0.519, and an unweighted Cohen's κ of 0.590. The areas under the receiver operating characteristic curves for binary prediction of flat and steep corneas were 0.863 and 0.848, respectively. The optic nerve and its peripheral areas were the main focus of the model. The developed algorithm shows that FP can potentially be used as an imaging modality to estimate corneal curvature in the post-COVID-19 era, whereby patients may benefit from the detection of abnormal corneal curvatures using FP in the telemedicine setting.
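Unweighted Cohen's κ, one of the validation metrics above, compares observed agreement with the agreement expected by chance. A pure-Python sketch on invented flat/regular/steep labels (not the study's data):

```python
def cohens_kappa(y_true, y_pred):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from marginal rates."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy 3-class example (flat / regular / steep); labels are illustrative
truth = ["flat", "flat", "regular", "regular", "steep", "steep"]
preds = ["flat", "regular", "regular", "regular", "steep", "flat"]
print(round(cohens_kappa(truth, preds), 3))  # 0.5
```

Here raw accuracy is 4/6 but κ is only 0.5, showing why κ is preferred over accuracy when class frequencies make chance agreement likely.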
Affiliation(s)
- Joon Yul Choi
  - Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Jin Kuk Kim
  - Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- In Sik Lee
  - Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Ik Hee Ryu
  - Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
  - Research and Development Department, VISUWORKS, Seoul, South Korea
- Jung Soo Kim
  - Research and Development Department, VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
  - Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
  - Research and Development Department, VISUWORKS, Seoul, South Korea
14
Dai L, Sheng B, Chen T, Wu Q, Liu R, Cai C, Wu L, Yang D, Hamzah H, Liu Y, Wang X, Guan Z, Yu S, Li T, Tang Z, Ran A, Che H, Chen H, Zheng Y, Shu J, Huang S, Wu C, Lin S, Liu D, Li J, Wang Z, Meng Z, Shen J, Hou X, Deng C, Ruan L, Lu F, Chee M, Quek TC, Srinivasan R, Raman R, Sun X, Wang YX, Wu J, Jin H, Dai R, Shen D, Yang X, Guo M, Zhang C, Cheung CY, Tan GSW, Tham YC, Cheng CY, Li H, Wong TY, Jia W. A deep learning system for predicting time to progression of diabetic retinopathy. Nat Med 2024; 30:584-594. [PMID: 38177850 PMCID: PMC10878973 DOI: 10.1038/s41591-023-02702-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 11/10/2023] [Indexed: 01/06/2024]
Abstract
Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. Integration with the clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months; the percentages of participants recommended to be screened at 1, 2, 3, 4 and 5 years were 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while delayed detection of progression to vision-threatening DR occurred in only 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.
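The concordance index used to evaluate the time-to-progression model measures how often the model ranks pairs of patients in the correct order of progression. A minimal sketch of Harrell's C-index on hypothetical right-censored data (times, event flags, and risk scores are invented for illustration):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored time-to-event data.

    times: observed follow-up times; events: 1 if progression was observed,
    0 if censored; risks: model risk scores (higher = earlier progression).
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i is known to progress first.
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties get half credit
    return concordant / permissible

# Hypothetical cohort: years to progression, event flags, model risk scores.
t = [1.0, 2.0, 3.0, 4.0, 5.0]
e = [1,   1,   0,   1,   0]
r = [0.9, 0.4, 0.7, 0.5, 0.1]
print(round(concordance_index(t, e, r), 3))
```

A C-index of 0.5 corresponds to random ranking and 1.0 to a perfect ordering; the 0.754-0.846 range above sits well into the useful region.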
Grants
- the National Key Research and Development Program of China (2022YFA1004804), the Shanghai Municipal Key Clinical Specialty, Shanghai Research Center for Endocrine and Metabolic Diseases (2022ZZ01002), and the Chinese Academy of Engineering (2022-XY-08)
- the General Program of NSFC (62272298), the National Key Research and Development Program of China (2022YFC2407000), the Interdisciplinary Program of Shanghai Jiao Tong University (YG2023LC11 and YG2022ZD007), National Natural Science Foundation of China (62272298 and 62077037), the College-level Project Fund of Shanghai Jiao Tong University Affiliated Sixth People’s Hospital (ynlc201909), and the Medical-industrial Cross-fund of Shanghai Jiao Tong University (YG2022QN089)
- the Clinical Special Program of Shanghai Municipal Health Commission (20224044) and Three-year action plan to strengthen the construction of public health system in Shanghai (GWVI-11.1-28)
- the National Natural Science Foundation of China (82100879)
- the National Key Research and Development Program of China (2022YFA1004804), Excellent Young Scientists Fund of NSFC (82022012), General Fund of NSFC (81870598), Innovative research team of high-level local universities in Shanghai (SHSMU-ZDCX20212700)
- the National Key R & D Program of China (2022YFC2502800) and National Natural Science Fund of China (8238810007)
Affiliation(s)
- Ling Dai (1,2); Bin Sheng (1,2); Tingli Chen (3); Qiang Wu (4); Ruhan Liu (1,2); Chun Cai (1); Liang Wu (1); Dawei Yang (5); Haslina Hamzah (6); Yuexing Liu (1); Xiangning Wang (4); Zhouyu Guan (1); Shujie Yu (1); Tingyao Li (1,2); Ziqi Tang (5); Anran Ran (5); Haoxuan Che (7); Hao Chen (7,8); Yingfeng Zheng (9); Jia Shu (1,2); Shan Huang (1,2); Chan Wu (10); Shiqun Lin (10); Dan Liu (1); Jiajia Li (1,2); Zheyuan Wang (1,2); Ziyao Meng (1,2); Jie Shen (11); Xuhong Hou (1); Chenxin Deng (12); Lei Ruan (12); Feng Lu (13); Miaoli Chee (6); Ten Cheer Quek (6); Ramyaa Srinivasan (14); Rajiv Raman (14); Xiaodong Sun (15); Ya Xing Wang (16); Jiarui Wu (1,17); Hai Jin (13); Rongping Dai (10); Dinggang Shen (18,19,20); Xiaokang Yang (2); Minyi Guo (1); Cuntai Zhang (12); Carol Y Cheung (5); Gavin Siew Wei Tan (6,21); Yih-Chung Tham (6,22); Ching-Yu Cheng (6,21,22); Huating Li (1); Tien Yin Wong (6,23); Weiping Jia (1)
1. Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
2. MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
3. Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
4. Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
5. Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
6. Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
7. Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
8. Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
9. State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
10. Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
11. Medical Records and Statistics Office, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
12. Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
13. National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
14. Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
15. Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
16. Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Science Key Laboratory, Beijing, China
17. Center for Excellence in Molecular Science, Chinese Academy of Sciences, Shanghai, China
18. School of Biomedical Engineering, Shanghai Tech University, Shanghai, China
19. Shanghai United Imaging Intelligence, Shanghai, China
20. Shanghai Clinical Research and Trial Center, Shanghai, China
21. Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
22. Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
23. Tsinghua Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
15
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280 PMCID: PMC10212230 DOI: 10.1007/s00417-023-06101-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 03/23/2023] [Accepted: 05/06/2023] [Indexed: 05/27/2023] Open
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a burden worsened by home quarantine during the ongoing COVID-19 pandemic. The use of artificial intelligence (AI) in ophthalmology is thriving, yet its application to myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation and determine the upper limit of performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for analysis. In this review, we comprehensively survey the current application status of AI in myopia, with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
- Juzhao Zhang (1); Haidong Zou (1,2,3,4)
1. Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
2. Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China
3. National Clinical Research Center for Eye Diseases, Shanghai, China
4. Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
16
Jain R, Yoo TK, Ryu IH, Song J, Kolte N, Nariani A. Deep Transfer Learning for Ethnically Distinct Populations: Prediction of Refractive Error Using Optical Coherence Tomography. Ophthalmol Ther 2024; 13:305-319. [PMID: 37955835 PMCID: PMC10776546 DOI: 10.1007/s40123-023-00842-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 10/20/2023] [Indexed: 11/14/2023] Open
Abstract
INTRODUCTION: The mismatch between training and testing data distribution causes significant degradation in the deep learning model performance in multi-ethnic scenarios. To reduce the performance differences between ethnic groups and image domains, we built a deep transfer learning model with adaptation training to predict uncorrected refractive errors using posterior segment optical coherence tomography (OCT) images of the macula and optic nerve. METHODS: Observational, cross-sectional, multicenter study design. We pre-trained a deep learning model on OCT images from the B&VIIT Eye Center (Seoul, South Korea) (N = 2602 eyes of 1301 patients). OCT images from Poona Eye Care (Pune, India) were chronologically sorted into adaptation training data (N = 60 eyes of 30 patients) for transfer learning and test data (N = 142 eyes of 71 patients) for validation. Deep learning models were trained to predict spherical equivalent (SE) and mean keratometry (K) values via transfer learning for domain adaptation. RESULTS: Both adaptation models for SE and K were significantly better than those without adaptation (P < 0.001). In myopia/hyperopia classification, the model trained on circular optic disc OCT images yielded the best performance (accuracy = 74.7%). It also performed best in estimating SE with the lowest mean absolute error (MAE) of 1.58 D. For classifying the degree of corneal curvature, the optic nerve vertical algorithm performed best (accuracy = 65.7%). The optic nerve horizontal model achieved the lowest MAE (1.85 D) when predicting the K value. Saliency maps frequently highlighted the retinal nerve fiber layers. CONCLUSIONS: Adaptation training via transfer learning is an effective technique for estimating refractive errors and K values using macular and optic nerve OCT images from ethnically heterogeneous populations. Further studies with larger sample sizes and various data sources are needed to confirm the feasibility of the proposed algorithm.
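The core idea of adaptation training (keep a pretrained feature extractor frozen, refit only the prediction head on a small target-domain sample) can be illustrated on synthetic data. Everything below is a hypothetical stand-in, not the authors' model: the "backbone" is a frozen random projection, the domain shift is a simple offset, and sample sizes echo but do not reproduce the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone standing in for the OCT feature extractor.
W = rng.normal(size=(8, 6))
def features(x):
    f = np.tanh(x @ W)
    return np.hstack([f, np.ones((len(f), 1))])  # append intercept column

w_true = rng.normal(size=8)

def make_domain(n, bias):
    # Target domain differs from source by a systematic offset in the label.
    x = rng.normal(size=(n, 8))
    y = x @ w_true + bias + rng.normal(scale=0.1, size=n)
    return x, y

xs, ys = make_domain(500, 0.0)   # source (pretraining) domain
xa, ya = make_domain(30, 2.0)    # small target-domain adaptation set
xt, yt = make_domain(200, 2.0)   # target-domain test set

# Fit a linear head on source; then refit the head only on the adaptation set.
head_src, *_ = np.linalg.lstsq(features(xs), ys, rcond=None)
head_ada, *_ = np.linalg.lstsq(features(xa), ya, rcond=None)

mae_src = np.abs(features(xt) @ head_src - yt).mean()
mae_ada = np.abs(features(xt) @ head_ada - yt).mean()
print(f"target MAE without adaptation: {mae_src:.2f}; with adaptation: {mae_ada:.2f}")
```

Even with only 30 adaptation samples, refitting the head corrects the systematic target-domain offset that the source-trained head cannot see, which is the same intuition behind adapting the model across clinics and ethnic groups.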
Affiliation(s)
- Rishabh Jain (1); Tae Keun Yoo (2,3); Ik Hee Ryu (2,3); Joanna Song (3); Ashiyana Nariani (4)
1. Department of Biomedical Engineering, Duke University, Durham, NC, USA
2. Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, Republic of Korea
3. Research and Development Department, VISUWORKS, Seoul, South Korea
4. Department of Ophthalmology, King Edward Memorial Hospital and Seth Gordhandas Sunderdas Medical College, Mumbai, Maharashtra, India
17
Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023; 18:e0295123. [PMID: 38033010 PMCID: PMC10688721 DOI: 10.1371/journal.pone.0295123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 11/14/2023] [Indexed: 12/02/2023] Open
Abstract
PURPOSE: In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters appear across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS: This prospective observational longitudinal study investigated 109 right eyes of elementary school students over 4 years (age, 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as red/(red + green + blue) (R/[R+G+B]) using the mean value of red-green-blue intensity at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to the data in our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS: The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy rate significantly increased with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years, and 73.1% at 11.5 years. CONCLUSIONS: The accuracy of sex discrimination by fundus photography improved over the 3-year cohort study of elementary school students.
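An L2-regularized binomial logistic regression of the kind used here can be fit with plain gradient descent. The sketch below is a minimal illustration on synthetic stand-ins for the fundus parameters (the 5 features, sample size, and regularization strength are all hypothetical, not the study's 54-parameter setup):

```python
import numpy as np

def fit_logreg_l2(X, y, lam=1.0, lr=0.1, steps=2000):
    # Binomial logistic regression with an L2 penalty on the weights,
    # fit by batch gradient descent; the intercept is left unpenalized.
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted P(class 1)
        grad_w = X.T @ (p - y) / n + lam * w / n  # penalty shrinks w toward 0
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(1)
# Hypothetical data: one weakly informative feature separates the classes.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

w, b = fit_logreg_l2(X, y, lam=5.0)
acc = (((X @ w + b) > 0).astype(float) == y).mean()
print(round(acc, 3))
```

The L2 penalty matters when, as here, the feature count (54 parameters) is large relative to the number of eyes per grade: shrinking the weights keeps the per-grade models from overfitting.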
Affiliation(s)
- Takehiro Yamashita (1); Ryo Asaoka (2,3,4,5); Hiroto Terasaki (1); Naoya Yoshihara (1); Naoko Kakiuchi (1); Taiji Sakamoto (1)
1. Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
2. Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
3. School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan
4. Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan
5. The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
18
Zhou L, Jiang H, Li G, Ding J, Lv C, Duan M, Wang W, Chen K, Shen N, Huang X. Point-wise spatial network for identifying carcinoma at the upper digestive and respiratory tract. BMC Med Imaging 2023; 23:140. [PMID: 37749498 PMCID: PMC10521533 DOI: 10.1186/s12880-023-01076-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 08/07/2023] [Indexed: 09/27/2023] Open
Abstract
PROBLEM: Artificial intelligence has been widely investigated for diagnosis and treatment strategy design, with some models proposed for detecting oral pharyngeal, nasopharyngeal, or laryngeal carcinoma. However, no comprehensive model has been established for these regions. AIM: Our hypothesis was that a common pattern in the cancerous appearance of these regions could be recognized and integrated into a single model, thus improving the efficacy of deep learning models. METHODS: We utilized a point-wise spatial attention network model to perform semantic segmentation in these regions. RESULTS: Our study demonstrated an excellent outcome, with an average mIoU of 86.3% and an average pixel accuracy of 96.3%. CONCLUSION: The research confirmed that the mucosa of oral pharyngeal, nasopharyngeal, and laryngeal regions may share a common appearance, including the appearance of tumors, which can be recognized by a single artificial intelligence model. Therefore, a deep learning model could be constructed to effectively recognize these tumors.
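The mIoU and pixel accuracy reported above are straightforward to compute from a pair of segmentation masks. A minimal sketch on tiny hypothetical masks (0 = background, 1 = tumor; the masks are invented for illustration):

```python
import numpy as np

def miou_and_pixel_acc(y_true, y_pred, num_classes):
    # Mean intersection-over-union across classes, plus overall pixel
    # accuracy, for one ground-truth/prediction mask pair.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union:                       # skip classes absent from both masks
            ious.append(inter / union)
    pixel_acc = (y_true == y_pred).mean()
    return sum(ious) / len(ious), pixel_acc

# Tiny hypothetical masks: 0 = background, 1 = tumor.
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]])
pred  = np.array([[0, 0, 0, 1],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]])
miou, acc = miou_and_pixel_acc(truth, pred, 2)
print(round(miou, 3), round(acc, 3))
```

Note that pixel accuracy can stay high even when a small foreground class is segmented poorly, which is why segmentation papers report mIoU alongside it.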
Affiliation(s)
- Lei Zhou (1); Huaili Jiang (1); Guangyao Li (1); Jiaye Ding (1); Cuicui Lv (1); Maoli Duan (2,3); Wenfeng Wang (4); Kongyang Chen (4,5); Na Shen (1); Xinsheng Huang (1)
1. Department of Otorhinolaryngology-Head and Neck Surgery, Zhongshan Hospital Affiliated to Fudan University, Xuhui District, 180 Fenglin Road, Shanghai, 200032, P. R. China
2. Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
3. Department of Otolaryngology Head and Neck Surgery, Karolinska University Hospital, 171 76, Stockholm, Sweden
4. Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou, 510006, P. R. China
5. Pazhou Lab, Guangzhou, 510330, P. R. China
|
19
|
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708 DOI: 10.1089/tmj.2022.0357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). Standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by 5 ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. 10-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961). The final model exhibited similar test set performance with an AUC of 0.924. Conclusions: This model accurately assesses the gradeability of nonmydriatic retinal images. It could be used to increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
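The cross-validated AUC reported above can be sketched in outline; the code below uses synthetic data, a simple linear (nearest-centroid) scorer standing in for the Inception V3 classifier, and a rank-sum AUC. Everything here is illustrative, not the study's pipeline:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation, no ties assumed."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # stand-in image features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = "ungradable"

# 10-fold cross-validation of a nearest-centroid scorer
idx = rng.permutation(200)
folds = np.array_split(idx, 10)
aucs = []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    mu1 = X[train][y[train] == 1].mean(axis=0)
    mu0 = X[train][y[train] == 0].mean(axis=0)
    scores = X[test] @ (mu1 - mu0)   # higher score = more likely positive
    aucs.append(auc_score(y[test], scores))

mean_auc, sd_auc = float(np.mean(aucs)), float(np.std(aucs))
```

The per-fold AUCs, their mean, and their standard deviation correspond to the summary statistics quoted in the Results section.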
Affiliation(s)
- John M Bryan, Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar, Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza, Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA

20
Linde G, Chalakkal R, Zhou L, Huang JL, O’Keeffe B, Shah D, Davidson S, Hong SC. Automatic Refractive Error Estimation Using Deep Learning-Based Analysis of Red Reflex Images. Diagnostics (Basel) 2023; 13:2810. [PMID: 37685347 PMCID: PMC10486607 DOI: 10.3390/diagnostics13172810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 08/23/2023] [Accepted: 08/26/2023] [Indexed: 09/10/2023] Open
Abstract
Purpose/Background: We evaluate how a deep learning model can be applied to extract refractive error metrics from pupillary red reflex images taken by a low-cost handheld fundus camera. This could potentially provide a rapid and economical vision-screening method, allowing for early intervention to prevent myopic progression and reduce the socioeconomic burden associated with vision impairment in the later stages of life. Methods: Infrared and color images of pupillary crescents were extracted from eccentric photorefraction images of participants from Choithram Hospital in India and Dargaville Medical Center in New Zealand. The pre-processed images were then used to train different convolutional neural networks to predict refractive error in terms of spherical power and cylindrical power metrics. Results: The best-performing trained model achieved an overall accuracy of 75% for predicting spherical power using infrared images and a multiclass classifier. Conclusions: Although the model's performance was not superior, the proposed method demonstrated the feasibility of using red reflex images to estimate refractive error. Such an approach has not been attempted before and can help guide researchers, especially as the future of eye care moves towards highly portable and smartphone-based devices.
Affiliation(s)
- Lydia Zhou, University of Sydney, Sydney, NSW 2050, Australia
- Sheng Chiong Hong, Public Health Unit, Dunedin Hospital, Te Whatu Ora Southern, Dunedin 9016, New Zealand

21
Li Y, Zhao H, Fan Y, Hu J, Li S, Wang K, Zhao M. A machine learning-based algorithm for estimating the original corneal curvature based on corneal topography after orthokeratology. Cont Lens Anterior Eye 2023; 46:101862. [PMID: 37208285 DOI: 10.1016/j.clae.2023.101862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 03/06/2023] [Accepted: 05/10/2023] [Indexed: 05/21/2023]
Abstract
OBJECTIVE To estimate the original corneal curvature after orthokeratology by applying a machine learning-based algorithm. METHODS A total of 497 right eyes of 497 patients undergoing overnight orthokeratology for myopia for more than 1 year were enrolled in this retrospective study. All patients were fitted with Paragon CRT lenses. Corneal topography was obtained by a Sirius corneal topography system (CSO, Italy). Original flat K (K1) and original steep K (K2) were set as the targets of calculation. The importance of each variable was explored by Fisher's criterion. Two machine learning models were established to allow adaptation to more situations. Bagging Tree, Gaussian process, support vector machine (SVM), and decision tree were used for prediction. RESULTS K2 after one year of orthokeratology (K2after) was most important in the prediction of K1 and K2. Bagging Tree performed best in both models 1 and 2 for K1 prediction (R = 0.812, RMSE = 0.855 in model 1 and R = 0.812, RMSE = 0.858 in model 2) and K2 prediction (R = 0.831, RMSE = 0.898 in model 1 and R = 0.837, RMSE = 0.888 in model 2). In model 1, the difference was 0.006 ± 1.34 D (p = 0.93) between the predictive value of K1 and the true value of K1 (K1before) and was 0.005 ± 1.51 D (p = 0.94) between the predictive value of K2 and the true value of K2 (K2before). In model 2, the difference was -0.056 ± 1.75 D (p = 0.59) between the predictive value of K1 and K1before and was 0.017 ± 2.01 D (p = 0.88) between the predictive value of K2 and K2before. CONCLUSION Bagging Tree performed best in predicting K1 and K2. Machine learning can be applied to predict the corneal curvature for those who cannot provide the initial corneal parameters in the outpatient clinic, providing a relatively certain degree of reference for the refitting of ortho-k lenses.
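The bagged-regression-with-R-and-RMSE evaluation described above can be sketched compactly. The data below are synthetic, the single K2after predictor is a simplified stand-in for the study's topography variables, and bootstrap-aggregated least-squares fits stand in for the bagged trees:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
k2_after = rng.normal(41.5, 1.2, n)                     # steep K after 1 year (D), synthetic
k1_before = k2_after + 0.8 + 0.3 * rng.normal(size=n)   # synthetic target K1 (D)

X = np.column_stack([np.ones(n), k2_after])             # intercept + predictor
train, test = np.arange(0, 300), np.arange(300, n)

# Bootstrap-aggregated ("bagged") least-squares fits, averaged at prediction time
preds = []
for _ in range(50):
    boot = rng.choice(train, size=len(train), replace=True)
    coef, *_ = np.linalg.lstsq(X[boot], k1_before[boot], rcond=None)
    preds.append(X[test] @ coef)
pred = np.mean(preds, axis=0)

r = float(np.corrcoef(pred, k1_before[test])[0, 1])          # Pearson R
rmse = float(np.sqrt(np.mean((pred - k1_before[test]) ** 2)))  # RMSE in diopters
```

R and RMSE on a held-out set, computed this way, are the two scores the abstract reports for each candidate model.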
Affiliation(s)
- Yujing Li, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Heng Zhao, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Institute of Medical Technology, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Yuzhuo Fan, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Institute of Medical Technology, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Jie Hu, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Siying Li, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Kai Wang, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Institute of Medical Technology, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China
- Mingwei Zhao, Department of Ophthalmology & Clinical Centre of Optometry, Peking University People's Hospital, Beijing, China; Eye Disease and Optometry Institute, Peking University People's Hospital, Beijing, China; College of Optometry, Peking University Health Science Centre, Beijing, China; Institute of Medical Technology, Peking University Health Science Centre, Beijing, China; Beijing Key Laboratory of the Diagnosis and Therapy of Retinal and Choroid Diseases, Beijing, China

22
Wang J, Wang J, Chen D, Wu X, Xu Z, Yu X, Sheng S, Lin X, Chen X, Wu J, Ying H, Xu W. Prediction of postoperative visual acuity in patients with age-related cataracts using macular optical coherence tomography-based deep learning method. Front Med (Lausanne) 2023; 10:1165135. [PMID: 37250634 PMCID: PMC10213207 DOI: 10.3389/fmed.2023.1165135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Accepted: 04/14/2023] [Indexed: 05/31/2023] Open
Abstract
Background To predict postoperative visual acuity (VA) in patients with age-related cataracts using a macular optical coherence tomography-based deep learning method. Methods A total of 2,051 eyes from 2,051 patients with age-related cataracts were included. Preoperative optical coherence tomography (OCT) images and best-corrected visual acuity (BCVA) were collected. Five novel models (I, II, III, IV, and V) were proposed to predict postoperative BCVA. The dataset was randomly divided into a training (n = 1,231), validation (n = 410), and test set (n = 410). The performance of the models in predicting exact postoperative BCVA was evaluated using mean absolute error (MAE) and root mean square error (RMSE). The performance of the models in predicting whether postoperative BCVA improved by at least two lines on the visual chart (0.2 logMAR) was evaluated using precision, sensitivity, accuracy, F1 and area under the curve (AUC). Results Model V, containing preoperative OCT images with horizontal and vertical B-scans, macular morphological feature indices, and preoperative BCVA, had the best performance in predicting postoperative VA, with the lowest MAE (0.1250 and 0.1194 logMAR) and RMSE (0.2284 and 0.2362 logMAR), and the highest precision (90.7% and 91.7%), sensitivity (93.4% and 93.8%), accuracy (88% and 89%), F1 (92% and 92.7%) and AUCs (0.856 and 0.854) in the validation and test datasets, respectively. Conclusion The model had a good performance in predicting postoperative VA when the input information contained preoperative OCT scans, macular morphological feature indices, and preoperative BCVA. The preoperative BCVA and macular OCT indices were of great significance in predicting postoperative VA in patients with age-related cataracts.
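The two evaluation views the abstract describes, error on the exact logMAR value and a binary "improved by at least two lines" outcome, can be sketched as follows. All values below are illustrative, not from the study:

```python
import numpy as np

# Illustrative values only (logMAR; lower is better) -- not the study's data.
pre_bcva = np.array([0.70, 0.50, 1.00, 0.60])    # preoperative BCVA
true_post = np.array([0.20, 0.40, 0.30, 0.10])   # actual postoperative BCVA
pred_post = np.array([0.30, 0.35, 0.40, 0.50])   # model-predicted BCVA

# Exact-value metrics
mae = float(np.mean(np.abs(pred_post - true_post)))
rmse = float(np.sqrt(np.mean((pred_post - true_post) ** 2)))

# Binary outcome: "improved by two lines" means a drop of at least 0.2 logMAR
true_improved = (pre_bcva - true_post) >= 0.2
pred_improved = (pre_bcva - pred_post) >= 0.2
accuracy = float(np.mean(true_improved == pred_improved))
```

Precision, sensitivity, F1, and AUC in the abstract are computed on the same binary labels; accuracy is shown here as the simplest case.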
Affiliation(s)
- Jingwen Wang, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jinhong Wang, College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
- Dan Chen, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Xingdi Wu, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Zhe Xu, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Xuewen Yu, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China; Department of Ophthalmology, The First People's Hospital of Xiaoshan District, Xiaoshan Affiliated Hospital of Wenzhou Medical University, Hangzhou, Zhejiang, China
- Siting Sheng, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Xueqi Lin, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Xiang Chen, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jian Wu, Second Affiliated Hospital School of Medicine, School of Public Health, and Institute of Wenzhou, Zhejiang University, Hangzhou, Zhejiang, China
- Haochao Ying, School of Public Health, Zhejiang University, Hangzhou, Zhejiang, China
- Wen Xu, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China

23
Ahadi S, Wilson KA, Babenko B, McLean CY, Bryant D, Pritchard O, Kumar A, Carrera EM, Lamy R, Stewart JM, Varadarajan A, Berndl M, Kapahi P, Bashir A. Longitudinal fundus imaging and its genome-wide association analysis provide evidence for a human retinal aging clock. eLife 2023; 12:e82364. [PMID: 36975205 PMCID: PMC10110236 DOI: 10.7554/elife.82364] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Accepted: 03/22/2023] [Indexed: 03/29/2023] Open
Abstract
Biological age, distinct from an individual's chronological age, has been studied extensively through predictive aging clocks. However, these clocks have limited accuracy on short time-scales. Here we trained deep learning models on fundus images from the EyePACS dataset to predict individuals' chronological age. Our retinal aging clock, 'eyeAge', predicted chronological age more accurately than other aging clocks (mean absolute error of 2.86 and 3.30 years on quality-filtered data from EyePACS and UK Biobank, respectively). Additionally, eyeAge was independent of blood marker-based measures of biological age, maintaining an all-cause mortality hazard ratio of 1.026 even when adjusted for phenotypic age. The individual-specific nature of eyeAge was reinforced via multiple GWAS hits in the UK Biobank cohort. The top GWAS locus was further validated via knockdown of the fly homolog, Alk, which slowed age-related decline in vision in flies. This study demonstrates the potential utility of a retinal aging clock for studying aging and age-related diseases and quantitatively measuring aging on very short time-scales, opening avenues for quick and actionable evaluation of gero-protective therapeutics.
Affiliation(s)
- Sara Ahadi, Google Research, Mountain View, United States
- Ajay Kumar, Department of Biophysics, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Ricardo Lamy, Department of Ophthalmology, Zuckerberg San Francisco General Hospital and Trauma Center, San Francisco, United States
- Jay M Stewart, Department of Ophthalmology, University of California, San Francisco, San Francisco, United States
- Pankaj Kapahi, Buck Institute for Research on Aging, Novato, United States
- Ali Bashir, Google Research, Mountain View, United States

24
Li Y, Yip MYT, Ting DSW, Ang M. Artificial intelligence and digital solutions for myopia. Taiwan J Ophthalmol 2023; 13:142-150. [PMID: 37484621 PMCID: PMC10361438 DOI: 10.4103/tjo.tjo-d-23-00032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Accepted: 03/16/2023] [Indexed: 07/25/2023] Open
Abstract
Myopia as an uncorrected visual impairment is recognized as a global public health issue with an increasing burden on health-care systems. Moreover, high myopia increases one's risk of developing pathologic myopia, which can lead to irreversible visual impairment. Thus, increased resources are needed for the early identification of complications, timely intervention to prevent myopia progression, and treatment of complications. Emerging artificial intelligence (AI) and digital technologies may have the potential to tackle these unmet needs through automated detection for screening and risk stratification, individualized prediction, and prognostication of myopia progression. AI applications in myopia for children and adults have been developed for the detection, diagnosis, and prediction of progression. Novel AI technologies, including multimodal AI, explainable AI, federated learning, automated machine learning, and blockchain, may further improve prediction performance, safety, and accessibility, and also circumvent concerns of explainability. Digital technology advancements include digital therapeutics, self-monitoring devices, virtual reality or augmented reality technology, and wearable devices, which provide possible avenues for monitoring myopia progression and control. However, there are challenges in the implementation of these technologies, including requirements for specific infrastructure and resources, demonstration of clinically acceptable performance, and safe management of data. Nonetheless, this remains an evolving field with the potential to address the growing global burden of myopia.
Affiliation(s)
- Yong Li, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore
- Michelle Y. T. Yip, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Daniel S. W. Ting, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore
- Marcus Ang, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, National University of Singapore, Singapore

25
Woods C, Naroo S, Zeri F, Bakkar M, Barodawala F, Evans V, Fadel D, Kalikivayi L, Lira M, Maseedupally V, Huarte ST, Eperjesi F. Evidence for commonly used teaching, learning and assessment methods in contact lens clinical skills education. Cont Lens Anterior Eye 2023; 46:101821. [PMID: 36805277 DOI: 10.1016/j.clae.2023.101821] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Accepted: 02/08/2023] [Indexed: 02/18/2023]
Abstract
INTRODUCTION Evidence based practice is now an important part of healthcare education. The aim of this narrative literature review was to determine what evidence exists on the efficacy of commonly used teaching, learning and assessment methods in the realm of contact lens skills education (CLE) in order to provide insights into best practice. A summary of the global regulation and provision of postgraduate learning and continuing professional development in CLE is included. METHOD An expert panel of educators was recruited and completed a literature review of current evidence of teaching, learning and assessment methods in healthcare training, with an emphasis on health care, general optometry and CLE. RESULTS No direct evidence of benefit of teaching, learning and assessment methods in CLE was found. There was evidence for the benefit of some teaching, learning and assessment methods in other disciplines that could be transferable to CLE and could help students meet the intended learning outcomes. There was evidence that the following teaching and learning methods helped health-care and general optometry students meet the intended learning outcomes: clinical teaching and learning, flipped classrooms, clinical skills videos and clerkships. For assessment, these methods were: essays, case presentations, objective structured clinical examinations, self-assessment and formative assessment. There was no evidence that the following teaching and learning methods helped health-care and general optometry students meet the intended learning outcomes: journal clubs and case discussions. Nor was any evidence found for the following assessment methods: multiple-choice questions, oral examinations, objective structured practical examinations, holistic assessment, and summative assessment.
CONCLUSION Investigation into the efficacy of common teaching, learning and assessment methods in CLE is required and would be beneficial for the entire community of contact lens educators, and for other disciplines that wish to adopt this approach of evidence-based teaching.
Affiliation(s)
- Craig Woods, School of Optometry and Vision Science, University of New South Wales, Australia; International Association of Contact Lens Educators, Canada
- Shehzad Naroo, College of Health and Life Sciences, Aston University, UK; International Association of Contact Lens Educators, Canada
- Fabrizio Zeri, College of Health and Life Sciences, Aston University, UK; University of Milano-Bicocca, Department of Materials Science, Milan, Italy; International Association of Contact Lens Educators, Canada
- May Bakkar, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Jordan
- Fakhruddin Barodawala, Faculty of Optometry and Vision Sciences, SEGi University, Malaysia; International Association of Contact Lens Educators, Canada
- Vicki Evans, Faculty of Health, University of Canberra, Australia; International Association of Contact Lens Educators, Canada
- Daddi Fadel, Center for Ocular Research & Education (CORE), School of Optometry & Vision Science, University of Waterloo, Waterloo, Canada
- Madalena Lira, Physics Center of Minho and Porto Universities (CF-UM-UP), School of Sciences, University of Minho, Portugal; International Association of Contact Lens Educators, Canada
- Vinod Maseedupally, School of Optometry and Vision Science, University of New South Wales, Australia

26
Harikiran J, Chandana BS, Rao BS, Raviteja B. Ocular disease examination of fundus images by hybriding SFCNN and rule mining algorithms. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2183456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
Affiliation(s)
- J. Harikiran, School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Sai Chandana, School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Srinivasa Rao, School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Raviteja, Department of Computer Science and Engineering, GITAM Deemed to be University, Visakhapatnam, India

27
Sun Y, Li Y, Zhang F, Zhao H, Liu H, Wang N, Li H. A deep network using coarse clinical prior for myopic maculopathy grading. Comput Biol Med 2023; 154:106556. [PMID: 36682177 DOI: 10.1016/j.compbiomed.2023.106556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Revised: 12/19/2022] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Pathological Myopia (PM) is a globally prevalent eye disease and one of the main causes of blindness. In long-term clinical observation, myopic maculopathy is a main criterion for diagnosing PM severity. The grading of myopic maculopathy can provide a severity and progression prediction of PM, enabling timely treatment to prevent myopia blindness. In this paper, we propose a feature fusion framework that utilizes the tessellated fundus and the brightest region in fundus images as prior knowledge. The proposed framework consists of a prior knowledge extraction module and a feature fusion module. The prior knowledge extraction module uses traditional image processing methods to extract prior knowledge indicating coarse lesion positions in fundus images. Furthermore, the priors, the tessellated fundus and the brightest region in fundus images, are integrated into the deep learning network as global and local constraints, respectively, by the feature fusion module. In addition, a rank loss is designed to increase the continuity of the classification score. We collected a private color fundus dataset from Beijing TongRen Hospital containing 714 clinical images. The dataset contains all 5 grades of myopic maculopathy, labeled by experienced ophthalmologists. Our framework achieves 0.8921 five-grade accuracy on our private dataset. The Pathological Myopia (PALM) dataset is used for comparison with other related algorithms. Our framework is trained with 400 images and achieves an AUC of 0.9981 for two-class grading. The results show that our framework can achieve good performance for myopic maculopathy grading.
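A rank loss of the kind mentioned above, one that encourages predicted severity scores to respect the ordinal grade order, can be illustrated with a pairwise margin formulation. This is a generic sketch with made-up scores and margin, not the paper's exact loss:

```python
import numpy as np

def pairwise_rank_loss(scores, grades, margin=0.5):
    """Penalize score pairs ordered inconsistently with their ordinal grades."""
    total, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if grades[i] > grades[j]:  # image i should score higher than j
                total += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return total / max(pairs, 1)

scores = np.array([0.1, 1.4, 0.9, 3.2])  # predicted severity scores (illustrative)
grades = np.array([0, 1, 2, 4])          # ophthalmologist grades, 0-4 scale
loss = pairwise_rank_loss(scores, grades)
```

Here only the (grade 2, grade 1) pair is mis-ordered, so it alone contributes to the loss; minimizing such a term pushes the network's scores toward a monotone relationship with the clinical grades.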
Affiliation(s)
- Yun Sun, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Yu Li, Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
- Fengju Zhang, Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
- He Zhao, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Hanruo Liu, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China; Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
- Ningli Wang, Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
- Huiqi Li, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China

28
Schuhmacher A, Haefner N, Honsberg K, Goldhahn J, Gassmann O. The dominant logic of Big Tech in healthcare and pharma. Drug Discov Today 2023; 28:103457. [PMID: 36427777 DOI: 10.1016/j.drudis.2022.103457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 09/19/2022] [Accepted: 11/17/2022] [Indexed: 11/25/2022]
Abstract
Digital health and digital pharma are considered supportive tools for patients and healthcare providers (HCPs), making the market highly attractive for industry players. Not surprisingly, Tech Giants have started to move into this area. We utilized established management models and publicly available information sources, such as annual company reports, and performed a thorough analysis to uncover the underlying business models of Alphabet, Amazon, Apple, IBM, and Microsoft in order to better understand their intention and course of entering the healthcare and pharma industries. Our results indicate that Big Tech or Tech Giants do address the needs of patients and physicians, while having built clear value propositions, value chains, and revenue models to sustainably revolutionize the healthcare and pharma industries.
Affiliation(s)
- Alexander Schuhmacher, Technische Hochschule Ingolstadt, THI Business School, Esplanade 10, 85049 Ingolstadt, Germany; University of St. Gallen, Institute of Technology Management, Dufourstrasse 40a, 9000 St. Gallen, Switzerland
- Naomi Haefner, University of St. Gallen, Institute of Technology Management, Dufourstrasse 40a, 9000 St. Gallen, Switzerland
- Jörg Goldhahn, ETH Zurich, D-HEST, HCP H15.3, Leopold-Ruzicka-Weg 4, 8093 Zurich, Switzerland
- Oliver Gassmann, University of St. Gallen, Institute of Technology Management, Dufourstrasse 40a, 9000 St. Gallen, Switzerland

29
Shah R, Petch J, Nelson W, Roth K, Noseworthy MD, Ghassemi M, Gerstein HC. Nailfold capillaroscopy and deep learning in diabetes. J Diabetes 2023; 15:145-151. [PMID: 36641812 PMCID: PMC9934957 DOI: 10.1111/1753-0407.13354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/27/2022] [Accepted: 12/21/2022] [Indexed: 01/17/2023] Open
Abstract
OBJECTIVE To determine whether nailfold capillary images, acquired using video capillaroscopy, can provide diagnostic information about diabetes and its complications. RESEARCH DESIGN AND METHODS Nailfold video capillaroscopy was performed in 120 adult patients with and without type 1 or type 2 diabetes, and with and without cardiovascular disease. Nailfold images were analyzed using convolutional neural networks, a deep learning technique. Cross-validation was used to develop and test the ability of models to predict prespecified states (diabetes, high glycosylated hemoglobin, cardiovascular event, retinopathy, albuminuria, and hypertension). The performance of each model for a particular state was assessed by estimating areas under the receiver operating characteristic curves (AUROC) and precision-recall curves (AUPR). RESULTS A total of 5236 nailfold images were acquired from 120 participants (mean 44 images per participant) and were all available for analysis. Models were able to accurately identify the presence of diabetes, with AUROC 0.84 (95% confidence interval [CI] 0.76, 0.91) and AUPR 0.84 (95% CI 0.78, 0.93), respectively. Models were also able to predict a history of cardiovascular events in patients with diabetes, with AUROC 0.65 (95% CI 0.51, 0.78) and AUPR 0.72 (95% CI 0.62, 0.88), respectively. CONCLUSIONS This proof-of-concept study demonstrates the potential of machine learning for identifying people with microvascular capillary changes from diabetes based on nailfold images, and for possibly identifying those most likely to have diabetes-related complications.
Affiliation(s)
- Reema Shah
- Population Health Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, Ontario, Canada
- Jeremy Petch
- Population Health Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, Ontario, Canada
- Centre for Data Science and Digital Health, Hamilton Health Sciences, Hamilton, Ontario, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Division of Cardiology, McMaster University, Hamilton, Ontario, Canada
- Walter Nelson
- Centre for Data Science and Digital Health, Hamilton Health Sciences, Hamilton, Ontario, Canada
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
- Karsten Roth
- Cluster of Excellence Machine Learning, University of Tübingen, Tübingen, Germany
- Michael D. Noseworthy
- Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada
- McMaster School of Biomedical Engineering, Hamilton, Ontario, Canada
- Department of Radiology, McMaster University, Hamilton, Ontario, Canada
- Hertzel C. Gerstein
- Population Health Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, Ontario, Canada
30
Foo LL, Lim GYS, Lanca C, Wong CW, Hoang QV, Zhang XJ, Yam JC, Schmetterer L, Chia A, Wong TY, Ting DSW, Saw SM, Ang M. Deep learning system to predict the 5-year risk of high myopia using fundus imaging in children. NPJ Digit Med 2023; 6:10. [PMID: 36702878 PMCID: PMC9879938 DOI: 10.1038/s41746-023-00752-8]
Abstract
Our study aims to identify children at risk of developing high myopia for timely assessment and intervention, preventing myopia progression and complications in adulthood, through the development of a deep learning system (DLS). Using a school-based cohort in Singapore comprising 998 children (aged 6-12 years), we train and perform primary validation of the DLS using 7456 baseline fundus images of 1878 eyes, with external validation using an independent test dataset of 821 baseline fundus images of 189 eyes together with clinical data (age, gender, race, parental myopia, and baseline spherical equivalent (SE)). We derive three distinct algorithms, namely image, clinical, and mixed (image + clinical) models, to predict high myopia development (SE ≤ -6.00 diopters) during teenage years (5 years later, age 11-17). Model performance is evaluated using the area under the receiver operating characteristic curve (AUC). Our image models (primary dataset AUC 0.93-0.95; test dataset 0.91-0.93), clinical models (primary dataset AUC 0.90-0.97; test dataset 0.93-0.94), and mixed (image + clinical) models (primary dataset AUC 0.97; test dataset 0.97-0.98) achieve clinically acceptable performance. The addition of a 1-year SE progression variable has minimal impact on DLS performance (clinical model AUC 0.98 versus 0.97 in the primary dataset, 0.97 versus 0.94 in the test dataset; mixed model AUC 0.99 versus 0.97 in the primary dataset, 0.95 versus 0.98 in the test dataset). Thus, our DLS allows prediction of the development of high myopia by teenage years amongst school-going children. This has potential utility as a clinical-decision support tool to identify "at-risk" children for early intervention.
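The outcome definition and the idea of a "mixed" model can be sketched as follows; the label function matches the study's threshold (SE ≤ -6.00 D), while the logistic combination and its weights are purely hypothetical stand-ins for the trained DLS:

```python
import math

def high_myopia_label(se_diopters: float) -> int:
    """Binary outcome used in the study: high myopia is SE <= -6.00 D."""
    return 1 if se_diopters <= -6.00 else 0

def mixed_model_risk(image_score: float, age_years: float,
                     baseline_se: float, parental_myopia: int) -> float:
    """Illustrative logistic mix of an image-model score and clinical
    covariates. The weights below are invented for demonstration only."""
    z = (2.0 * image_score       # image-model risk score in [0, 1]
         - 0.1 * age_years       # older baseline age, lower 5-year risk (assumed)
         - 0.8 * baseline_se     # more myopic baseline SE raises risk
         + 0.5 * parental_myopia # 1 if a parent is myopic, else 0
         - 1.0)                  # intercept
    return 1.0 / (1.0 + math.exp(-z))
```

In the paper itself the mixed model is learned from data; the point here is only that image and clinical inputs enter a single risk score.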
Affiliation(s)
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Gilbert Yong San Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Carla Lanca
- Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), Instituto Politécnico de Lisboa, Lisboa, Portugal
- Comprehensive Health Research Center (CHRC), Escola Nacional de Saúde Pública, Universidade Nova de Lisboa, Lisboa, Portugal
- Chee Wai Wong
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Asia Pacific Eye Centre, Gleneagles Hospital, Singapore
- Quan V. Hoang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, Columbia University, New York, NY, USA
- Xiu Juan Zhang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Hong Kong Eye Hospital, Hong Kong, China
- Department of Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong, China
- Hong Kong Hub of Paediatric Excellence, The Chinese University of Hong Kong, Hong Kong, China
- Department of Ophthalmology, Hong Kong Children’s Hospital, Hong Kong, China
- Leopold Schmetterer
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Audrey Chia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Daniel S. W. Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Seang-Mei Saw
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
31
Zhang J, Zou H. Artificial intelligence technology for myopia challenges: A review. Front Cell Dev Biol 2023; 11:1124005. [PMID: 36733459 PMCID: PMC9887165 DOI: 10.3389/fcell.2023.1124005]
Abstract
Myopia is a significant global health concern and affects human visual function, resulting in blurred vision at a distance. There are still many unsolved challenges in this field that require the help of new technologies. Currently, artificial intelligence (AI) technology is dominating medical image and data analysis and has been introduced to address challenges in the clinical practice of many ocular diseases. AI research in myopia is still in its early stages. Understanding the strengths and limitations of each AI method in specific tasks of myopia could be of great value and might help us to choose appropriate approaches for different tasks. This article reviews and elaborates on the technical details of AI methods applied for myopia risk prediction, screening and diagnosis, pathogenesis, and treatment.
Affiliation(s)
- Juzhao Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Correspondence: Haidong Zou
32
Lanca C, Repka MX, Grzybowski A. Topical Review: Studies on Management of Myopia Progression from 2019 to 2021. Optom Vis Sci 2023; 100:23-30. [PMID: 36705712 DOI: 10.1097/opx.0000000000001947]
Abstract
SIGNIFICANCE Myopia is a common eye condition that increases the risk of sight-threatening complications; each additional diopter increases the chance of complications. The purpose of this review was to provide an overview of myopia control treatment options for children with myopia progression. In this nonsystematic review, we searched the PubMed and Cochrane databases for English-language studies published from 2019 to September 2021. Emphasis was given to the selection of randomized controlled trials. Nineteen randomized controlled trials and two retrospective studies were included. Topical atropine and orthokeratology remain the most used treatments, whereas lenses with novel designs are emerging treatments. Overall myopia progression in the treatment groups for low-dose atropine and orthokeratology was lower than in the control groups, and their efficacy was reported in several randomized controlled trials and confirmed by various systematic reviews and meta-analyses. The findings on myopia progression and axial elongation for the MiSight, defocus incorporated multiple segment spectacle lens, highly aspherical lenslets, and diffusion optics technology spectacle lens were comparable. Public health interventions to optimize environmental influences may also be important strategies to control myopia. The optimal choice of myopia management depends on treatment availability, acceptability to the child and parents, and specific patient features such as age, baseline myopia, and lifestyle. Eye care providers need to understand the advantages and disadvantages of each therapy to best counsel parents of children with myopia.
Affiliation(s)
- Michael X Repka
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
33
Wang S, Ji Y, Bai W, Ji Y, Li J, Yao Y, Zhang Z, Jiang Q, Li K. Advances in artificial intelligence models and algorithms in the field of optometry. Front Cell Dev Biol 2023; 11:1170068. [PMID: 37187617 PMCID: PMC10175695 DOI: 10.3389/fcell.2023.1170068]
Abstract
The rapid development of computer science over the past few decades has led to unprecedented progress in the field of artificial intelligence (AI). AI has been applied widely and with excellent performance in ophthalmology, especially in image processing and data analysis. In recent years, AI has been increasingly applied in optometry with remarkable results. This review summarizes the application progress of different AI models and algorithms used in optometry (for problems such as myopia, strabismus, amblyopia, keratoconus, and intraocular lenses) and discusses the limitations and challenges associated with its application in this field.
Affiliation(s)
- Suyu Wang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yuke Ji
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Wen Bai
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yun Ji
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Jiajun Li
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Yujia Yao
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Ziran Zhang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Qin Jiang
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Keran Li
- Department of Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Correspondence: Qin Jiang; Keran Li
34
Zou H, Shi S, Yang X, Ma J, Fan Q, Chen X, Wang Y, Zhang M, Song J, Jiang Y, Li L, He X, Jhanji V, Wang S, Song M, Wang Y. Identification of ocular refraction based on deep learning algorithm as a novel retinoscopy method. Biomed Eng Online 2022; 21:87. [PMID: 36528597 PMCID: PMC9758840 DOI: 10.1186/s12938-022-01057-9]
Abstract
BACKGROUND The evaluation of refraction is indispensable in ophthalmic clinics, generally requiring a refractor or retinoscopy under cycloplegia. Retinal fundus photographs (RFPs) supply a wealth of information related to the human eye and might provide a more convenient and objective approach. Here, we aimed to develop and validate a fusion model-based deep learning system (FMDLS) to identify ocular refraction from RFPs and compare it with cycloplegic refraction. In this population-based comparative study, we retrospectively collected 11,973 RFPs from May 1, 2020 to November 20, 2021. The performance of the regression models for sphere and cylinder was evaluated using mean absolute error (MAE). Accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, and F1-score were used to evaluate the classification model for the cylinder axis. RESULTS Overall, 7873 RFPs were retained for analysis. For sphere and cylinder, the MAE values between the FMDLS and cycloplegic refraction were 0.50 D and 0.31 D, representing improvements of 29.41% and 26.67%, respectively, over the single models. The correlation coefficients (r) were 0.949 and 0.807, respectively. For axis analysis, the accuracy, specificity, sensitivity, and area under the curve of the classification model were 0.89, 0.941, 0.882, and 0.814, respectively, and the F1-score was 0.88. CONCLUSIONS The FMDLS successfully identified ocular refraction in sphere, cylinder, and axis, and showed good agreement with cycloplegic refraction. RFPs can provide not only comprehensive fundus information but also the refractive state of the eye, highlighting their potential clinical value.
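The regression metrics reported above, MAE in diopters and the correlation coefficient r between predicted and cycloplegic refraction, are simple to state exactly; a pure-Python sketch (not the authors' implementation):

```python
import math
from typing import Sequence

def mae(pred: Sequence[float], true: Sequence[float]) -> float:
    """Mean absolute error, used here for sphere and cylinder (in diopters)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson_r(x: Sequence[float], y: Sequence[float]) -> float:
    """Pearson correlation coefficient between predicted and reference values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Reporting both matters: MAE measures the typical size of the refraction error, while r measures only how well predictions track the reference, so a systematically biased model can still have high r.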
Affiliation(s)
- Haohan Zou
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Shenda Shi
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China
- HuaHui Jian AI Tech Ltd., Tianjin, China
- Xiaoyan Yang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Jiaonan Ma
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Qian Fan
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Xuan Chen
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Yibing Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Mingdong Zhang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Jiaxin Song
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Yanglin Jiang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Lihua Li
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Xin He
- HuaHui Jian AI Tech Ltd., Tianjin, China
- Vishal Jhanji
- UPMC Eye Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Shengjin Wang
- HuaHui Jian AI Tech Ltd., Tianjin, China
- Department of Electronic Engineering, Tsinghua University, Beijing, China
- Meina Song
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China
- HuaHui Jian AI Tech Ltd., Tianjin, China
- Yan Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, Tianjin, China
- Nankai University Eye Institute, Nankai University, Tianjin, China
35
Wang R, He J, Chen Q, Ye L, Sun D, Yin L, Zhou H, Zhao L, Zhu J, Zou H, Tan Q, Huang D, Liang B, He L, Wang W, Fan Y, Xu X. Efficacy of a Deep Learning System for Screening Myopic Maculopathy Based on Color Fundus Photographs. Ophthalmol Ther 2022; 12:469-484. [PMID: 36495394 PMCID: PMC9735275 DOI: 10.1007/s40123-022-00621-9]
Abstract
INTRODUCTION Maculopathy in highly myopic eyes is complex, and its clinical diagnosis is labor-intensive and subjective. To classify pathologic myopia (PM) simply and quickly, a deep learning algorithm was developed and assessed for screening myopic maculopathy lesions on color fundus photographs. METHODS This study included 10,347 ocular fundus photographs from 7606 participants. Of these photographs, 8210 were used for training and validation, and 2137 for external testing. A deep learning algorithm was trained, validated, and externally tested to screen myopic maculopathy, which was classified into four categories: normal or mild tessellated fundus, severe tessellated fundus, early-stage PM, and advanced-stage PM. The area under the precision-recall curve, the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and Cohen's kappa were calculated and compared with those of retina specialists. RESULTS In the validation dataset, the model detected normal or mild tessellated fundus, severe tessellated fundus, early-stage PM, and advanced-stage PM with AUCs of 0.98, 0.95, 0.99, and 1.00, respectively; in the external-testing dataset of 2137 photographs, the model had AUCs of 0.99, 0.96, 0.98, and 1.00, respectively. CONCLUSIONS We developed a deep learning model for the detection and classification of myopic maculopathy based on fundus photographs. Our model achieved high sensitivities, specificities, and reliable Cohen's kappa values compared with those of attending ophthalmologists.
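Cohen's kappa, used above to compare the model's four-category grading against human graders, corrects raw agreement for the agreement expected by chance; a minimal pure-Python sketch (illustrative, not the study's code):

```python
from collections import Counter
from typing import Sequence

def cohens_kappa(rater_a: Sequence[int], rater_b: Sequence[int]) -> float:
    """Cohen's kappa: (observed - expected) / (1 - expected), where expected
    agreement comes from each rater's marginal label frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance, which is why it is preferred over raw accuracy for grading tasks with imbalanced categories such as PM stages.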
Collapse
Affiliation(s)
- Ruonan Wang
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Jiangnan He
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.24516.340000000123704535School of Medicine, Tongji University, Shanghai, China
| | - Qiuying Chen
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Luyao Ye
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Dandan Sun
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Lili Yin
- grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Hao Zhou
- grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Lijun Zhao
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Jianfeng Zhu
- grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China
| | - Haidong Zou
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Qichao Tan
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Difeng Huang
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Bo Liang
- grid.459411.c0000 0004 1761 0825School of Biology and Food Engineering, Changshu Institute of Technology, Changshu, China
| | - Lin He
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Weijun Wang
- grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China ,No. 100 Haining Road, Shanghai, 200080 China
| | - Ying Fan
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China ,No. 380 Kangding Road, Shanghai, 200080 China
| | - Xun Xu
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
36
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304] [PMCID: PMC9650481] [DOI: 10.3389/fpubh.2022.971943]
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human-like intelligence; it refers to the technology of rendering human intelligence through computer programs. From healthcare broadly to the precise prevention, diagnosis, and management of disease, AI is progressing rapidly across interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, computational models composed of multiple layers of simulated neurons that can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the concept of transfer learning have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images, allowing an AI system to be trained with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this review, we highlight the fundamental concepts of AI and its application to these four major ocular diseases, and further discuss the current challenges and prospects in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
37
Srinivasan V, Strodthoff N, Ma J, Binder A, Müller KR, Samek W. To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy. PLoS One 2022; 17:e0274291. [PMID: 36256665] [PMCID: PMC9578637] [DOI: 10.1371/journal.pone.0274291]
Abstract
There is an increasing number of medical use cases in which classification algorithms based on deep neural networks reach performance levels competitive with human medical experts. To alleviate the challenge of small dataset sizes, these systems often rely on pretraining. In this work, we assess the broader implications of these approaches to better understand which types of pretraining work reliably in practice (with respect to performance, robustness, and learned representations) and which pretraining datasets are best suited to achieving good performance with small target datasets. Taking diabetic retinopathy grading as an exemplary use case, we compare the impact of different training procedures, including recently established self-supervised pretraining methods based on contrastive learning. To this end, we investigate quantitative performance, statistics of the learned feature representations, interpretability, and robustness to image distortions. Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions, and that self-supervised models offer further benefits over supervised models. Self-supervised models initialized from ImageNet pretraining not only achieve higher performance; they also reduce overfitting to large lesions while better accounting for the minute lesions indicative of disease progression. Understanding the effects of pretraining in a broader sense, beyond simple performance comparisons, is of crucial importance for the wider medical imaging community beyond the use case considered in this work.
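The pretraining strategy this abstract compares can be reduced to a "linear probe" sketch: a pretrained feature extractor is kept frozen, and only a small classifier head is fit on the small target dataset. The code below is an illustrative, stdlib-only toy (the `frozen_features` stand-in and all data are assumptions, not the paper's pipeline):

```python
import random

random.seed(0)

# Stand-in for a frozen, pretrained feature extractor (e.g. an ImageNet
# backbone's penultimate layer): here it maps a raw "image" (list of
# pixel intensities) to two summary features and is never updated.
def frozen_features(image):
    return [sum(image) / len(image), max(image) - min(image)]

# Tiny logistic-regression head: the only part fit on the small target set.
def train_head(images, labels, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for img, y in zip(images, labels):
            x = frozen_features(img)
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + 2.718281828 ** (-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
            b -= lr * g
    return w, b

def predict(w, b, image):
    x = frozen_features(image)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy "fundus images": class 1 has higher mean intensity.
imgs = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.8, 0.9, 0.7], [0.9, 0.8, 0.9]]
ys = [0, 0, 1, 1]
w, b = train_head(imgs, ys)
preds = [predict(w, b, i) for i in imgs]
```

In practice the frozen extractor is a deep network (supervised or contrastively pretrained), but the division of labor is the same: representations come from pretraining, and only the head sees the small dataset.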
Affiliation(s)
- Vignesh Srinivasan
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Nils Strodthoff
- School of Medicine and Health Services, Oldenburg University, Oldenburg, Germany
- Jackie Ma
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Alexander Binder
- Singapore Institute of Technology, ICT Cluster, Singapore, Singapore
- Department of Informatics, Oslo University, Oslo, Norway
- Klaus-Robert Müller
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Max Planck Institute for Informatics, Saarbrücken, Germany
- Wojciech Samek
- Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- BIFOLD - Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Department of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
38
Benchmarking saliency methods for chest X-ray interpretation. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00536-x]
Abstract
Saliency methods, which produce heat maps that highlight the areas of a medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse than the human benchmark; (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller and had more complex shapes; and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
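The Grad-CAM method evaluated here has a simple core: weight each convolutional channel by its spatially averaged gradient and pass the weighted sum through a ReLU. A minimal, framework-free sketch on toy 2x2 activations (the inputs are made-up numbers, not data from the study):

```python
# Grad-CAM core: heat map = ReLU(sum_k alpha_k * A_k), where A_k are the
# activation maps of a chosen conv layer and alpha_k is the spatial mean
# of the target-class gradient for channel k.
def grad_cam(activations, gradients):
    """activations, gradients: lists of channels; each channel a 2D list."""
    h, w = len(activations[0]), len(activations[0][0])
    # channel weights: global-average-pooled gradients
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a, ch in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a * ch[i][j]
    # ReLU keeps only positive evidence for the target class
    return [[max(0.0, v) for v in row] for row in cam]

# Two 2x2 channels; channel 0 gets weight +1, channel 1 gets weight -1.
acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [0.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
heat = grad_cam(acts, grads)
```

In real use the heat map is then upsampled to input resolution and overlaid on the image; the benchmark above compares exactly such maps against expert-drawn segmentations.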
39
Yoo TK, Ryu IH, Kim JK, Lee IS. Deep learning for predicting uncorrected refractive error using posterior segment optical coherence tomography images. Eye (Lond) 2022; 36:1959-1965. [PMID: 34611313] [PMCID: PMC9500028] [DOI: 10.1038/s41433-021-01795-5]
Abstract
BACKGROUND/OBJECTIVES This study aimed to evaluate a deep learning model for estimating uncorrected refractive error using posterior segment optical coherence tomography (OCT) images. METHODS In this retrospective study, we assigned healthy subjects to development (N = 688 eyes of 344 subjects) and test (N = 248 eyes of 124 subjects) datasets (prospective validation design). We developed and validated OCT-based deep learning models to estimate refractive error. A regression model based on a pretrained ResNet50 architecture was trained on horizontal OCT images to predict the spherical equivalent (SE). The performance of the deep learning model in detecting high myopia was also evaluated, and a saliency map was generated using the Grad-CAM technique to visualize the characteristic features. RESULTS The developed model showed a low mean absolute error for SE prediction (2.66 D) and a significant Pearson correlation coefficient of 0.588 (P < 0.001) on the test dataset. For detecting high myopia, the model yielded an area under the receiver operating characteristic curve of 0.813 (95% confidence interval [CI], 0.744-0.881) and an accuracy of 71.4% (95% CI, 65.3-76.9%). The saliency maps highlighted the inner retinal layers and relatively steepened curvatures in eyes with high myopia. CONCLUSION A deep learning algorithm showed that OCT could potentially be used as an imaging modality to estimate refractive error. This method will facilitate the evaluation of refractive error, helping clinicians avoid overlooking refractive-error-related risks during OCT assessment.
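The two regression metrics this study reports, mean absolute error in diopters and the Pearson correlation between predicted and measured SE, are straightforward to compute. A stdlib-only sketch with made-up toy values (not the study's data):

```python
import math

def mae(pred, true):
    """Mean absolute error, here in diopters."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy predicted vs. measured spherical equivalents (D)
predicted = [-1.0, -3.5, -6.0, -8.0]
measured = [-1.5, -3.0, -6.5, -9.0]
err = mae(predicted, measured)
r = pearson_r(predicted, measured)
```

On real data, a model like the one above would report `err` around 2.66 D and `r` around 0.588, per the abstract.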
Affiliation(s)
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- VISUWORKS, Seoul, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- In Sik Lee
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
40
Xu D, Ding S, Zheng T, Zhu X, Gu Z, Ye B, Fu W. Deep learning for predicting refractive error from multiple photorefraction images. Biomed Eng Online 2022; 21:55. [PMID: 35941613] [PMCID: PMC9360706] [DOI: 10.1186/s12938-022-01025-3]
Abstract
Background Refractive error detection is a significant factor in preventing the development of myopia. To improve the efficiency and accuracy of refractive error detection, a refractive error detection network (REDNet) is proposed that combines the advantages of a convolutional neural network (CNN) and a recurrent neural network (RNN): it not only extracts the features of each image but also fully exploits the sequential relationship between images. In this article, we develop a system to predict the spherical power, cylindrical power, and spherical equivalent from multiple eccentric photorefraction images. Approach First, images of the pupil area are extracted from multiple eccentric photorefraction images; then, the features of each pupil image are extracted using the REDNet convolution layers. Finally, the features are fused by the recurrent layers in REDNet to predict the spherical power, cylindrical power, and spherical equivalent. Results The results show that the mean absolute error (MAE) values for the spherical power, cylindrical power, and spherical equivalent reach 0.1740 D (diopters), 0.0702 D, and 0.1835 D, respectively. Significance This method demonstrates much higher accuracy than current state-of-the-art deep learning methods; moreover, it is effective and practical.
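The CNN-then-RNN idea can be sketched without reproducing REDNet itself: a stand-in "CNN" summarizes each photorefraction image into a feature, a minimal recurrent unit fuses the per-image features in sequence, and a linear readout produces a refraction value. Every function and weight below is an illustrative assumption:

```python
import math

def cnn_features(image):
    # Placeholder for convolutional feature extraction: mean intensity.
    return sum(image) / len(image)

def rnn_fuse(features, w_in=1.0, w_rec=0.5):
    # Minimal recurrent unit: the hidden state carries information
    # across the image sequence, which is the point of using an RNN here.
    h = 0.0
    for f in features:
        h = math.tanh(w_in * f + w_rec * h)
    return h

def predict_se(images, w_out=2.0, bias=-1.0):
    # Per-image features -> recurrent fusion -> linear readout.
    feats = [cnn_features(img) for img in images]
    return w_out * rnn_fuse(feats) + bias

# Three eccentric photorefraction "images" of one eye (toy pixel lists)
seq = [[0.2, 0.4], [0.5, 0.5], [0.6, 0.8]]
se = predict_se(seq)
```

The design point is the fusion step: because the eccentric images form an ordered sequence, a recurrent layer can exploit inter-image relationships that a per-image CNN alone would discard.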
Affiliation(s)
- Daoliang Xu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Shangshang Ding
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Tianli Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Xingshuai Zhu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Zhiheng Gu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Bin Ye
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
- Weiwei Fu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou, China
41
Lu HC, Chen HY, Huang CJ, Chu PH, Wu LS, Tsai CY. Predicting Axial Length From Choroidal Thickness on Optical Coherence Tomography Images With Machine Learning Based Algorithms. Front Med (Lausanne) 2022; 9:850284. [PMID: 35836947] [PMCID: PMC9273745] [DOI: 10.3389/fmed.2022.850284]
Abstract
Purpose: We formulated and tested ensemble learning models to classify axial length (AXL) from choroidal thickness (CT) as indicated on fovea-centered, 2D single optical coherence tomography (OCT) images. Design: Retrospective cross-sectional study. Participants: We analyzed 710 OCT images from 355 eyes of 188 patients; each eye had 2 OCT images. Methods: The CT was estimated from 3 points of each image. We used five machine-learning base algorithms to construct the classifiers, and trained and validated models to classify eyes by AXL using binary (AXL < or > 26 mm) and multiclass (AXL < 22 mm, between 22 and 26 mm, and > 26 mm) schemes. Results: No features were redundant or duplicated after analysis using Pearson's correlation coefficient, the LASSO-Pattern search algorithm, and variance inflation factors. Among the positions, CT at the nasal side had the highest correlation with AXL, followed by the central area. In binary classification, our classifiers obtained high accuracy, as indicated by accuracy, recall, positive predictive value (PPV), negative predictive value (NPV), F1 score, and area under the ROC curve (AUC) values of 94.37, 100, 90.91, 100, 86.67, and 95.61%, respectively. In multiclass classification, our classifiers were also highly accurate, as indicated by accuracy, weighted recall, weighted PPV, weighted NPV, weighted F1 score, and macro AUC of 88.73, 88.73, 91.21, 85.83, 87.42, and 93.42%, respectively. Conclusions: Our binary and multiclass classifiers classify AXL well from CT as indicated on OCT images. We demonstrated the effectiveness of the proposed classifiers and provide an assistance tool for physicians.
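One common way ensemble models of this kind combine their base algorithms is majority voting: each base classifier labels the eye, and the ensemble outputs the most frequent label. A hedged stdlib sketch (the base-model outputs below are made-up stand-ins, and the paper's exact combination rule may differ):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one label per base classifier for a single eye."""
    return Counter(predictions).most_common(1)[0][0]

# Five stand-in base-model outputs for two eyes
# (binary task: is AXL > 26 mm, labeled "long", or not, labeled "normal")
eye_a = ["long", "long", "normal", "long", "long"]
eye_b = ["normal", "normal", "normal", "long", "normal"]
vote_a = majority_vote(eye_a)
vote_b = majority_vote(eye_b)
```

For the multiclass task the same rule applies over three labels; with probabilistic base models, averaging predicted probabilities is the usual alternative to hard voting.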
Affiliation(s)
- Hao-Chun Lu
- Graduate Institute of Business and Management, Chang Gung University, Taoyuan, Taiwan
- Division of Cardiology, Department of Internal Medicine, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Hsin-Yi Chen
- Department of Ophthalmology, Fu Jen Catholic University Hospital, New Taipei City, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
- Chien-Jung Huang
- Department of Ophthalmology, Fu Jen Catholic University Hospital, New Taipei City, Taiwan
- Pao-Hsien Chu
- Division of Cardiology, Department of Internal Medicine, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Taipei, Taiwan
- Lung-Sheng Wu
- Division of Cardiology, Department of Internal Medicine, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Taipei, Taiwan
- Chia-Ying Tsai
- Department of Ophthalmology, Fu Jen Catholic University Hospital, New Taipei City, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
42
Kim J, Ryu IH, Kim JK, Lee IS, Kim HK, Han E, Yoo TK. Machine learning predicting myopic regression after corneal refractive surgery using preoperative data and fundus photography. Graefes Arch Clin Exp Ophthalmol 2022; 260:3701-3710. [PMID: 35748936] [DOI: 10.1007/s00417-022-05738-y]
Abstract
PURPOSE Myopic regression is the most common long-term complication of refractive surgery, but it is difficult to identify without long-term observation. This study aimed to develop machine learning models to identify patients at high risk of refractive regression based on preoperative data and fundus photography. METHODS This retrospective study assigned subjects to the training (n = 1606 eyes) and validation (n = 403 eyes) datasets with chronological data splitting. Machine learning models with ResNet50 (for image analysis) and XGBoost (for integration of all variables with fundus photography) were developed based on subjects who underwent corneal refractive surgery. The primary outcome was predictive performance for the presence of myopic regression at the 4-year postoperative follow-up examination. RESULTS By integrating all factors and fundus photography, the final combined machine learning model showed good performance in predicting myopic regression of more than 0.5 D (area under the receiver operating characteristic curve [ROC-AUC], 0.753; 95% confidence interval [CI], 0.710-0.793). The performance of the final model was better than that of a single ResNet50 model using only fundus photography (ROC-AUC, 0.673; 95% CI, 0.627-0.716). The five most important input features were fundus photography, preoperative anterior chamber depth, planned ablation thickness, age, and preoperative central corneal thickness. CONCLUSION Our machine learning algorithm provides an efficient strategy to identify patients at high risk of myopic regression without additional labor, cost, or time. Surgeons might benefit from preoperative risk assessment of myopic regression, patient counseling before surgery, and surgical option decisions.
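The integration step described here is a late-fusion pattern: a risk score derived from the fundus photograph by the image model is appended to the preoperative tabular features, and a second model combines the joint vector. The sketch below uses a simple logistic combiner instead of XGBoost, with entirely made-up weights and feature scalings, purely to illustrate the fusion:

```python
import math

def fused_risk(image_score, tabular, weights, bias):
    """Late fusion: image-model score + tabular features -> risk probability."""
    features = [image_score] + tabular
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # probability of myopic regression

# Feature order (illustrative): [image score, anterior chamber depth (mm),
# planned ablation (hundreds of um), age (decades), corneal thickness (x0.1 mm)]
weights = [2.0, -0.8, 0.6, 0.3, -0.4]
bias = -0.5
high = fused_risk(0.9, [3.0, 1.1, 2.8, 5.4], weights, bias)
low = fused_risk(0.1, [3.6, 0.6, 2.2, 5.6], weights, bias)
```

The abstract's comparison (combined ROC-AUC 0.753 vs. image-only 0.673) is the usual motivation for this design: the image score alone is informative, but fusing it with tabular preoperative data improves ranking of at-risk patients.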
Affiliation(s)
| | - Ik Hee Ryu
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea.,VISUWORKS, Seoul, South Korea
| | - Jin Kuk Kim
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea.,VISUWORKS, Seoul, South Korea
| | - In Sik Lee
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
| | - Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
| | - Eoksoo Han
- Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
| | - Tae Keun Yoo
- B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea. .,VISUWORKS, Seoul, South Korea. .,Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea.
| |
43
Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Programs Biomed 2022; 219:106735. [PMID: 35305492] [DOI: 10.1016/j.cmpb.2022.106735]
Abstract
BACKGROUND AND OBJECTIVES Patients with angle-closure glaucoma (ACG) are asymptomatic until they experience a painful attack. Shallow anterior chamber depth (ACD) is considered a significant risk factor for ACG. We propose a deep learning approach to detect shallow ACD using fundus photographs and to identify the hidden features of shallow ACD. METHODS This retrospective study assigned healthy subjects to the training (n = 1188 eyes) and test (n = 594 eyes) datasets (prospective validation design). We used a deep learning approach to estimate ACD and built a classification model to identify eyes with a shallow ACD. The proposed method, which subtracts the input and output images of a CycleGAN and applies a thresholding algorithm, was adopted to visualize the characteristic features of fundus photographs with a shallow ACD. RESULTS The deep learning model integrating fundus photographs and clinical variables achieved areas under the receiver operating characteristic curve of 0.978 (95% confidence interval [CI], 0.963-0.988) for an ACD ≤ 2.60 mm and 0.895 (95% CI, 0.868-0.919) for an ACD ≤ 2.80 mm, and outperformed the regression model using only clinical variables. However, the difference between shallow and deep ACD classes on fundus photographs was difficult to detect with the naked eye, and we were unable to identify the features of shallow ACD using Grad-CAM. The CycleGAN-based feature images showed that the areas around the macula and optic disc contributed significantly to the classification of fundus photographs with a shallow ACD. CONCLUSIONS We demonstrated the feasibility of a novel deep learning model to detect a shallow ACD as a screening tool for ACG using fundus photographs. The CycleGAN-based feature map revealed hidden characteristic features of shallow ACD that were previously undetectable by conventional techniques and ophthalmologists. This framework will facilitate the early detection of shallow ACD so that the risks associated with ACG are not overlooked.
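The visualization step described in this abstract, subtracting the CycleGAN output from its input and thresholding the difference, reduces to a few lines once the translated image is available. A stand-in sketch on toy 2x2 "images" (the threshold and data are assumptions, not the paper's values):

```python
# Subtract the GAN-translated image from the original and threshold the
# absolute per-pixel difference, keeping only the pixels the
# class-to-class translation changed most: those are the candidate
# "hidden features" of the shallow-ACD class.
def feature_mask(original, translated, threshold=0.2):
    h, w = len(original), len(original[0])
    mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if abs(original[i][j] - translated[i][j]) > threshold:
                mask[i][j] = 1  # pixel contributed to the class change
    return mask

orig = [[0.5, 0.5], [0.9, 0.1]]
trans = [[0.5, 0.4], [0.4, 0.1]]  # the GAN altered the lower-left region
mask = feature_mask(orig, trans)
```

The appeal of this approach over gradient-based maps (which failed here, per the abstract) is that the GAN must actually change the class-relevant regions to translate between classes, so the difference image localizes them directly.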
Affiliation(s)
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
44
Betzler BK, Rim TH, Sabanayagam C, Cheng CY. Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging. Front Digit Health 2022; 4:889445. [PMID: 35706971] [PMCID: PMC9190759] [DOI: 10.3389/fdgth.2022.889445]
Abstract
Artificial intelligence (AI) analytics has been used to predict, classify, and aid the clinical management of multiple eye diseases. Its robust performance has prompted researchers to expand the use of AI into predicting systemic, non-ocular diseases and parameters based on ocular images. Herein, we discuss the reasons why the eye is well suited for systemic applications and review the applications of deep learning on ophthalmic images in the prediction of demographic parameters, body composition factors, and diseases of the cardiovascular, hematological, neurodegenerative, metabolic, renal, and hepatobiliary systems. Three main imaging modalities are included: retinal fundus photographs, optical coherence tomography scans, and external ophthalmic images. We examine the range of systemic factors studied from ophthalmic imaging in the current literature and discuss areas of future research, while acknowledging the current limitations of AI systems based on ophthalmic images.
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
45
Du R, Ohno-Matsui K. Novel Uses and Challenges of Artificial Intelligence in Diagnosing and Managing Eyes with High Myopia and Pathologic Myopia. Diagnostics (Basel) 2022; 12:1210. [PMID: 35626365] [PMCID: PMC9141019] [DOI: 10.3390/diagnostics12051210]
Abstract
Myopia is a global health issue, and the prevalence of high myopia has increased significantly in the past five to six decades. The high incidence of myopia and its vision-threatening course emphasize the need for automated methods to screen for high myopia and its severe form, pathologic myopia (PM). Artificial intelligence (AI)-based applications have been extensively applied in medicine, with a focus on analyzing ophthalmic images to diagnose disease and determine prognosis. However, unlike diseases that mainly show pathologic changes in the fundus, high myopia and PM generate even more data, because both the ophthalmic information and the morphological changes in the retina and choroid need to be analyzed. In this review, we present how AI techniques have been used to diagnose and manage high myopia, PM, and other ocular diseases, and discuss the current capacity of AI to assist in preventing high myopia.
46
Myopia prediction: a systematic review. Eye (Lond) 2022; 36:921-929. [PMID: 34645966] [PMCID: PMC9046389] [DOI: 10.1038/s41433-021-01805-6]
Abstract
Myopia is a leading cause of visual impairment and has raised significant international concern in recent decades with rapidly increasing prevalence and incidence worldwide. Accurate prediction of future myopia risk could help identify high-risk children for early targeted intervention to delay myopia onset or slow myopia progression. Researchers have built and assessed various myopia prediction models based on different datasets, including baseline refraction or biometric data, lifestyle data, genetic data, and data integration. Here, we summarize all related work published in the past 30 years and provide a comprehensive review of myopia prediction methods, datasets, and performance, which could serve as a useful reference and valuable guideline for future research.
47
Lim JS, Hong M, Lam WST, Zhang Z, Teo ZL, Liu Y, Ng WY, Foo LL, Ting DSW. Novel technical and privacy-preserving technology for artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2022; 33:174-187. [PMID: 35266894] [DOI: 10.1097/icu.0000000000000846]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) in medicine and ophthalmology has seen exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology, privacy-preserving measures, potential challenges, and future directions for each. RECENT FINDINGS Several key classes of AI algorithms are used to improve disease detection and outcomes: data-driven, image-driven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and the use of data for training models raises privacy concerns. Several data protection methods address these concerns in the form of blockchain technology, federated learning, and generative adversarial networks. SUMMARY AI applications have vast potential to meet many eye care needs, consequently reducing the burden on scarce healthcare resources. A pertinent challenge is to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods will need to evolve rapidly alongside AI technology. Ultimately, for AI to succeed in medicine and ophthalmology, a balance must be found between innovation and privacy.
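Of the privacy-preserving methods named above, federated learning has the simplest core mechanic: each site trains locally and shares only model weights, which a server averages (typically weighted by local sample counts), so raw images never leave the site. A minimal federated-averaging sketch with made-up weight vectors:

```python
# Federated averaging (FedAvg-style): merge per-site model weights,
# weighted by how many local samples each site trained on. Only these
# weight vectors cross the network; patient images stay on-site.
def fed_avg(site_weights, site_sizes):
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(site_weights, site_sizes):
        for k in range(n_params):
            merged[k] += weights[k] * (size / total)
    return merged

# Two clinics' locally trained 2-parameter models; the second clinic
# trained on three times as much data, so it gets three times the weight.
merged = fed_avg([[1.0, 2.0], [3.0, 6.0]], site_sizes=[100, 300])
```

In a full system this averaging step alternates with further local training rounds, and can be combined with secure aggregation or differential privacy to harden it against the adversarial threats the review discusses.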
Affiliation(s)
- Jane S Lim
- Singapore National Eye Centre, Singapore Eye Research Institute
- Walter S T Lam
- Yong Loo Lin School of Medicine, National University of Singapore
- Zheting Zhang
- Lee Kong Chian School of Medicine, Nanyang Technological University
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Yong Liu
- National University of Singapore, Duke-NUS Medical School, Singapore
- Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
|
48
|
Chandrasekaran R, Loganathan B. Retinopathy grading with deep learning and wavelet hyper-analytic activations. Vis Comput 2022; 39:1-16. [PMID: 35493724 PMCID: PMC9035984 DOI: 10.1007/s00371-022-02489-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 03/12/2022] [Indexed: 06/14/2023]
Abstract
Recent developments reveal the prominence of Diabetic Retinopathy (DR) grading. In the past few decades, wavelet-based DR classification has shown successful impacts, and deep learning models such as Convolutional Neural Networks (CNNs) have evolved to offer the highest prediction accuracy. In this work, the features of the input image are enhanced by integrating Multi-Resolution Analysis (MRA) into a CNN framework without costing more convolution filters. The bottleneck with conventional activation functions used in CNNs is the nullification of feature maps that are negative in value. Here, a novel Hyper-analytic Wavelet (HW) phase activation function is formulated with unique characteristics for the wavelet sub-bands. Instead of dismissal, the function transforms these negative coefficients, which correspond to significant edge feature maps. The hyper-analytic wavelet phase forms the imaginary part of the complex activation, and the hyper-parameter of the activation function is selected such that the corresponding magnitude spectrum produces monotonic and effective activations. Three CNN models (a custom shallow CNN, a ResNet with soft attention, and an AlexNet for DR) perform better with spatial-wavelet quilts: the AlexNet for DR improves in accuracy by 11 percentage points (from 87% to 98%). The highest accuracy of 98% and the highest sensitivity of 99% are attained by the modified AlexNet for DR. The proposal also illustrates the visualization of negative-edge preservation with assumed image patches. From this study, the authors infer that models with spatial-wavelet quilts and hyper-analytic activations have better generalization ability, and the visualization of heat maps provides evidence of better learning of the feature maps from the wavelet sub-bands.
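The central idea of the abstract, that an analytic-signal-based complex activation can preserve negative wavelet coefficients which ReLU would zero out, can be loosely illustrated as follows. This is not the paper's exact formulation (which involves the hyper-analytic wavelet phase and a tuned hyper-parameter); it is only a sketch of activating on the analytic-signal magnitude, with all function names hypothetical:

```python
import numpy as np

def relu(x):
    """Conventional activation: negative coefficients are nullified."""
    return np.maximum(x, 0.0)

def analytic_signal(x):
    """Discrete analytic signal via FFT: the real part reproduces x and the
    imaginary part is its Hilbert transform (a 90-degree phase-shifted copy)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def magnitude_activation(x):
    """Activate on the analytic-signal magnitude sqrt(x^2 + H(x)^2): a
    negative coefficient contributes at least |x| instead of being zeroed,
    so edge information carried by negative values survives."""
    return np.abs(analytic_signal(x))
```

The contrast with ReLU is the point: for a strongly negative edge coefficient, ReLU outputs zero while the magnitude activation outputs a value at least as large as the coefficient's absolute value.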
Affiliation(s)
- Balaji Loganathan
- Department of ECE, Vel Tech Rangarajan Dr. Sagunthala R & D Institute of Science and Technology, Chennai, Tamil Nadu 600062 India
|
49
|
Espinosa J, Pérez J, Villanueva A. Prediction of Subjective Refraction From Anterior Corneal Surface, Eye Lengths, and Age Using Machine Learning Algorithms. Transl Vis Sci Technol 2022; 11:8. [PMID: 35404439 PMCID: PMC9034724 DOI: 10.1167/tvst.11.4.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To develop a machine learning regression model of subjective refractive prescription from minimum ocular biometry and corneal topography features. Methods Anterior corneal surface parameters (Zernike coefficients and keratometry), axial length, anterior chamber depth, and age were posed as features to predict subjective refractions. Measurements from 355 eyes were split into training (75%) and test (25%) sets. Different machine learning regression algorithms were trained by 10-fold cross-validation, optimized, and tested. A neighborhood component analysis provided the features' normalized weights in predictions. Results Gaussian process regression algorithms provided the best models with mean absolute errors of around 1.00 diopter (D) in the spherical component and 0.15 D in the astigmatic components. Conclusions The normalized weights showed that subjective refraction can be predicted by only keratometry, age, and axial length. Increasing the topographic description detail of the anterior corneal surface implied by a high-order Zernike decomposition versus adjustment to a spherocylindrical surface is not reflected as improved subjective refraction prediction, which is poor, mainly in the spherical component. However, the highest achievable accuracy differs by only 0.75 D from that of other works with a more exhaustive description of the eye's refractive elements. Although the chosen parameters may not have been the most efficient, applying machine learning and big data to predict subjective refraction can be risky and impractical when evaluating a particular subject at statistical extremes. Translational Relevance This work evaluates subjective refraction prediction by machine learning from the anterior corneal surface and ocular biometry. It shows the minimum biometric information required and the highest achievable accuracy.
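The best-performing models in this study were Gaussian process regressions. The paper does not give implementation details, but the posterior-mean prediction at the heart of GP regression can be sketched in a few lines; the RBF kernel choice and all parameter values below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between rows of A and rows of B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, **kern):
    """GP regression posterior mean: K_*^T (K + noise*I)^{-1} y."""
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_train, X_test, **kern)
    alpha = np.linalg.solve(K, y_train)  # weights on training targets
    return K_star.T @ alpha
```

In practice the kernel hyper-parameters (length scale, variance, noise) would be fitted by cross-validation, as the paper's 10-fold procedure suggests.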
Affiliation(s)
- Julián Espinosa
- IUFACyT, Universidad de Alicante, San Vicente del Raspeig, Spain; Departamento de Óptica, Farmacología y Anatomía, Universidad de Alicante, San Vicente del Raspeig, Spain
- Jorge Pérez
- IUFACyT, Universidad de Alicante, San Vicente del Raspeig, Spain; Departamento de Óptica, Farmacología y Anatomía, Universidad de Alicante, San Vicente del Raspeig, Spain
- Asier Villanueva
- IUFACyT, Universidad de Alicante, San Vicente del Raspeig, Spain
|
50
|
Yang D, Li M, Li W, Wang Y, Niu L, Shen Y, Zhang X, Fu B, Zhou X. Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients. Front Med (Lausanne) 2022; 9:834281. [PMID: 35433763 PMCID: PMC9007166 DOI: 10.3389/fmed.2022.834281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/04/2022] [Indexed: 11/21/2022] Open
Abstract
Summary Ultrawide field fundus images could be applied in deep learning models to predict the refractive error of myopic patients. The prediction error was related to older age and greater spherical power. Purpose To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide field (UWF) images. Methods UWF fundus images were collected from the left eyes of 987 myopic patients at the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. The fundus images were all captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images to predict refractive error. A further 133 UWF fundus images were collected after January 2021 as an external validation data set. The predicted refractive error was compared with the "true value" measured by subjective refraction. Mean absolute error (MAE), mean absolute percentage error (MAPE), and the coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, followed by multivariate linear regression analysis of variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results The ResNet-50, Inception-v3, and Inception-ResNet-v2 models achieved R2 of 0.9562, 0.9555, and 0.9563, and MAE of 1.72 (95% CI: 1.62–1.82), 1.75 (95% CI: 1.65–1.86), and 1.76 (95% CI: 1.66–1.86), respectively. In the three models, 29.95%, 31.47%, and 29.44% of the test set were within a predictive error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within a predictive error of 2.00 D. The predicted MAE was related to older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map.
Conclusions It was feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
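The evaluation metrics reported here (MAE in diopters, R2, and the fraction of eyes within a fixed diopter tolerance of the subjective refraction) are straightforward to compute. A small sketch with hypothetical values:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the target (diopters here)."""
    return np.mean(np.abs(y_true - y_pred))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def within_tolerance(y_true, y_pred, tol):
    """Fraction of predictions within +/- tol D of the subjective refraction,
    matching the paper's 'within 0.75 D / 2.00 D' style of reporting."""
    return np.mean(np.abs(y_true - y_pred) <= tol)
```

For example, with true refractions of [-1, -2, -3, -4] D and predictions of [-1.5, -2, -3.5, -4] D, MAE is 0.25 D, R2 is 0.9, and all four eyes fall within a 0.75 D tolerance.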
Affiliation(s)
- Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Weizhen Li
- School of Data Science, Fudan University, Shanghai, China
- Yunzhe Wang
- Shanghai Medical College, Fudan University, Shanghai, China
- Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Xiaoyu Zhang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Bo Fu
- School of Data Science, Fudan University, Shanghai, China
- Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- *Correspondence: Xingtao Zhou
|