1
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and clinical deployment. During development, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make clinicians wary or skeptical. In deployment, challenges include dealing with lower-quality images in real-world settings and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
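The abstract names federated learning as a privacy-preserving direction: each site trains on its own images and only model parameters, never patient data, are shared and combined centrally. As a rough illustration (not from the paper), the core aggregation step of federated averaging (FedAvg) can be sketched as a data-size-weighted mean of per-site weights; the function name and toy numbers below are hypothetical:

```python
def federated_average(site_weights, site_sizes):
    """Data-size-weighted average of per-site model parameter vectors.

    site_weights: list of parameter vectors (one per participating site)
    site_sizes: number of training examples at each site
    Only these vectors cross site boundaries; raw images stay local.
    """
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical hospitals with different amounts of local data.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 200, 100]
print(federated_average(weights, sizes))  # [3.0, 4.0]
```

The site with twice as much data contributes twice the weight to the global model, which is the basic FedAvg design choice.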
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
2
Li J, Guan Z, Wang J, Cheung CY, Zheng Y, Lim LL, Lim CC, Ruamviboonsuk P, Raman R, Corsino L, Echouffo-Tcheugui JB, Luk AOY, Chen LJ, Sun X, Hamzah H, Wu Q, Wang X, Liu R, Wang YX, Chen T, Zhang X, Yang X, Yin J, Wan J, Du W, Quek TC, Goh JHL, Yang D, Hu X, Nguyen TX, Szeto SKH, Chotcomwongse P, Malek R, Normatova N, Ibragimova N, Srinivasan R, Zhong P, Huang W, Deng C, Ruan L, Zhang C, Zhang C, Zhou Y, Wu C, Dai R, Koh SWC, Abdullah A, Hee NKY, Tan HC, Liew ZH, Tien CSY, Kao SL, Lim AYL, Mok SF, Sun L, Gu J, Wu L, Li T, Cheng D, Wang Z, Qin Y, Dai L, Meng Z, Shu J, Lu Y, Jiang N, Hu T, Huang S, Huang G, Yu S, Liu D, Ma W, Guo M, Guan X, Yang X, Bascaran C, Cleland CR, Bao Y, Ekinci EI, Jenkins A, Chan JCN, Bee YM, Sivaprasad S, Shaw JE, Simó R, Keane PA, Cheng CY, Tan GSW, Jia W, Tham YC, Li H, Sheng B, Wong TY. Integrated image-based deep learning and language models for primary diabetes care. Nat Med 2024; 30:2886-2896. [PMID: 39030266] [PMCID: PMC11485246] [DOI: 10.1038/s41591-024-03139-8]
Abstract
Primary diabetes care and diabetic retinopathy (DR) screening persist as major public health challenges due to a shortage of trained primary care physicians (PCPs), particularly in low-resource settings. Here, to bridge the gaps, we developed an integrated image-language system (DeepDR-LLM), combining a large language model (LLM module) and image-based deep learning (DeepDR-Transformer), to provide individualized diabetes management recommendations to PCPs. In a retrospective evaluation, the LLM module performed comparably to PCPs and endocrinology residents when tested in English and, when tested in Chinese, outperformed PCPs while performing comparably to endocrinology residents. For identifying referable DR, the average PCP's accuracy was 81.0% unassisted and 92.3% assisted by DeepDR-Transformer. Furthermore, we performed a single-center real-world prospective study deploying DeepDR-LLM. We compared diabetes management adherence of patients under the unassisted PCP arm (n = 397) with those under the PCP+DeepDR-LLM arm (n = 372). Patients with newly diagnosed diabetes in the PCP+DeepDR-LLM arm showed better self-management behaviors throughout follow-up (P < 0.05). For patients with referable DR, those in the PCP+DeepDR-LLM arm were more likely to adhere to DR referrals (P < 0.01). Additionally, DeepDR-LLM deployment improved the quality and empathy level of management recommendations. Given its multifaceted performance, DeepDR-LLM holds promise as a digital solution for enhancing primary diabetes care and DR screening.
Affiliation(s)
- Jiajia Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Jing Wang
- Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Cynthia Ciwei Lim
- Department of Renal Medicine, Singapore General Hospital, SingHealth-Duke Academic Medical Centre, Singapore, Singapore
- Paisan Ruamviboonsuk
- Faculty of Medicine, Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
- Leonor Corsino
- Department of Medicine, Division of Endocrinology, Metabolism and Nutrition, and Department of Population Health Sciences, Duke University School of Medicine, Durham, NC, USA
- Justin B Echouffo-Tcheugui
- Department of Medicine, Division of Endocrinology, Diabetes and Metabolism, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Andrea O Y Luk
- Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Hong Kong Institute of Diabetes and Obesity, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Asia Diabetes Foundation, Hong Kong Special Administrative Region, China
- Li Jia Chen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Xiaodong Sun
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ruhan Liu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Tingli Chen
- Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
- Xiao Zhang
- The People's Hospital of Sixian County, Anhui, China
- Xiaolong Yang
- Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
- Jun Yin
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Jing Wan
- Department of Endocrinology and Metabolism, Shanghai Eighth People's Hospital, Shanghai, China
- Wei Du
- Department of Endocrinology and Metabolism, Shanghai Eighth People's Hospital, Shanghai, China
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Truong X Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Simon K H Szeto
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Peranut Chotcomwongse
- Faculty of Medicine, Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Rachid Malek
- Department of Internal Medicine, Setif University Ferhat Abbas, Setif, Algeria
- Nargiza Normatova
- Ophthalmology Department at Tashkent Advanced Training Institute for Doctors, Tashkent, Uzbekistan
- Nilufar Ibragimova
- Charity Union of Persons with Disabilities and People with Diabetes UMID, Tashkent, Uzbekistan
- Ramyaa Srinivasan
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
- Pingting Zhong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Chenxin Deng
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lei Ruan
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cuntai Zhang
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Chenxi Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Yan Zhou
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Sky Wei Chee Koh
- National University Polyclinics, National University Health System, Department of Family Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Adina Abdullah
- Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Hong Chang Tan
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Zhong Hong Liew
- Department of Renal Medicine, Singapore General Hospital, SingHealth-Duke Academic Medical Centre, Singapore, Singapore
- Carolyn Shan-Yeu Tien
- Department of Renal Medicine, Singapore General Hospital, SingHealth-Duke Academic Medical Centre, Singapore, Singapore
- Shih Ling Kao
- Division of Endocrinology, University Medicine Cluster, National University Health System, Singapore, Singapore
- Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Amanda Yuan Ling Lim
- Division of Endocrinology, University Medicine Cluster, National University Health System, Singapore, Singapore
- Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Shao Feng Mok
- Division of Endocrinology, University Medicine Cluster, National University Health System, Singapore, Singapore
- Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Lina Sun
- Department of Internal Medicine, Huadong Sanatorium, Wuxi, China
- Jing Gu
- Department of Internal Medicine, Huadong Sanatorium, Wuxi, China
- Liang Wu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Tingyao Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Di Cheng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Zheyuan Wang
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiming Qin
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ling Dai
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ziyao Meng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jia Shu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuwei Lu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Nan Jiang
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Tingting Hu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Shan Huang
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Gengyou Huang
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shujie Yu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Dan Liu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Weizhi Ma
- Institute for AI Industry Research, Tsinghua University, Beijing, China
- Minyi Guo
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Xinping Guan
- Department of Automation and the Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai Jiao Tong University, Shanghai, China
- Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Covadonga Bascaran
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, University of London, London, UK
- Charles R Cleland
- International Centre for Eye Health, London School of Hygiene and Tropical Medicine, University of London, London, UK
- Yuqian Bao
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Elif I Ekinci
- Department of Endocrinology, Austin Health, Melbourne, Victoria, Australia
- Department of Medicine, The University of Melbourne (Austin Health), Melbourne, Victoria, Australia
- Australian Centre for Accelerating Diabetes Innovations, The University of Melbourne, Parkville, Victoria, Australia
- Alicia Jenkins
- Australian Centre for Accelerating Diabetes Innovations, The University of Melbourne, Parkville, Victoria, Australia
- Baker Heart and Diabetes Institute, Melbourne, Victoria, Australia
- NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Juliana C N Chan
- Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Hong Kong Institute of Diabetes and Obesity, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Asia Diabetes Foundation, Hong Kong Special Administrative Region, China
- Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Jonathan E Shaw
- Department of Medicine, The University of Melbourne (Austin Health), Melbourne, Victoria, Australia
- Rafael Simó
- Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas, Instituto de Salud Carlos III, Madrid, Spain
- Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institut, Autonomous University of Barcelona, Barcelona, Spain
- Pearse A Keane
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Center for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Weiping Jia
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Center for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore.
| | - Huating Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
| | - Bin Sheng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China.
- Beijing Tsinghua Changgung Hospital, Beijing, China.
- Zhongshan Ophthalmic Center, Guangzhou, China.
| |
Collapse
3
Nakayama LF, Matos J, Quion J, Novaes F, Mitchell WG, Mwavu R, Hung CJYJ, Santiago APD, Phanphruk W, Cardoso JS, Celi LA. Unmasking biases and navigating pitfalls in the ophthalmic artificial intelligence lifecycle: A narrative review. PLOS Digit Health 2024; 3:e0000618. [PMID: 39378192] [PMCID: PMC11460710] [DOI: 10.1371/journal.pdig.0000618]
Abstract
Over the past two decades, exponential growth in data availability, computational power, and newly available modeling techniques has led to an expansion in interest, investment, and research in Artificial Intelligence (AI) applications. Ophthalmology is one of many fields that seek to benefit from AI given the advent of telemedicine screening programs and the use of ancillary imaging. However, before AI can be widely deployed, further work must be done to avoid the pitfalls within the AI lifecycle. This review article breaks down the AI lifecycle into seven steps: data collection; defining the model task; data preprocessing and labeling; model development; model evaluation and validation; deployment; and finally, post-deployment evaluation, monitoring, and system recalibration. It then delves into the risks for harm at each step and strategies for mitigating them.
Affiliation(s)
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Sao Paulo, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- João Matos
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Faculty of Engineering (FEUP), University of Porto, Porto, Portugal
- Institute for Systems and Computer Engineering (INESC TEC), Technology and Science, Porto, Portugal
- Justin Quion
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Frederico Novaes
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Sao Paulo, Brazil
- Rogers Mwavu
- Department of Information Technology, Mbarara University of Science and Technology, Mbarara, Uganda
- Claudia Ju-Yi Ji Hung
- Department of Ophthalmology, Byers Eye Institute at Stanford, California, United States of America
- Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
- Alvina Pauline Dy Santiago
- University of the Philippines Manila College of Medicine, Manila, Philippines
- Division of Pediatric Ophthalmology, Department of Ophthalmology & Visual Sciences, Philippine General Hospital, Manila, Philippines
- Section of Pediatric Ophthalmology, Eye and Vision Institute, The Medical City, Pasig, Philippines
- Section of Pediatric Ophthalmology, International Eye and Institute, St. Luke’s Medical Center, Quezon City, Philippines
- Warachaya Phanphruk
- Department of Ophthalmology, Faculty of Medicine, Khon Kaen University, Khon Kaen, Thailand
- Jaime S. Cardoso
- Faculty of Engineering (FEUP), University of Porto, Porto, Portugal
- Institute for Systems and Computer Engineering (INESC TEC), Technology and Science, Porto, Portugal
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, Massachusetts, United States of America
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
4
Wenderott K, Krups J, Zaruchas F, Weigl M. Effects of artificial intelligence implementation on efficiency in medical imaging-a systematic literature review and meta-analysis. NPJ Digit Med 2024; 7:265. [PMID: 39349815] [PMCID: PMC11442995] [DOI: 10.1038/s41746-024-01248-9]
Abstract
In healthcare, integration of artificial intelligence (AI) holds strong promise for facilitating clinicians' work, especially in clinical imaging. We aimed to assess the impact of AI implementation for medical imaging on efficiency in real-world clinical workflows and conducted a systematic review searching six medical databases. Two reviewers double-screened all records. Eligible records were evaluated for methodological quality. The outcomes of interest were workflow adaptation due to AI implementation, changes in time for tasks, and clinician workload. After screening 13,756 records, we identified 48 original studies to be included in the review. Thirty-three studies measured time for tasks, with 67% reporting reductions. Yet, three separate meta-analyses of 12 studies did not show significant effects after AI implementation. We identified five different workflows adapting to AI use. Most commonly, AI served as a secondary reader for detection tasks. Alternatively, AI was used as the primary reader for identifying positive cases, resulting in reorganizing worklists or issuing alerts. Only three studies scrutinized workload calculations based on the time saved through AI use. This systematic review and meta-analysis assesses the efficiency improvements offered by AI applications in real-world clinical imaging, predominantly revealing enhancements across the studies. However, considerable heterogeneity across the available studies precludes robust inferences regarding overall effectiveness in imaging tasks. Further work is needed on standardized reporting, evaluation of system integration, and real-world data collection to better understand the technological advances of AI in real-world healthcare workflows. Systematic review registration: Prospero ID CRD42022303439, International Registered Report Identifier (IRRID): RR2-10.2196/40485.
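Pooled effects in reviews like this one are typically obtained with an inverse-variance random-effects model. The sketch below illustrates DerSimonian-Laird pooling; the effect sizes and variances are hypothetical, not data from the review.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean time differences (minutes) and their variances
effects = [-1.2, -0.4, 0.3, -0.8]
variances = [0.10, 0.20, 0.15, 0.25]
pooled, ci = dersimonian_laird(effects, variances)
```

A wide confidence interval straddling zero, as with these mixed effects, is exactly the "no significant pooled effect despite individual reductions" pattern the abstract describes.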
Affiliation(s)
- Jim Krups
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
- Fiona Zaruchas
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
- Matthias Weigl
- Institute for Patient Safety, University Hospital Bonn, Bonn, Germany
5
Liu L, Hong J, Wu Y, Liu S, Wang K, Li M, Zhao L, Liu Z, Li L, Cui T, Tsui CK, Xu F, Hu W, Yun D, Chen X, Shang Y, Bi S, Wei X, Lai Y, Lin D, Fu Z, Deng Y, Cai K, Xie Y, Cao Z, Wang D, Zhang X, Dongye M, Lin H, Wu X. Digital ray: enhancing cataractous fundus images using style transfer generative adversarial networks to improve retinopathy detection. Br J Ophthalmol 2024; 108:1423-1429. [PMID: 38839251] [PMCID: PMC11503040] [DOI: 10.1136/bjo-2024-325403]
Abstract
BACKGROUND/AIMS The aim of this study was to develop and evaluate digital ray, based on preoperative and postoperative image pairs using style transfer generative adversarial networks (GANs), to enhance cataractous fundus images for improved retinopathy detection. METHODS For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Then, both the original CycleGAN and a modified CycleGAN (C2ycleGAN) framework were adopted for image generation and quantitatively compared using Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performance. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images. RESULTS A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C2ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentages of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, the accuracies also increased from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF. CONCLUSION Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER This study was registered with ClinicalTrials.gov (NCT05491798).
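FID and KID compare the distributions of real and generated images in a deep feature space. As a simplified illustration, the Fréchet distance between two one-dimensional Gaussians uses the same closed form that FID applies to multivariate Inception-v3 embeddings; the sample values below are arbitrary.

```python
import math
from statistics import mean, pvariance

def frechet_distance_1d(x, y):
    """Fréchet distance between 1-D Gaussians fitted to samples x and y.
    FID applies the multivariate analogue to Inception embeddings:
    |mu1 - mu2|^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2))."""
    m1, m2 = mean(x), mean(y)
    v1, v2 = pvariance(x), pvariance(y)
    return (m1 - m2) ** 2 + v1 + v2 - 2 * math.sqrt(v1 * v2)

# Identical sample sets give distance ~0; a shifted set gives ~(shift)^2.
a = [0.1, 0.2, 0.3, 0.4]
b = [1.1, 1.2, 1.3, 1.4]          # a shifted by 1.0
d_same = frechet_distance_1d(a, a)
d_shift = frechet_distance_1d(a, b)
```

Lower values indicate generated and real feature distributions are closer, which is how the abstract's "significantly improved quality" under C2ycleGAN is quantified.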
Affiliation(s)
- Lixue Liu
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiaming Hong
- School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yuxuan Wu
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Shaopeng Liu
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Kai Wang
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Mingyuan Li
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Lanqin Zhao
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zhenzhen Liu
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Longhui Li
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Tingxin Cui
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Ching-Kit Tsui
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Fabao Xu
- Qilu Hospital of Shandong University, Jinan, Shandong, China
- Weiling Hu
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Dongyuan Yun
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xi Chen
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yuanjun Shang
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Shaowei Bi
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaoyue Wei
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yunxi Lai
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Duoru Lin
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zhe Fu
- Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Yaru Deng
- Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Kaimin Cai
- Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Yi Xie
- Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Zizheng Cao
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Dongni Wang
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xulin Zhang
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Meimei Dongye
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Haotian Lin
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaohang Wu
- Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
6
Antaki F, Hammana I, Tessier MC, Boucher A, David Jetté ML, Beauchemin C, Hammamji K, Ong AY, Rhéaume MA, Gauthier D, Harissi-Dagher M, Keane PA, Pomp A. Implementation of Artificial Intelligence-Based Diabetic Retinopathy Screening in a Tertiary Care Hospital in Quebec: Prospective Validation Study. JMIR Diabetes 2024; 9:e59867. [PMID: 39226095] [PMCID: PMC11408885] [DOI: 10.2196/59867]
Abstract
BACKGROUND Diabetic retinopathy (DR) affects about 25% of people with diabetes in Canada. Early detection of DR is essential for preventing vision loss. OBJECTIVE We evaluated the real-world performance of an artificial intelligence (AI) system that analyzes fundus images for DR screening in a Quebec tertiary care center. METHODS We prospectively recruited adult patients with diabetes at the Centre hospitalier de l'Université de Montréal (CHUM) in Montreal, Quebec, Canada. Patients underwent dual-pathway screening: first by the Computer Assisted Retinal Analysis (CARA) AI system (index test), then by standard ophthalmological examination (reference standard). We measured the AI system's sensitivity and specificity for detecting referable disease at the patient level, along with its performance for detecting any retinopathy and diabetic macular edema (DME) at the eye level, and potential cost savings. RESULTS This study included 115 patients. CARA demonstrated a sensitivity of 87.5% (95% CI 71.9-95.0) and specificity of 66.2% (95% CI 54.3-76.3) for detecting referable disease at the patient level. For any retinopathy detection at the eye level, CARA showed 88.2% sensitivity (95% CI 76.6-94.5) and 71.4% specificity (95% CI 63.7-78.1). For DME detection, CARA had 100% sensitivity (95% CI 64.6-100) and 81.9% specificity (95% CI 75.6-86.8). Potential yearly savings from implementing CARA at the CHUM were estimated at CAD $245,635 (US $177,643.23, as of July 26, 2024) considering 5000 patients with diabetes. CONCLUSIONS Our study indicates that integrating a semiautomated AI system for DR screening demonstrates high sensitivity for detecting referable disease in a real-world setting. This system has the potential to improve screening efficiency and reduce costs at the CHUM, but more work is needed to validate it.
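Sensitivities like the 87.5% (95% CI 71.9-95.0) above come from a 2x2 confusion table with a binomial interval. The sketch below uses a Wilson score interval; the counts of 28/32 are inferred for illustration (they reproduce the reported point estimate and CI) and are not stated in the abstract.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Inferred counts: 28 of 32 patients with referable disease flagged by the AI
tp, fn = 28, 4
sensitivity = tp / (tp + fn)            # 0.875
lo, hi = wilson_ci(tp, tp + fn)         # roughly (0.719, 0.950)
```

The wide interval at n = 32 is why small prospective validations such as this one call for further confirmation before deployment.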
Affiliation(s)
- Fares Antaki
- Institute of Ophthalmology, University College London, London, United Kingdom
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- The CHUM School of Artificial Intelligence in Healthcare, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Imane Hammana
- Health Technology Assessment Unit, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Marie-Catherine Tessier
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Andrée Boucher
- Division of Endocrinology, Department of Medicine, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Maud Laurence David Jetté
- Direction du soutien à la transformation, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Karim Hammamji
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- Ariel Yuhan Ong
- Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Marc-André Rhéaume
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- Danny Gauthier
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- Mona Harissi-Dagher
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Ophthalmology, Université de Montréal, Montreal, QC, Canada
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- NIHR Moorfields Biomedical Research Centre, London, United Kingdom
- Alfons Pomp
- Health Technology Assessment Unit, Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Department of Surgery, University of Montréal, Montreal, QC, Canada
7
Peng J, Abdulla R, Liu X, He F, Xin X, Aisa HA. Polyphenol-Rich Extract of Apocynum venetum L. Leaves Protects Human Retinal Pigment Epithelial Cells against High Glucose-Induced Damage through Polyol Pathway and Autophagy. Nutrients 2024; 16:2944. [PMID: 39275261] [PMCID: PMC11397065] [DOI: 10.3390/nu16172944]
Abstract
Diabetic retinopathy (DR) is a specific microvascular complication of diabetes, mainly caused by hyperglycemia, that may lead to rapid vision loss. Dietary polyphenols have been reported to decrease the risk of DR. Apocynum venetum L. leaves are rich in polyphenolic compounds and are consumed worldwide as a tea drink valued for its health benefits. Building on previous findings of antioxidant activity and aldose reductase inhibition of A. venetum, this study investigated the chemical composition of polyphenol-rich extract of A. venetum leaves (AVL) and its protective mechanism on ARPE-19 cells in hyperglycemia. Ninety-three compounds were identified from AVL by LC-MS/MS, including sixty-eight flavonoids, twenty-one organic acids, and four coumarins. AVL regulated the polyol pathway by decreasing the expression of aldose reductase and the content of sorbitol, enhancing Na+/K+-ATPase activity, and effectively attenuating intracellular oxidative stress; it could also regulate the expression of autophagy-related proteins via the AMPK/mTOR/ULK1 signaling pathway to maintain intracellular homeostasis. AVL could restore the polyol pathway, inhibit oxidative stress, and maintain intracellular autophagy to protect cellular morphology and improve DR. The study reveals the phytochemical composition and protective mechanisms of AVL against DR, which could be developed as a functional food and/or candidate pharmaceutical for retina protection in diabetic retinopathy.
Affiliation(s)
- Jun Peng
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
- University of Chinese Academy of Sciences, Beijing 100039, China
- Rahima Abdulla
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
- Xiaoyan Liu
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
- University of Chinese Academy of Sciences, Beijing 100039, China
- Fei He
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
- Xuelei Xin
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
- Haji Akber Aisa
- The State Key Laboratory Basis Xinjiang Indigenous Medicinal Plant Resource, Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
8
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746] [PMCID: PMC11385472] [DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology strengthens the signalling hypotheses and supports using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
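The reported association between hypertensive retinopathy severity and misclassified diabetic retinopathy is the kind of relationship usually quantified with a rank correlation on ordinal grades. A minimal Spearman sketch with hypothetical grades and model scores (not the study's data):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):                     # average ranks over ties
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical hypertensive retinopathy grades vs. DR model output scores
grades = [0, 1, 1, 2, 2, 3, 3, 4]
outputs = [0.05, 0.10, 0.20, 0.30, 0.25, 0.55, 0.60, 0.80]
rho = spearman_rho(grades, outputs)
```

A rho near 1 here mirrors the abstract's claim that higher hypertensive severity tracks with a stronger (misclassified) diabetic-retinopathy signal.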
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
9
Dos Reis MA, Künas CA, da Silva Araújo T, Schneiders J, de Azevedo PB, Nakayama LF, Rados DRV, Umpierre RN, Berwanger O, Lavinsky D, Malerbi FK, Navaux POA, Schaan BD. Advancing healthcare with artificial intelligence: diagnostic accuracy of machine learning algorithm in diagnosis of diabetic retinopathy in the Brazilian population. Diabetol Metab Syndr 2024; 16:209. [PMID: 39210394] [PMCID: PMC11360296] [DOI: 10.1186/s13098-024-01447-0]
Abstract
BACKGROUND In healthcare systems in general, access to diabetic retinopathy (DR) screening is limited. Artificial intelligence has the potential to increase care delivery. Therefore, we trained and evaluated the diagnostic accuracy of a machine learning algorithm for automated detection of DR. METHODS We included color fundus photographs from individuals from 4 databases (primary and specialized care settings), excluding uninterpretable images. The datasets consist of images from Brazilian patients, which differs from previous work. This modification allows for a more tailored application of the model to Brazilian patients, ensuring that the nuances and characteristics of this specific population are adequately captured. The sample was split into training (70%) and testing (30%) sets. A convolutional neural network was trained for image classification. The reference test was the combined decision from three ophthalmologists. The sensitivity, specificity, and area under the ROC curve of the algorithm for detecting referable DR (moderate non-proliferative DR; severe non-proliferative DR; proliferative DR and/or clinically significant macular edema) were estimated. RESULTS A total of 15,816 images (4590 patients) were included. The overall prevalence of any degree of DR was 26.5%. Compared with human evaluators (manual diagnosis of DR by an ophthalmologist), the deep learning algorithm achieved an area under the ROC curve of 0.98 (95% CI 0.97-0.98), with a specificity of 94.6% (95% CI 93.8-95.3) and a sensitivity of 93.5% (95% CI 92.2-94.9) at the point of greatest efficiency to detect referable DR. CONCLUSIONS A large database showed that this deep learning algorithm was accurate in detecting referable DR. This finding can aid universal healthcare systems such as Brazil's by optimizing screening processes, making DR screening more agile and expanding access to care.
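The "point of greatest efficiency" on a ROC curve is commonly the threshold maximizing Youden's J (sensitivity + specificity - 1). A sketch with hypothetical model scores and labels (not the study's data):

```python
def youden_point(scores, labels):
    """Return (threshold, sensitivity, specificity) maximizing Youden's J."""
    best = None
    for t in sorted(set(scores)):
        tp = sum(s >= t and y for s, y in zip(scores, labels))
        fn = sum(s < t and y for s, y in zip(scores, labels))
        tn = sum(s < t and not y for s, y in zip(scores, labels))
        fp = sum(s >= t and not y for s, y in zip(scores, labels))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best[1], best[2], best[3]

# Hypothetical scores for referable DR (label 1) vs non-referable (label 0)
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0, 0]
threshold, sens, spec = youden_point(scores, labels)
```

Production ROC tooling (e.g. scikit-learn's `roc_curve`) computes the same candidate thresholds more efficiently; the operating point chosen this way is what the abstract's paired sensitivity/specificity describes.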
Affiliation(s)
- Mateus A Dos Reis
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil.
- Universidade Feevale, Novo Hamburgo, RS, Brazil.
- Cristiano A Künas
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Thiago da Silva Araújo
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Josiane Schneiders
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Luis F Nakayama
- Department of Ophthalmology and Visual Sciences, Universidade Federal de São Paulo, São Paulo, Brazil
- Laboratory for Computational Physiology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dimitris R V Rados
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- TelessaúdeRS Project, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Roberto N Umpierre
- TelessaúdeRS Project, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Department of Social Medicine, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
- Otávio Berwanger
- The George Institute for Global Health, Imperial College London, London, UK
- Daniel Lavinsky
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Department of Ophthalmology, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
- Fernando K Malerbi
- Department of Ophthalmology and Visual Sciences, Universidade Federal de São Paulo, São Paulo, Brazil
- Philippe O A Navaux
- Institute of Informatics, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Beatriz D Schaan
- Graduate Program in Medical Sciences: Endocrinology, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil
- Institute for Health Technology Assessment (IATS) - CNPq, Porto Alegre, Brazil
- Endocrinology Unit, Hospital de Clínicas de Porto Alegre, Porto Alegre, RS, Brazil
10
Rotem O, Schwartz T, Maor R, Tauber Y, Shapiro MT, Meseguer M, Gilboa D, Seidman DS, Zaritsky A. Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization. Nat Commun 2024; 15:7390. [PMID: 39191720 DOI: 10.1038/s41467-024-51136-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 07/31/2024] [Indexed: 08/29/2024] Open
Abstract
The success of deep learning in identifying complex patterns beyond human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a "black box" that lacks humanly meaningful explanations for the model's decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables "human-in-the-loop" interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decisions for specific embryo instances. We show that DISCOVER provides human-interpretable understanding of "black box" classification models, proposes hypotheses for deciphering underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
Affiliation(s)
- Oded Rotem
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel
- Ron Maor
- AIVF Ltd., Tel Aviv, 69271, Israel
- Marcos Meseguer
- IVI Foundation Instituto de Investigación Sanitaria La Fe, Valencia, 46026, Spain
- Department of Reproductive Medicine, IVIRMA Valencia, 46015, Valencia, Spain
- Daniel S Seidman
- AIVF Ltd., Tel Aviv, 69271, Israel
- The Faculty of Medicine, Tel Aviv University, Tel-Aviv, 69978, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel
11
Wang Y, Han X, Li C, Luo L, Yin Q, Zhang J, Peng G, Shi D, He M. Impact of Gold-Standard Label Errors on Evaluating Performance of Deep Learning Models in Diabetic Retinopathy Screening: Nationwide Real-World Validation Study. J Med Internet Res 2024; 26:e52506. [PMID: 39141915 PMCID: PMC11358665 DOI: 10.2196/52506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 12/30/2023] [Accepted: 03/22/2024] [Indexed: 08/16/2024] Open
Abstract
BACKGROUND For medical artificial intelligence (AI) training and validation, human expert labels are considered the gold standard, representing the correct answers or desired outputs for a given data set. These labels serve as a reference or benchmark against which the model's predictions are compared. OBJECTIVE This study aimed to assess the accuracy of a custom deep learning (DL) algorithm in classifying diabetic retinopathy (DR) and to demonstrate how label errors may affect this assessment in a nationwide DR-screening program. METHODS Fundus photographs from the Lifeline Express, a nationwide DR-screening program, were analyzed to identify the presence of referable DR using both (1) manual grading by National Health Service England-certified graders and (2) a DL-based DR-screening algorithm with validated good laboratory performance. To assess label accuracy, a random sample of images on which the DL algorithm and the labels disagreed was adjudicated by ophthalmologists masked to the previous grading results. The label error rates in this sample were then used to correct the numbers of negative and positive cases in the entire data set, yielding postcorrection labels. The DL algorithm's performance was evaluated against both pre- and postcorrection labels. RESULTS The analysis included 736,083 images from 237,824 participants. The DL algorithm exhibited a gap between real-world and lab-reported performance in this nationwide data set; after label correction, sensitivity rose by 12.5% (from 79.6% to 92.5%, P<.001) and specificity by 6.9% (from 91.6% to 98.5%, P<.001). In the random sample, 63.6% (560/880) of negative images and 5.2% (140/2710) of positive images were misclassified in the precorrection human labels. High myopia was the primary reason non-DR images were misclassified as referable DR, while laser spots were predominantly responsible for misclassified referable cases. The estimated label error rate for the entire data set was 1.2%. Label correction was estimated to bring about a 12.5% enhancement in the estimated sensitivity of the DL algorithm (P<.001). CONCLUSIONS Label errors in human image grading, although occurring at a low rate, can significantly affect the performance evaluation of DL algorithms in real-world DR screening.
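The mechanism the authors describe, reference-standard errors depressing a model's measured sensitivity, can be illustrated with a toy calculation. The counts below are synthetic, not the study's data:

```python
# Toy illustration (synthetic counts): images labeled negative but truly
# positive penalize the model until the labels are corrected, so correcting
# them raises the measured sensitivity.

def sensitivity(tp, fn):
    return tp / (tp + fn)

# Pre-correction: 1000 label-positive images, of which the model flags 800.
print(round(sensitivity(800, 200), 3))  # 0.8

# Suppose adjudication finds 150 label-negative images that are truly positive,
# 140 of which the model had flagged (previously counted as false positives).
tp = 800 + 140
fn = 200 + 10
print(round(sensitivity(tp, fn), 3))  # measured sensitivity rises to 0.817
```

The same bookkeeping explains why the study's postcorrection specificity also rises: corrected labels convert apparent false positives into true positives.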
Affiliation(s)
- Yueye Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Cong Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lixia Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qiuxia Yin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Jian Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Guankai Peng
- Guangzhou Vision Tech Medical Technology Co, Ltd, Guangzhou, China
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Centre for Eye and Vision Research, Hong Kong, China (Hong Kong)
12
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289 PMCID: PMC11246322 DOI: 10.1007/s40123-024-00981-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2024] [Accepted: 04/15/2024] [Indexed: 06/25/2024] Open
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographic images. We highlight the use of various AI algorithms, including deep learning (DL) models, in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI interpretation of retinal images, compared against clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). A substantial volume of clinical and imaging data is now available for this research, enabling the incorporation of AI and DL into automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow; lowering cost; increasing access; reducing mistakes; and transforming the education and training of healthcare workers.
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland.
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Tien Y Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
13
Wang Y, Yang Z, Guo X, Jin W, Lin D, Chen A, Zhou M. Automated early detection of acute retinal necrosis from ultra-widefield color fundus photography using deep learning. EYE AND VISION (LONDON, ENGLAND) 2024; 11:27. [PMID: 39085922 PMCID: PMC11293155 DOI: 10.1186/s40662-024-00396-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2024] [Accepted: 06/23/2024] [Indexed: 08/02/2024]
Abstract
BACKGROUND Acute retinal necrosis (ARN) is a relatively rare but highly damaging and potentially sight-threatening type of uveitis caused by human herpesvirus infection. Without timely diagnosis and appropriate treatment, ARN can lead to severe vision loss. We aimed to develop a deep learning framework to distinguish ARN from other types of intermediate, posterior, and panuveitis using ultra-widefield color fundus photography (UWFCFP). METHODS We conducted a two-center retrospective discovery and validation study to develop and validate a deep learning model called DeepDrARN for automatic uveitis detection and differentiation of ARN from other uveitis types, using 11,508 UWFCFPs from 1,112 participants. Model performance was evaluated with the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR), and sensitivity and specificity, and was compared with that of seven ophthalmologists. RESULTS DeepDrARN achieved an AUROC of 0.996 (95% CI: 0.994-0.999) for uveitis screening in the internal validation cohort and demonstrated good generalizability with an AUROC of 0.973 (95% CI: 0.956-0.990) in the external validation cohort. DeepDrARN also demonstrated excellent predictive ability in distinguishing ARN from other types of uveitis, with AUROCs of 0.960 (95% CI: 0.943-0.977) and 0.971 (95% CI: 0.956-0.986) in the internal and external validation cohorts. DeepDrARN was also tested on the differentiation of ARN, non-ARN uveitis (NAU) and normal subjects, with sensitivities of 88.9% and 78.7% and specificities of 93.8% and 89.1% in the internal and external validation cohorts, respectively. The performance of DeepDrARN is comparable to that of ophthalmologists and even exceeds the average accuracy of seven ophthalmologists, showing an improvement of 6.57% in uveitis screening and 11.14% in ARN identification. CONCLUSIONS Our study demonstrates the feasibility of deep learning algorithms for enabling early detection, reducing treatment delays, and improving outcomes for ARN patients.
Affiliation(s)
- Yuqin Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zijian Yang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xingneng Guo
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Wang Jin
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Dan Lin
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Anying Chen
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, 315042, China
- Meng Zhou
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
14
Richardson A, Kundu A, Henao R, Lee T, Scott BL, Grewal DS, Fekrat S. Multimodal Retinal Imaging Classification for Parkinson's Disease Using a Convolutional Neural Network. Transl Vis Sci Technol 2024; 13:23. [PMID: 39136960 PMCID: PMC11323992 DOI: 10.1167/tvst.13.8.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2024] [Accepted: 06/23/2024] [Indexed: 08/16/2024] Open
Abstract
Purpose Changes in retinal structure and microvasculature parallel changes in the brain. Two recent studies described machine learning algorithms, trained on retinal images and quantitative data, that identified Alzheimer's dementia and mild cognitive impairment with high accuracy. Prior studies have also demonstrated retinal differences in individuals with Parkinson's disease (PD). Herein, we developed a convolutional neural network (CNN) to classify multimodal retinal imaging as coming from a PD or control group. Methods We trained a CNN on retinal image inputs, namely optical coherence tomography (OCT) ganglion cell-inner plexiform layer (GC-IPL) thickness color maps, OCT angiography 6 × 6-mm en face macular images of the superficial capillary plexus, and ultra-widefield (UWF) fundus color and autofluorescence photographs, to classify the imaging as PD or control. The model consists of a shared pretrained VGG19 feature extractor and image-specific feature transformations that converge to a single output. Model results were assessed using receiver operating characteristic (ROC) curves and bootstrapped 95% confidence intervals for area under the ROC curve (AUC) values. Results In total, 371 eyes of 249 control subjects and 75 eyes of 52 PD subjects were used for training, validation, and testing. Our best CNN variant achieved an AUC of 0.918. UWF color photographs were the most effective imaging input, and GC-IPL thickness maps were the least contributory. Conclusions Using retinal images, our pilot CNN was able to identify individuals with PD and serves as a proof of concept to spur the collection of the larger imaging datasets needed for clinical-grade algorithms. Translational Relevance Machine learning models for automated detection of Parkinson's disease from retinal imaging could lead to earlier and more widespread diagnoses.
Affiliation(s)
- Alexander Richardson
- Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Department of Computer Science, Duke University, Durham, NC, USA
- Anita Kundu
- Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Ricardo Henao
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Department of Computer Science, Duke University, Durham, NC, USA
- Terry Lee
- Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Burton L. Scott
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Department of Neurology, Duke University School of Medicine, Durham, NC, USA
- Dilraj S. Grewal
- Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Sharon Fekrat
- Duke Eye Center, Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- iMIND Research Group, Duke University School of Medicine, Durham, NC, USA
- Department of Neurology, Duke University School of Medicine, Durham, NC, USA
15
Serikbaeva A, Li Y, Ma S, Yi D, Kazlauskas A. Resilience to diabetic retinopathy. Prog Retin Eye Res 2024; 101:101271. [PMID: 38740254 PMCID: PMC11262066 DOI: 10.1016/j.preteyeres.2024.101271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 05/03/2024] [Accepted: 05/10/2024] [Indexed: 05/16/2024]
Abstract
Chronic elevation of blood glucose at first causes relatively minor changes to the neural and vascular components of the retina. As the duration of hyperglycemia persists, the nature and extent of damage increases and becomes readily detectable. While this second, overt manifestation of diabetic retinopathy (DR) has been studied extensively, what prevents maximal damage from the very start of hyperglycemia remains largely unexplored. Recent studies indicate that diabetes (DM) engages mitochondria-based defense during the retinopathy-resistant phase, and thereby enables the retina to remain healthy in the face of hyperglycemia. Such resilience is transient, and its deterioration results in progressive accumulation of retinal damage. The concepts that co-emerge with these discoveries set the stage for novel intellectual and therapeutic opportunities within the DR field. Identification of biomarkers and mediators of protection from DM-mediated damage will enable development of resilience-based therapies that will indefinitely delay the onset of DR.
Affiliation(s)
- Anara Serikbaeva
- Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Yanliang Li
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Simon Ma
- Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Darvin Yi
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA; Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA
- Andrius Kazlauskas
- Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612, USA.
16
Malerbi FK, Nakayama LF, Melo GB, Stuchi JA, Lencione D, Prado PV, Ribeiro LZ, Dib SA, Regatieri CV. Automated Identification of Different Severity Levels of Diabetic Retinopathy Using a Handheld Fundus Camera and Single-Image Protocol. OPHTHALMOLOGY SCIENCE 2024; 4:100481. [PMID: 38694494 PMCID: PMC11060947 DOI: 10.1016/j.xops.2024.100481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 01/20/2024] [Accepted: 01/25/2024] [Indexed: 05/04/2024]
Abstract
Purpose To evaluate the performance of artificial intelligence (AI) systems embedded in a mobile, handheld retinal camera, with a single retinal image protocol, in detecting both diabetic retinopathy (DR) and more-than-mild diabetic retinopathy (mtmDR). Design Multicenter cross-sectional diagnostic study conducted at 3 diabetes care and eye care facilities. Participants A total of 327 individuals with diabetes mellitus (type 1 or type 2) underwent a retinal imaging protocol enabling expert reading and automated analysis. Methods Participants underwent fundus photography with a portable retinal camera (Phelcom Eyer). The captured images were automatically analyzed by the deep learning algorithms Retinal Alteration Score (RAS) and Diabetic Retinopathy Alteration Score (DRAS), convolutional neural networks trained on EyePACS data sets and fine-tuned on data sets of portable-device fundus images. The ground truth was the DR classification from adjudicated expert reading, performed by 3 certified ophthalmologists. Main Outcome Measures Primary outcome measures were the sensitivity and specificity of the AI system in detecting DR and/or mtmDR using a single-field, macula-centered fundus photograph for each eye, compared with a rigorous clinical reference standard comprising reading-center grading of a 2-field imaging protocol using the International Classification of Diabetic Retinopathy severity scale. Results Of 327 analyzed patients (mean age, 57.0 ± 16.8 years; mean diabetes duration, 16.3 ± 9.7 years), 307 completed the study protocol. Sensitivity and specificity of the AI system were high in detecting any DR with DRAS (sensitivity, 90.48% [95% confidence interval (CI), 84.99%-94.46%]; specificity, 90.65% [95% CI, 84.54%-94.93%]) and mtmDR with the combination of RAS and DRAS (sensitivity, 90.23% [95% CI, 83.87%-94.69%]; specificity, 85.06% [95% CI, 78.88%-90.00%]). The area under the receiver operating characteristic curve was 0.95 for any DR and 0.89 for mtmDR. Conclusions This study showed high accuracy for the detection of DR at different severity levels with a single retinal photo per eye in an all-in-one solution composed of a portable retinal camera powered by AI. Such a strategy holds great potential for increasing the coverage rates of screening programs, contributing to the prevention of avoidable blindness. Financial Disclosures F.K.M. is a medical consultant for Phelcom Technologies. J.A.S. is Chief Executive Officer and proprietor of Phelcom Technologies. D.L. is Chief Technology Officer and proprietor of Phelcom Technologies. P.V.P. is an employee of Phelcom Technologies.
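The asymmetric confidence intervals reported above are typical of score-type binomial intervals. The paper does not state which CI method it used, so the following is only a sketch of one common choice, the Wilson score interval, on illustrative counts:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    z2 = z * z
    denom = 1 + z2 / n
    center = (p + z2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / denom
    return center - half, center + half

# Illustrative counts: 8 correct detections out of 10 positives.
lo, hi = wilson_ci(8, 10)
print(round(lo, 3), round(hi, 3))  # 0.49 0.943
```

Note how the interval is asymmetric around the point estimate of 0.8, as with the sensitivities and specificities quoted in the abstract.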
17
Papazafiropoulou AK. Diabetes management in the era of artificial intelligence. Arch Med Sci Atheroscler Dis 2024; 9:e122-e128. [PMID: 39086621 PMCID: PMC11289240 DOI: 10.5114/amsad/183420] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Accepted: 01/29/2024] [Indexed: 08/02/2024] Open
Abstract
Artificial intelligence is growing quickly, and its application to the global diabetes pandemic has the potential to completely change the way this chronic illness is identified and treated. Machine learning methods have been used to construct algorithms supporting predictive models of the risk of developing diabetes or its complications. Social media and Internet forums also increase patient participation in diabetes care. The optimisation of diabetes resource usage has benefited from technological improvements. As a lifestyle therapy intervention, digital therapeutics have made a name for themselves in the treatment of diabetes. Artificial intelligence will cause a paradigm shift in diabetes care, moving away from current methods and toward focused, data-driven precision treatment.
18
Fleming AD, Mellor J, McGurnaghan SJ, Blackbourn LAK, Goatman KA, Styles C, Storkey AJ, McKeigue PM, Colhoun HM. Deep learning detection of diabetic retinopathy in Scotland's diabetic eye screening programme. Br J Ophthalmol 2024; 108:984-988. [PMID: 37704266 DOI: 10.1136/bjo-2023-323395] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 08/17/2023] [Indexed: 09/15/2023]
Abstract
BACKGROUND/AIMS Support vector machine-based automated grading (known as iGradingM) has been shown to be safe, cost-effective and robust in Scotland's diabetic eye screening (DES) programme for diabetic retinopathy (DR). It triages screening episodes as gradable with no DR versus requiring manual grading. The study aim was to develop a deep learning-based autograder using images and gradings from DES and to compare its performance with that of iGradingM. METHODS Retinal images, quality assurance (QA) data and routine DR grades were obtained from national datasets covering 179 944 patients for the years 2006-2016. QA grades were available for 744 images. We developed a deep learning-based algorithm to detect whether either eye contained ungradable images or any DR. Sensitivity and specificity were evaluated against consensus QA grades and routine grades. RESULTS Deep learning detected ungradable images and DR in the QA image set with better specificity than manual graders (p<0.001) and than iGradingM (p<0.001) at the same sensitivities. Any DR according to the DES final grade was detected with 89.19% (270 392/303 154) sensitivity and 77.41% (500 945/647 158) specificity. Observable disease and referable disease were detected with sensitivities of 96.58% (16 613/17 201) and 98.48% (22 600/22 948), respectively. Overall, 43.84% of screening episodes would require manual grading. CONCLUSION A deep learning-based system for DR grading was evaluated on QA data and 11 years of images from 50% of people attending a national DR screening programme. The system could reduce the manual grading workload at the same sensitivity compared with the current automated grading system.
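Because the abstract reports raw counts alongside each rate, its headline figures can be double-checked with simple arithmetic:

```python
# Recomputing the abstract's any-DR detection rates from its reported counts.
sens = 270392 / 303154   # detected / all episodes with any DR (DES final grade)
spec = 500945 / 647158   # correctly cleared / all episodes without DR
print(round(100 * sens, 2))  # 89.19
print(round(100 * spec, 2))  # 77.41
```

Both values match the percentages quoted in the abstract, which is a quick sanity check worth running on any screening paper that publishes its confusion-matrix counts.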
Affiliation(s)
- Alan D Fleming
- The Institute of Genetics and Cancer, University of Edinburgh Western General Hospital, Edinburgh, UK
- Joseph Mellor
- Usher Institute, The University of Edinburgh, Edinburgh, UK
- Stuart J McGurnaghan
- The Institute of Genetics and Cancer, University of Edinburgh Western General Hospital, Edinburgh, UK
- Luke A K Blackbourn
- The Institute of Genetics and Cancer, University of Edinburgh Western General Hospital, Edinburgh, UK
- Amos J Storkey
- School of Informatics, The University of Edinburgh, Edinburgh, UK
- Helen M Colhoun
- The Institute of Genetics and Cancer, University of Edinburgh Western General Hospital, Edinburgh, UK
19
Yao J, Lim J, Lim GYS, Ong JCL, Ke Y, Tan TF, Tan TE, Vujosevic S, Ting DSW. Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema. EYE AND VISION (LONDON, ENGLAND) 2024; 11:23. [PMID: 38880890 PMCID: PMC11181581 DOI: 10.1186/s40662-024-00389-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Accepted: 05/09/2024] [Indexed: 06/18/2024]
Abstract
BACKGROUND Diabetic retinopathy (DR) and diabetic macular edema (DME) are major causes of visual impairment that challenge global vision health. New strategies are needed to tackle these growing global health problems, and the integration of artificial intelligence (AI) into ophthalmology has the potential to revolutionize DR and DME management to meet these challenges. MAIN TEXT This review discusses the latest AI-driven methodologies in the context of DR and DME in terms of disease identification, patient-specific disease profiling, and short-term and long-term management. This includes current screening and diagnostic systems and their real-world implementation, lesion detection and analysis, disease progression prediction, and treatment response models. It also highlights the technical advancements that have been made in these areas. Despite these advancements, there are obstacles to the widespread adoption of these technologies in clinical settings, including regulatory and privacy concerns, the need for extensive validation, and integration with existing healthcare systems. We also explore the disparity between the potential of AI models and their actual effectiveness in real-world applications. CONCLUSION AI has the potential to revolutionize the management of DR and DME, offering more efficient and precise tools for healthcare professionals. However, overcoming challenges in deployment, regulatory compliance, and patient privacy is essential for these technologies to realize their full potential. Future research should aim to bridge the gap between technological innovation and clinical application, ensuring AI tools integrate seamlessly into healthcare workflows to enhance patient outcomes.
Affiliation(s)
- Jie Yao
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Joshua Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Yong San Lim
- Duke-NUS Medical School, Singapore, Singapore
- SingHealth AI Health Program, Singapore, Singapore
- Jasmine Chiat Ling Ong
- Duke-NUS Medical School, Singapore, Singapore
- Division of Pharmacy, Singapore General Hospital, Singapore, Singapore
- Yuhe Ke
- Department of Anesthesiology and Perioperative Science, Singapore General Hospital, Singapore, Singapore
- Ting Fang Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Tien-En Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- Eye Clinic, IRCCS MultiMedica, Milan, Italy
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore.
- Duke-NUS Medical School, Singapore, Singapore.
- SingHealth AI Health Program, Singapore, Singapore.
20
Brant R, Nakayama LF, de Oliveira TVF, de Oliveira JAE, Ribeiro LZ, Richter GD, Rodacki R, Penha FM. Image quality comparison of AirDoc portable retina camera versus eyer in a diabetic retinopathy screening program. Int J Retina Vitreous 2024; 10:43. [PMID: 38877585 PMCID: PMC11177418 DOI: 10.1186/s40942-024-00559-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2024] [Accepted: 05/27/2024] [Indexed: 06/16/2024] Open
Abstract
BACKGROUND Diabetic retinopathy (DR) stands as the foremost cause of preventable blindness in adults. Despite efforts to expand DR screening coverage in the Brazilian public healthcare system, challenges persist due to various factors, including social, medical, and financial constraints. Our objective was to evaluate the quality of images obtained with the AirDoc, a novel device, compared with the Eyer portable camera, which has already been clinically validated. METHODS Images were captured by two portable retinal devices: AirDoc and Eyer. The included patients had their fundus images obtained in a screening program conducted in Blumenau, Santa Catarina. Two retina specialists independently assessed image quality, and the two devices were compared regarding image quality and the presence of artifacts. RESULTS The analysis included 129 patients (mean age of 61 years), with 29 (43.28%) male and an average disease duration of 11.1 ± 8 years. With AirDoc, 21 (16.28%) images were classified as poor quality, with 88 (68%) presenting artifacts; with Eyer, 4 (3.1%) images were classified as poor quality, with 94 (72.87%) presenting artifacts. CONCLUSIONS Although both the Eyer and AirDoc devices show potential as screening tools, the AirDoc images displayed higher rates of ungradable and low-quality images, which may directly affect DR and DME grading. We must acknowledge the limitations of our study, including the relatively small sample size; the interpretations of our analyses should therefore be approached with caution, and further investigations with larger patient cohorts are warranted to validate our findings.
Affiliation(s)
- Rodrigo Brant
  - Ophthalmology and Visual Science Department, Sao Paulo Federal University, Sao Paulo, SP, Brazil
  - Keck School of Medicine, Roski Eye Institute, University of Southern California, Los Angeles, USA
- Luis Filipe Nakayama
  - Ophthalmology and Visual Science Department, Sao Paulo Federal University, Sao Paulo, SP, Brazil
  - Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Lucas Zago Ribeiro
  - Ophthalmology and Visual Science Department, Sao Paulo Federal University, Sao Paulo, SP, Brazil
- Rafael Rodacki
  - Fundação Universidade Regional de Blumenau, Blumenau, SC, Brazil
21
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). The majority of false-positive software detections (5.6%) were glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
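As a quick illustration of the metrics reported in this abstract, the sketch below reconstructs sensitivity and specificity from the stated counts (2056 eyes, 149 software-flagged, 35 ophthalmologist-confirmed; the implied 114 false positives and 1907 true negatives are our inference, not figures given by the authors) and attaches Wilson score intervals. Note the paper's CIs appear to be exact (Clopper-Pearson) intervals, so the Wilson bounds here will differ slightly.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def screening_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson 95% CI."""
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {
        "sensitivity": (sens, wilson_ci(tp, tp + fn)),
        "specificity": (spec, wilson_ci(tn, tn + fp)),
    }

# Counts reconstructed from the abstract's totals (an assumption, see above):
# 35 abnormal eyes, all detected; 114 false positives among 2021 normal eyes.
m = screening_metrics(tp=35, fn=0, tn=1907, fp=114)
print(m["sensitivity"][0])                 # 1.0
print(round(m["specificity"][0], 3))       # 0.944
```

With these counts the specificity reproduces the abstract's 94.4%, which supports the reconstruction, but the counts themselves remain inferred.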
Affiliation(s)
- F-P Roubelat
  - Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Soler
  - Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- F Varenne
  - Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Gualino
  - Ophthalmology Department, Clinique Honoré-Cave, Montauban, France
22
Yao H, Wu Z, Gao SS, Guymer RH, Steffen V, Chen H, Hejrati M, Zhang M. Deep Learning Approaches for Detecting of Nascent Geographic Atrophy in Age-Related Macular Degeneration. OPHTHALMOLOGY SCIENCE 2024; 4:100428. [PMID: 38284101 PMCID: PMC10818248 DOI: 10.1016/j.xops.2023.100428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 10/31/2023] [Accepted: 11/08/2023] [Indexed: 01/30/2024]
Abstract
Purpose Nascent geographic atrophy (nGA) refers to specific features seen on OCT B-scans, which are strongly associated with the future development of geographic atrophy (GA). This study sought to develop a deep learning model to screen OCT B-scans for nGA that warrant further manual review (an artificial intelligence [AI]-assisted approach), and to determine the extent of reduction in the OCT B-scan load requiring manual review while maintaining near-perfect nGA detection performance. Design Development and evaluation of a deep learning model. Participants One thousand eight hundred and eighty-four OCT volume scans (49 B-scans per volume) without neovascular age-related macular degeneration from 280 eyes of 140 participants with bilateral large drusen at baseline, seen at 6-monthly intervals over a 36-month period (during which 40 eyes developed nGA). Methods OCT volume and B-scans were labeled for the presence of nGA. Their presence at the volume scan level provided the ground truth for training a deep learning model to identify OCT B-scans that potentially showed nGA requiring manual review. Using a threshold that provided a sensitivity of 0.99, the B-scans identified were assigned the ground truth label with the AI-assisted approach. The performance of this approach for detecting nGA across all visits, or at the visit of nGA onset, was evaluated using fivefold cross-validation. Main Outcome Measures Sensitivity for detecting nGA, and the proportion of OCT B-scans requiring manual review. Results The AI-assisted approach (utilizing outputs from the deep learning model to guide manual review) had a sensitivity of 0.97 (95% confidence interval [CI] = 0.93-1.00) and 0.95 (95% CI = 0.87-1.00) for detecting nGA across all visits and at the visit of nGA onset, respectively, while requiring manual review of only 2.7% and 1.9% of selected OCT B-scans, respectively. Conclusions A deep learning model could be used to enable near-perfect detection of nGA onset while reducing the number of OCT B-scans requiring manual review by over 50-fold. This AI-assisted approach shows promise for substantially reducing the current burden of manual review of OCT B-scans to detect this crucial feature that portends the future development of GA. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
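The triage idea in this abstract — pick a score threshold that keeps a target sensitivity (here 0.99) and then only manually review the B-scans above it — can be sketched generically as below. The scores are synthetic stand-ins for model outputs, not data from the study, and the 1%-review outcome is an artifact of the simulated score separation, not the paper's 2.7%/1.9% figures.

```python
import math
import random

def threshold_for_sensitivity(pos_scores, target_sens=0.99):
    """Smallest threshold that keeps at least `target_sens` of the
    positive-class scores at or above it."""
    ranked = sorted(pos_scores, reverse=True)
    keep = math.ceil(target_sens * len(ranked))  # positives that must stay flagged
    return ranked[keep - 1]

def review_fraction(all_scores, thr):
    """Fraction of all B-scan scores that would be sent for manual review."""
    return sum(s >= thr for s in all_scores) / len(all_scores)

# Synthetic scores: a small nGA-positive minority scoring high,
# a large negative majority scoring low (both distributions are made up).
random.seed(0)
pos = [random.uniform(0.6, 1.0) for _ in range(100)]
neg = [random.uniform(0.0, 0.5) for _ in range(9900)]

thr = threshold_for_sensitivity(pos, 0.99)
frac = review_fraction(pos + neg, thr)
print(f"review only {frac:.1%} of B-scans")
```

The key design point mirrored here is that the threshold is chosen on the positive class alone, so the sensitivity guarantee holds regardless of how many negatives happen to score above it.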
Affiliation(s)
- Heming Yao
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Zhichao Wu
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Simon S. Gao
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Robyn H. Guymer
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Verena Steffen
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Hao Chen
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Mohsen Hejrati
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Miao Zhang
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
23
Watanabe T, Tohyama T, Ikeda M, Fujino T, Hashimoto T, Matsushima S, Kishimoto J, Todaka K, Kinugawa S, Tsutsui H, Ide T. Development of deep-learning models for real-time anaerobic threshold and peak VO2 prediction during cardiopulmonary exercise testing. Eur J Prev Cardiol 2024; 31:448-457. [PMID: 38078901 DOI: 10.1093/eurjpc/zwad375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Revised: 09/27/2023] [Accepted: 12/03/2023] [Indexed: 01/26/2024]
Abstract
AIMS Exercise intolerance is a clinical feature of patients with heart failure (HF). Cardiopulmonary exercise testing (CPET) is the first-line examination for assessing exercise capacity in patients with HF. However, the need for extensive experience in assessing the anaerobic threshold (AT) and the potential risk associated with the excessive exercise load when measuring peak oxygen uptake (peak VO2) limit the utility of CPET. This study aimed to use deep-learning approaches to identify AT in real time during testing (defined as real-time AT) and to predict peak VO2 at real-time AT. METHODS AND RESULTS This study included the time-series data of CPET recorded at the Department of Cardiovascular Medicine, Kyushu University Hospital. Two deep neural network models were developed to (i) estimate the AT probability using breath-by-breath data and (ii) predict peak VO2 using the data at the real-time AT. The eligible CPET data comprised 1472 records from 1053 participants aged 18-90 years, 20% of which were used for model evaluation. The developed model identified real-time AT with a correlation coefficient (Corr) of 0.82 and a mean absolute error (MAE) of 1.20 mL/kg/min, and the corresponding AT time with a Corr of 0.86 and an MAE of 0.66 min. The peak VO2 prediction model achieved a Corr of 0.87 and an MAE of 2.25 mL/kg/min. CONCLUSION Deep-learning models for real-time CPET analysis can accurately identify AT and predict peak VO2. The developed models can serve as a competent assistant system for assessing a patient's condition in real time, expanding the utility of CPET.
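The two evaluation metrics this abstract reports, Corr (Pearson correlation) and MAE, are simple to compute; the sketch below does so on hypothetical measured vs. model-estimated AT values (the six numbers are invented for illustration, not study data).

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error between paired measurements."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pearson_corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical measured vs. model-estimated AT values (mL/kg/min).
at_true = [11.0, 12.5, 14.0, 15.5, 17.0, 18.5]
at_pred = [11.8, 12.1, 14.9, 15.0, 17.9, 18.2]
print(round(mae(at_true, at_pred), 2))           # 0.63
print(round(pearson_corr(at_true, at_pred), 3))
```

Reporting both matters: Corr captures whether the model tracks between-patient differences, while MAE captures the typical absolute error in the clinically meaningful unit.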
Affiliation(s)
- Tatsuya Watanabe
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Takeshi Tohyama
  - Centre for Advanced Medical Open Innovation, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka-shi, Fukuoka 812-8582, Japan
- Masataka Ikeda
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Takeo Fujino
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Toru Hashimoto
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Shouji Matsushima
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Junji Kishimoto
  - Centre for Clinical and Translational Research of Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka-shi, Fukuoka 812-8582, Japan
- Koji Todaka
  - Centre for Advanced Medical Open Innovation, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka-shi, Fukuoka 812-8582, Japan
  - Centre for Clinical and Translational Research of Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka-shi, Fukuoka 812-8582, Japan
- Shintaro Kinugawa
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Hiroyuki Tsutsui
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - School of Medicine and Graduate School, International University of Health and Welfare, 141-11 Sakami, Okawa-shi, Fukuoka 831-0016, Japan
- Tomomi Ide
  - Department of Cardiovascular Medicine, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
  - Division of Cardiovascular Medicine, Research Institute of Angiocardiology, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
24
Zang F, Ma H. CRA-Net: Transformer guided category-relation attention network for diabetic retinopathy grading. Comput Biol Med 2024; 170:107993. [PMID: 38277925 DOI: 10.1016/j.compbiomed.2024.107993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 12/30/2023] [Accepted: 01/13/2024] [Indexed: 01/28/2024]
Abstract
Automated grading of diabetic retinopathy (DR) is an important means of assisting clinical diagnosis and preventing further retinal damage. However, imbalances and similarities between categories in DR datasets make it highly challenging to accurately grade the severity of the condition. Furthermore, DR images encompass various lesions, and the pathological relationships among these lesions can easily be overlooked; for instance, the contributions of different lesions to accurate grading vary significantly across severity levels. To address these issues, we design a transformer-guided category-relation attention network (CRA-Net). Specifically, we propose a novel category attention block that enhances feature information within each class from the perspective of DR image categories, thereby alleviating the class imbalance problem. Additionally, we design a lesion relation attention block that captures relationships between lesions by incorporating attention mechanisms in two primary aspects: capsule attention models the relative importance of different lesions, allowing the model to focus on the more informative ones, while spatial attention captures the global positional relationships between lesion features under transformer guidance, facilitating more accurate localization of lesions. Experiments and ablation studies on two datasets, DDR and APTOS 2019, demonstrate the effectiveness of CRA-Net, which achieves competitive performance.
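The "relative importance" weighting that this abstract attributes to capsule attention can be illustrated with a generic softmax-weighted pooling of lesion features. This is a minimal sketch of attention pooling in general, not an implementation of CRA-Net; the feature vectors and importance logits are invented.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(lesion_feats, scores):
    """Weight lesion feature vectors by importance logits and sum them,
    so highly scored lesions dominate the pooled representation."""
    w = softmax(scores)
    dim = len(lesion_feats[0])
    return [sum(w[i] * lesion_feats[i][d] for i in range(len(w)))
            for d in range(dim)]

# Three hypothetical 2-D lesion embeddings; the logits strongly favour lesion 2.
feats = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
pooled = attention_pool(feats, scores=[0.1, 0.2, 3.0])
```

After pooling, the output lies close to the third lesion's embedding, which is exactly the "focus on more informative lesions" behaviour the block is meant to provide.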
Affiliation(s)
- Feng Zang
  - School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Hui Ma
  - School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
25
Gu C, Wang Y, Jiang Y, Xu F, Wang S, Liu R, Yuan W, Abudureyimu N, Wang Y, Lu Y, Li X, Wu T, Dong L, Chen Y, Wang B, Zhang Y, Wei WB, Qiu Q, Zheng Z, Liu D, Chen J. Application of artificial intelligence system for screening multiple fundus diseases in Chinese primary healthcare settings: a real-world, multicentre and cross-sectional study of 4795 cases. Br J Ophthalmol 2024; 108:424-431. [PMID: 36878715 PMCID: PMC10894824 DOI: 10.1136/bjo-2022-322940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 02/19/2023] [Indexed: 03/08/2023]
Abstract
BACKGROUND/AIMS This study evaluates the performance of the Airdoc retinal artificial intelligence system (ARAS) for detecting multiple fundus diseases in real-world scenarios in primary healthcare settings and investigates the fundus disease spectrum based on ARAS. METHODS This real-world, multicentre, cross-sectional study was conducted in Shanghai and Xinjiang, China. Six primary healthcare settings were included in this study. Colour fundus photographs were taken and graded by ARAS and retinal specialists. The performance of ARAS is described by its accuracy, sensitivity, specificity and positive and negative predictive values. The spectrum of fundus diseases in primary healthcare settings has also been investigated. RESULTS A total of 4795 participants were included. The median age was 57.0 (IQR 39.0-66.0) years, and 3175 (66.2%) participants were female. The accuracy, specificity and negative predictive value of ARAS for detecting normal fundus and 14 retinal abnormalities were high, whereas the sensitivity and positive predictive value varied in detecting different abnormalities. The proportion of retinal drusen, pathological myopia and glaucomatous optic neuropathy was significantly higher in Shanghai than in Xinjiang. Moreover, the percentages of referable diabetic retinopathy, retinal vein occlusion and macular oedema in middle-aged and elderly people in Xinjiang were significantly higher than in Shanghai. CONCLUSION This study demonstrated the dependability of ARAS for detecting multiple retinal diseases in primary healthcare settings. Implementing the AI-assisted fundus disease screening system in primary healthcare settings might be beneficial in reducing regional disparities in medical resources. However, the ARAS algorithm must be improved to achieve better performance. TRIAL REGISTRATION NUMBER NCT04592068.
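The abstract's observation that positive predictive value varied across abnormalities while specificity stayed high follows directly from Bayes' rule: at low disease prevalence, even a specific test yields many false positives per true positive. The sketch below makes that concrete with illustrative numbers that are not taken from the study.

```python
def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive value from sensitivity,
    specificity, and prevalence, via Bayes' rule."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical detector: 90% sensitive, 95% specific.
# The same detector gives very different PPVs at 20% vs 1% prevalence.
print(round(ppv_npv(0.90, 0.95, 0.20)[0], 3))  # 0.818
print(round(ppv_npv(0.90, 0.95, 0.01)[0], 3))  # 0.154
```

This is why screening studies in primary care settings, where most conditions are rare, tend to report high NPV alongside abnormality-dependent PPV.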
Affiliation(s)
- Chufeng Gu
  - Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yujie Wang
  - Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yan Jiang
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Feiping Xu
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Shasha Wang
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Rui Liu
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Wen Yuan
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Nurbiyimu Abudureyimu
  - Department of Ophthalmology, Bachu County Traditional Chinese Medicine Hospital of Kashgar, Xinjiang, China
- Ying Wang
  - Department of Ophthalmology, Bachu Country People's Hospital of Kashgar, Xinjiang, China
- Yulan Lu
  - Department of Ophthalmology, Linfen Community Health Service Center of Jing'an District, Shanghai, China
- Xiaolong Li
  - Department of Ophthalmology, Pengpu New Village Community Health Service Center of Jing'an District, Shanghai, China
- Tao Wu
  - Department of Ophthalmology, Pengpu Town Community Health Service Center of Jing'an District, Shanghai, China
- Li Dong
  - Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Yuzhong Chen
  - Beijing Airdoc Technology Co., Ltd, Beijing, China
- Bin Wang
  - Beijing Airdoc Technology Co., Ltd, Beijing, China
- Wen Bin Wei
  - Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Qinghua Qiu
  - Department of Ophthalmology, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhi Zheng
  - Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Deng Liu
  - Bachu Country People's Hospital of Kashgar, Xinjiang, China
  - Shanghai No. 3 Rehabilitation Hospital, Shanghai, China
- Jili Chen
  - Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
26
Wang Y, Liu C, Hu W, Luo L, Shi D, Zhang J, Yin Q, Zhang L, Han X, He M. Economic evaluation for medical artificial intelligence: accuracy vs. cost-effectiveness in a diabetic retinopathy screening case. NPJ Digit Med 2024; 7:43. [PMID: 38383738 PMCID: PMC10881978 DOI: 10.1038/s41746-024-01032-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 02/05/2024] [Indexed: 02/23/2024] Open
Abstract
Artificial intelligence (AI) models have shown great accuracy in health screening. However, for real-world implementation, high accuracy may not guarantee cost-effectiveness. Improving an AI model's sensitivity finds more high-risk patients but may raise medical costs, while increasing specificity reduces unnecessary referrals but may weaken detection capability. To evaluate the trade-off between AI model performance and long-run cost-effectiveness, we conducted a cost-effectiveness analysis in a nationwide diabetic retinopathy (DR) screening program in China, comprising 251,535 participants with diabetes over 30 years. We tested a validated AI model at 1100 different diagnostic performance levels (presented as sensitivity/specificity pairs) and modeled annual screening scenarios. The status quo was defined as the scenario with the most accurate AI performance. The incremental cost-effectiveness ratio (ICER) was calculated for the other scenarios against the status quo as the cost-effectiveness metric. Compared with the status quo (sensitivity/specificity: 93.3%/87.7%), six scenarios were cost-saving and seven were cost-effective. To be cost-saving or cost-effective, the AI model needed a minimum sensitivity of 88.2% and a minimum specificity of 80.4%. The most cost-effective AI model exhibited higher sensitivity (96.3%) and lower specificity (80.4%) than the status quo. In settings with higher DR prevalence and willingness-to-pay levels, the AI needed higher sensitivity for optimal cost-effectiveness. Urban regions and younger patient groups also required higher sensitivity in AI-based screening. In real-world DR screening, the most accurate AI model may not be the most cost-effective; cost-effectiveness should be evaluated independently, and it is most likely to be affected by the AI's sensitivity.
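The ICER comparison at the core of this analysis is mechanically simple: divide the incremental cost of a scenario by its incremental health effect relative to the reference, then compare against a willingness-to-pay (WTP) threshold. The sketch below uses entirely made-up cost and QALY totals; it only illustrates the metric, not the study's model.

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio of a scenario vs. a reference.
    Returns the string 'cost-saving' when the scenario is cheaper and
    at least as effective (a dominant scenario has no meaningful ratio)."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_cost < 0 and d_effect >= 0:
        return "cost-saving"
    return d_cost / d_effect

def classify(icer_value, wtp):
    """Label a scenario relative to a willingness-to-pay threshold."""
    if icer_value == "cost-saving":
        return "cost-saving"
    return "cost-effective" if icer_value <= wtp else "not cost-effective"

# Hypothetical totals: (cost in currency units, QALYs) for each scenario.
status_quo = (1_000_000, 500.0)
alternative = (1_040_000, 510.0)   # costs more but gains 10 QALYs

value = icer(alternative[0], alternative[1], *status_quo)  # 4000 per QALY
print(classify(value, wtp=30_000))  # cost-effective
```

A scenario that is cheaper and more effective dominates the reference outright, which is why the study reports "cost-saving" separately from "cost-effective".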
Affiliation(s)
- Yueye Wang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Chi Liu
  - Faculty of Data Science, City University of Macau, Macao SAR, China
- Wenyi Hu
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Lixia Luo
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danli Shi
  - School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Jian Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qiuxia Yin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lei Zhang
  - Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu, 210008, China
  - Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia
  - Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- Xiaotong Han
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  - School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong
  - Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Shatin, Hong Kong
27
Abou Taha A, Dinesen S, Vergmann AS, Grauslund J. Present and future screening programs for diabetic retinopathy: a narrative review. Int J Retina Vitreous 2024; 10:14. [PMID: 38310265 PMCID: PMC10838429 DOI: 10.1186/s40942-024-00534-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Accepted: 01/19/2024] [Indexed: 02/05/2024] Open
Abstract
Diabetes is a prevalent global concern, with an estimated 12% of the global adult population expected to be affected by 2045. Diabetic retinopathy (DR), a sight-threatening complication, has spurred diverse screening approaches worldwide due to advances in DR knowledge, rapid technological developments in retinal imaging, and variations in healthcare resources. Many high-income countries have fully implemented, or are on the verge of completing, a national Diabetic Eye Screening Programme (DESP). Although there have been some improvements in DR screening in African, Asian, and American countries, further progress is needed. Among low-income countries, only one out of 29 has partially implemented a DESP, while 21 out of 50 lower-middle-income countries have started the DR policy cycle. Among upper-middle-income countries, a third of 59 nations have advanced in DR agenda-setting, with five having a comprehensive national DESP and 11 in the early stages of implementation. Many nations use 2-4-field fundus images, proven effective with 80-98% sensitivity and 86-100% specificity compared with the traditional seven-field evaluation for DR. Cell-phone-based screening with a handheld retinal camera presents a potential low-cost alternative imaging device. While this method may not entirely match the sensitivity and specificity of seven-field stereoscopic photography in low-resource settings, positive outcomes have been observed. Individualized DR screening intervals are the standard in many high-resource nations. In countries that lack a national DESP and resources, screening is more sporadic: screening intervals are not evidence-based and are often less frequent, which can lead to late recognition of DR requiring treatment. The rising global prevalence of DR poses an economic challenge to nationwide screening programs. AI algorithms have shown high sensitivity and specificity for the detection of DR and could provide a promising solution for the future screening burden. In summary, this narrative review highlights the epidemiology of DR and the necessity for effective DR screening programs. Worldwide evolution of existing approaches for DR screening has shown promising results but has also revealed limitations. Technological advancements, such as handheld imaging devices, teleophthalmology, and artificial intelligence, enhance the cost-effectiveness and accessibility of DR screening in countries with low resources or where long distances to, or a shortage of, ophthalmologists exist.
Affiliation(s)
- Andreas Abou Taha
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Sebastian Dinesen
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Anna Stage Vergmann
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark

28
Chia MA, Hersch F, Sayres R, Bavishi P, Tiwari R, Keane PA, Turner AW. Validation of a deep learning system for the detection of diabetic retinopathy in Indigenous Australians. Br J Ophthalmol 2024; 108:268-273. [PMID: 36746615 PMCID: PMC10850716 DOI: 10.1136/bjo-2022-322237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Accepted: 12/31/2022] [Indexed: 02/08/2023]
Abstract
BACKGROUND/AIMS Deep learning systems (DLSs) for diabetic retinopathy (DR) detection show promising results but can underperform in racial and ethnic minority groups; external validation within these populations is therefore critical for health equity. This study evaluates the performance of a DLS for DR detection among Indigenous Australians, an understudied ethnic group who suffer disproportionately from DR-related blindness. METHODS We performed a retrospective external validation study comparing the performance of a DLS against a retina specialist for the detection of more-than-mild DR (mtmDR), vision-threatening DR (vtDR) and all-cause referable DR. The validation set consisted of 1682 consecutive, single-field, macula-centred retinal photographs from 864 patients with diabetes (mean age 54.9 years, 52.4% women) at an Indigenous primary care service in Perth, Australia. Three-person adjudication by a panel of specialists served as the reference standard. RESULTS For mtmDR detection, sensitivity of the DLS was superior to the retina specialist (98.0% (95% CI, 96.5 to 99.4) vs 87.1% (95% CI, 83.6 to 90.6), McNemar's test p<0.001) with a small reduction in specificity (95.1% (95% CI, 93.6 to 96.4) vs 97.0% (95% CI, 95.9 to 98.0), p=0.006). For vtDR, the DLS's sensitivity was again superior to the human grader (96.2% (95% CI, 93.4 to 98.6) vs 84.4% (95% CI, 79.7 to 89.2), p<0.001) with a slight drop in specificity (95.8% (95% CI, 94.6 to 96.9) vs 97.8% (95% CI, 96.9 to 98.6), p=0.002). For all-cause referable DR, there was a substantial increase in sensitivity (93.7% (95% CI, 91.8 to 95.5) vs 74.4% (95% CI, 71.1 to 77.5), p<0.001) and a smaller reduction in specificity (91.7% (95% CI, 90.0 to 93.3) vs 96.3% (95% CI, 95.2 to 97.4), p<0.001). CONCLUSION The DLS showed improved sensitivity and similar specificity compared with a retina specialist for DR detection.
This demonstrates its potential to support DR screening among Indigenous Australians, an underserved population with a high burden of diabetic eye disease.
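The paired comparison above (DLS versus a single grader on the same eyes, McNemar's test on the discordant pairs, Wilson-style confidence intervals for each sensitivity) can be sketched in a few lines. The counts below are hypothetical stand-ins for illustration, not the study's raw data:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion such as sensitivity."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact McNemar test on the discordant pairs:
    b = eyes the DLS flagged but the grader missed,
    c = eyes the grader flagged but the DLS missed."""
    n = b + c
    tail = sum(math.comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts (the study's raw 2x2 tables are not given in the abstract):
# 300 disease-positive eyes; DLS correct on 290, grader on 259, both on 255.
dls_tp, grader_tp, both_tp, n_pos = 290, 259, 255, 300
b, c = dls_tp - both_tp, grader_tp - both_tp        # discordant pairs: 35 and 4
lo, hi = wilson_ci(dls_tp, n_pos)
print(f"DLS sensitivity {dls_tp / n_pos:.3f} (95% CI {lo:.3f}-{hi:.3f})")
print(f"McNemar exact p = {mcnemar_exact_p(b, c):.3g}")
```

Only the discordant pairs enter the test, which is why the DLS can be significantly more sensitive even when both methods agree on most eyes.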
Affiliation(s)
- Mark A Chia
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, Western Australia, Australia

29
Wang X, Fang J, Yang L. Research progress on ocular complications caused by type 2 diabetes mellitus and the function of tears and blepharons. Open Life Sci 2024; 19:20220773. [PMID: 38299009 PMCID: PMC10828665 DOI: 10.1515/biol-2022-0773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 09/20/2023] [Accepted: 10/19/2023] [Indexed: 02/02/2024] Open
Abstract
The purpose of this study was to review research progress on ocular complications (OCs) caused by type 2 diabetes mellitus (T2DM), tear and tarsal function, and the application of deep learning (DL) in the diagnosis of diabetes and its OCs, to provide a reference for the prevention and control of OCs in T2DM patients. This study reviewed the pathogenesis and treatment of diabetic retinopathy, keratopathy, dry eye disease, glaucoma, and cataract; analyzed the relationship between OCs and tear and tarsal function; and discussed the application value of DL in the diagnosis of diabetes and OCs. Diabetic retinopathy is related to hyperglycemia, angiogenic factors, oxidative stress, hypertension, hyperlipidemia, and other factors. Increased water content in the corneal stroma leads to corneal relaxation and loss of transparency and elasticity, and can lead to corneal lesions. Dry eye syndrome is related to abnormal tear film stability and an imbalance in neural and immune regulation. Elevated intraocular pressure, inflammatory reactions, atrophy of the optic nerve head, and damage to optic nerve fibers are the causes of glaucoma. Cataract, a common eye disease in the elderly, is a visual disorder caused by lens opacity; oxidative stress is an important factor in its occurrence. In clinical practice, blood sugar control, laser therapy, and drug therapy are used to manage these ocular complications. Tear and tarsal plate function can be affected by eye diseases: diabetic retinopathy and dry eye disease cause tear and tarsal dysfunction, which in turn affects patients' ocular function. Furthermore, DL can automatically diagnose and classify eye diseases, automatically analyze fundus images, and accurately diagnose diabetic retinopathy, macular degeneration, and other diseases by analyzing and processing ocular images and data.
T2DM is difficult to treat and prone to OCs, which seriously threaten patients' normal life. The occurrence of OCs is closely related to abnormal tear and tarsal function. DL-based clinical diagnosis and treatment of diabetes and its OCs is feasible and has positive application value.
Affiliation(s)
- Xiaohong Wang
- Department of Operating Room, Xinchang County People's Hospital, Xinchang, 312500, Shaoxing City, Zhejiang, China
- Jian Fang
- Department of Ophthalmology, Xinchang County People's Hospital, Xinchang, 312500, Shaoxing City, Zhejiang, China
- Lina Yang
- Department of Ophthalmology, Xinchang County People's Hospital, Xinchang, 312500, Shaoxing City, Zhejiang, China

30
Avanesova TA, Oganezova JG, Anisimova VV, Baeva AB, Miaev DK. [Prevalence of diabetic retinopathy assessed using two-field mydriatic fundus photography]. Vestn Oftalmol 2024; 140:60-67. [PMID: 39254391 DOI: 10.17116/oftalma202414004160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/11/2024]
Abstract
Early detection of diabetic retinopathy (DR) is an urgent ophthalmological problem in Russia and globally. PURPOSE This study assesses the prevalence of asymptomatic retinopathy and attempts to identify risk groups for its development in patients with type 1 and type 2 diabetes mellitus (T1DM and T2DM). MATERIAL AND METHODS The study involved clinics from 5 cities in the Russian Federation and included 367 patients with DM (34.88% men, 65.12% women), aged 50.88±20.55 years. 34.88% of patients had T1DM and 65.12% had T2DM; the average duration of the disease was 9.02±7.22 years. 58.31% of patients had a history of arterial hypertension and 13.08% had a history of smoking. The primary endpoint was the frequency of detection of diabetic changes in the eye fundus of patients with T1DM and T2DM overall; the secondary endpoints were the same outcome assessed separately by diabetes type, and for T2DM patients depending on disease duration. The exploratory endpoint was the assessment of the influence of various factors on the development of DR. The patients underwent visometry (modified ETDRS table), biomicroscopy, and mydriatic fundus photography according to the «2 fields» protocol. RESULTS The average detection rate of DR was 12.26%, primarily observed in patients with T2DM (13.81%), in women (9.26%), and in both eyes (8.17%). Among patients with DR, 26 (19.55%) had a glycated hemoglobin (HbA1c) level exceeding 7.5% (p=0.002), indicating a direct relationship between this indicator and the incidence of DR. Logistic regression analysis showed that a diabetes duration of more than 10 years has a statistically significant effect on the development of DR. In the modified model for odds estimation, the likelihood of developing DR is increased by a DM duration of more than 10 years, increased blood pressure, and an HbA1c level >7.5%.
CONCLUSION The obtained results, some of which will be presented in subsequent publications, highlight the effectiveness of using two-field mydriatic fundus photography as a screening for DR.
Affiliation(s)
- T A Avanesova
- OOO Liga+, Reutov, Russia
- OOO TMG Podmoskovye, Sergiev Posad, Russia
- J G Oganezova
- Pirogov Russian National Research Medical University, Moscow, Russia
- V V Anisimova
- Central Clinical Hospital of the Administrative Directorate of the President of the Russian Federation, Moscow, Russia
- A B Baeva
- Pirogov Russian National Research Medical University, Moscow, Russia
- Hospital for War Veterans No. 2, Moscow, Russia

31
Hu W, Joseph S, Li R, Woods E, Sun J, Shen M, Jan CL, Zhu Z, He M, Zhang L. Population impact and cost-effectiveness of artificial intelligence-based diabetic retinopathy screening in people living with diabetes in Australia: a cost effectiveness analysis. EClinicalMedicine 2024; 67:102387. [PMID: 38314061 PMCID: PMC10837545 DOI: 10.1016/j.eclinm.2023.102387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 11/29/2023] [Accepted: 12/05/2023] [Indexed: 02/06/2024] Open
Abstract
Background We aimed to evaluate the cost-effectiveness of an artificial intelligence (AI)-based diabetic retinopathy (DR) screening system in the primary care setting for both non-Indigenous and Indigenous people living with diabetes in Australia. Methods We performed a cost-effectiveness analysis between January 01, 2022 and August 01, 2023. A decision-analytic Markov model was constructed to simulate DR progression over 40 years in a population of 1,197,818 non-Indigenous and 65,160 Indigenous Australians living with diabetes aged ≥20 years. From a healthcare provider's perspective, we compared current practice to three primary care AI-based screening scenarios: (A) substitution of current manual grading, (B) scaling up to the patient acceptance level, and (C) achieving universal screening. Study results were presented as the incremental cost-effectiveness ratio (ICER), benefit-cost ratio (BCR), and net monetary benefit (NMB). A willingness-to-pay (WTP) threshold of AU$50,000 per quality-adjusted life year (QALY) and a discount rate of 3.5% were adopted in this study. Findings Under the status quo, the non-Indigenous diabetic population was projected to develop 96,269 blindness cases, resulting in AU$13,039.6 m in spending on DR screening and treatment during 2020-2060. In comparison, all three intervention scenarios were effective and cost-saving. In particular, if a universal screening program were implemented (Scenario C), it would prevent 38,347 blindness cases, gain 172,090 QALYs and save AU$595.8 m, leading to a BCR of 3.96 and an NMB of AU$9,200 m. Similar findings were reported in the Indigenous population. Under the status quo, 3,396 Indigenous individuals would develop blindness, which would cost the health system AU$796.0 m during 2020-2060. All three intervention scenarios were cost-saving for the Indigenous population.
Notably, universal AI-based DR screening (Scenario C) would prevent 1,211 blindness cases and gain 9,800 QALYs in the Indigenous population, leading to a saving of AU$19.2 m with a BCR of 1.62 and NMB of AU$509 m. Interpretation Our findings suggest that implementing AI-based DR screening in primary care is highly effective and cost-saving in both Indigenous and non-Indigenous populations. Funding This project received grant funding from the Australian Government: the National Critical Research Infrastructure Initiative, Medical Research Future Fund (MRFAI00035) and the NHMRC Investigator Grant (APP1175405). The contents of the published material are solely the responsibility of the Administering Institution, a participating institution or individual authors and do not reflect the views of the NHMRC. This work was supported by the Global STEM Professorship Scheme (P0046113), the Fundamental Research Funds of the State Key Laboratory of Ophthalmology, Project of Investigation on Health Status of Employees in Financial Industry in Guangzhou, China (Z012014075). The Centre for Eye Research Australia receives Operational Infrastructure Support from the Victorian State Government. W.H. is supported by the Melbourne Research Scholarship established by the University of Melbourne. The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
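The Scenario C net monetary benefit follows directly from the abstract's own figures: QALYs gained monetised at the WTP threshold, plus the cost saving. A minimal arithmetic check (the programme cost backed out from the stated BCR is an inference for illustration, not a reported figure):

```python
# Figures quoted in the abstract for Scenario C (universal AI screening, non-Indigenous):
WTP = 50_000                 # willingness-to-pay per QALY, AU$
qalys_gained = 172_090       # QALYs gained vs status quo
cost_saving = 595.8e6        # AU$ saved vs status quo

# Net monetary benefit = health gain monetised at the WTP threshold + cost saving
# (the intervention is cost-saving, so the cost term enters with a positive sign).
nmb = qalys_gained * WTP + cost_saving
print(f"NMB = AU${nmb / 1e6:,.0f} m")        # AU$9,200 m, matching the abstract

# The programme cost itself is not reported. Under the conventions
# BCR = benefit / cost and NMB = benefit - cost, the stated BCR of 3.96
# implies cost = NMB / (BCR - 1) -- an illustrative inference only.
implied_cost = nmb / (3.96 - 1)
print(f"Implied programme cost ≈ AU${implied_cost / 1e6:,.0f} m")
```

Reproducing the reported AU$9,200 m from 172,090 QALYs and AU$595.8 m of savings confirms how the abstract's NMB was assembled.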
Affiliation(s)
- Wenyi Hu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Sanil Joseph
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Rui Li
- Central Clinical School, Faculty of Medicine, Monash University, Melbourne, VIC, Australia
- Artificial Intelligence and Modelling in Epidemiology Program, Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia
- China-Australia Joint Research Center for Infectious Diseases, School of Public Health, Xi’an Jiaotong University Health Science Center, Xi’an, Shaanxi, 710061, PR China
- Ekaterina Woods
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Jason Sun
- Eyetelligence Pty Ltd., Melbourne, Australia
- Mingwang Shen
- China-Australia Joint Research Center for Infectious Diseases, School of Public Health, Xi’an Jiaotong University Health Science Center, Xi’an, Shaanxi, 710061, PR China
- Catherine Lingxue Jan
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Zhuoting Zhu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Mingguang He
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Lei Zhang
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu Province 210008, China
- Central Clinical School, Faculty of Medicine, Monash University, Melbourne, VIC, Australia
- Artificial Intelligence and Modelling in Epidemiology Program, Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia

32
Kemp O, Bascaran C, Cartwright E, McQuillan L, Matthew N, Shillingford-Ricketts H, Zondervan M, Foster A, Burton M. Real-world evaluation of smartphone-based artificial intelligence to screen for diabetic retinopathy in Dominica: a clinical validation study. BMJ Open Ophthalmol 2023; 8:e001491. [PMID: 38135351 DOI: 10.1136/bmjophth-2023-001491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Accepted: 12/10/2023] [Indexed: 12/24/2023] Open
Abstract
OBJECTIVE Several artificial intelligence (AI) systems for diabetic retinopathy screening have been validated, but there is limited evidence on their performance in real-world settings. This study aimed to assess the performance of an AI software deployed within the diabetic retinopathy screening programme in Dominica. METHODS AND ANALYSIS We conducted a prospective, cross-sectional clinical validation study. Patients with diabetes aged 18 years and above attending diabetic retinopathy screening in primary care facilities in Dominica from 5 June to 3 July 2021 were enrolled. Grading was done at the point of care by the field grader, followed by counselling and referral to the eye clinic. Images were then graded by an AI system. Sensitivity and specificity with 95% CIs and the area under the curve (AUC) were calculated, comparing the AI against the field grader as the gold standard. RESULTS A total of 587 participants were screened. Across all participants, including those with ungradable images, the AI had a sensitivity and specificity for detecting referable diabetic retinopathy of 77.5% and 91.5% compared with the grader. The AUC was 0.8455. Excluding 52 participants deemed ungradable by the grader, the AI had a sensitivity and specificity of 81.4% and 91.5%, with an AUC of 0.9648. CONCLUSION This study provides evidence that AI has the potential to be deployed to assist a diabetic screening programme in a middle-income real-world setting and to perform with reasonable accuracy compared with a specialist grader.
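As a sketch of how such operating metrics are derived, the snippet below computes sensitivity and specificity from a 2x2 table against the field-grader reference, and AUC as a Mann-Whitney statistic over AI scores. The counts and scores are hypothetical, chosen only so the point estimates land near the reported 81.4%/91.5%:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Point estimates from a 2x2 confusion matrix (reference = field grader)."""
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(pos_scores, neg_scores) -> float:
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen referable eye gets a higher AI score than a non-referable one
    (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical counts for illustration; the study's raw table is not in the abstract.
sens, spec = sensitivity_specificity(tp=70, fn=16, tn=410, fp=38)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")   # 81.4%, 91.5%

auc = roc_auc([0.92, 0.81, 0.77, 0.40], [0.35, 0.30, 0.55, 0.10])
print(f"AUC = {auc:.4f}")
```

Sensitivity and specificity depend on the chosen referral threshold, whereas the AUC summarises discrimination across all thresholds, which is why the study reports both.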
Affiliation(s)
- Oliver Kemp
- London School of Hygiene and Tropical Medicine, London, UK
- Nanda Matthew
- Dominica China Friendship Hospital, Roseau, Dominica
- Allen Foster
- London School of Hygiene and Tropical Medicine, London, UK
- Matthew Burton
- London School of Hygiene and Tropical Medicine, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK

33
Liu L, Li M, Lin D, Yun D, Lin Z, Zhao L, Pang J, Li L, Wu Y, Shang Y, Lin H, Wu X. Protocol to analyze fundus images for multidimensional quality grading and real-time guidance using deep learning techniques. STAR Protoc 2023; 4:102565. [PMID: 37733597 PMCID: PMC10519839 DOI: 10.1016/j.xpro.2023.102565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Revised: 08/09/2023] [Accepted: 08/18/2023] [Indexed: 09/23/2023] Open
Abstract
Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.1.
Affiliation(s)
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China

34
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. [PMID: 38053188 PMCID: PMC10699065 DOI: 10.1186/s40942-023-00502-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2023] [Accepted: 10/02/2023] [Indexed: 12/07/2023] Open
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power and associated computer-aided diagnosis, and medical imaging developments have created an environment where telemedicine is more accessible and capable than ever before, particularly in the field of ophthalmology. Ever-increasing global demand for ophthalmic services due to population growth and ageing together with insufficient supply of ophthalmologists requires new models of healthcare provision integrating telemedicine to meet present day challenges, with the recent COVID-19 pandemic providing the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present and future application of telemedicine within the field of ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology, its role in screening, community and hospital management of retinal disease, patient and clinician attitudes, and barriers to its adoption.
Affiliation(s)
- Jonathan Than
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Peng Y Sim
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Danson Muttuvelu
- Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
- Daniel Ferraz
- D'Or Institute for Research and Education (IDOR), São Paulo, Brazil
- Institute of Ophthalmology, University College London, London, UK
- Victor Koh
- Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Swan Kang
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Josef Huemer
- Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria

35
Shi D, Zhang W, He S, Chen Y, Song F, Liu S, Wang R, Zheng Y, He M. Translation of Color Fundus Photography into Fluorescein Angiography Using Deep Learning for Enhanced Diabetic Retinopathy Screening. OPHTHALMOLOGY SCIENCE 2023; 3:100401. [PMID: 38025160 PMCID: PMC10630672 DOI: 10.1016/j.xops.2023.100401] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 08/23/2023] [Accepted: 09/08/2023] [Indexed: 12/01/2023]
Abstract
Purpose To develop and validate a deep learning model that can transform color fundus (CF) photography into corresponding venous and late-phase fundus fluorescein angiography (FFA) images. Design Cross-sectional study. Participants We included 51 370 CF-venous FFA pairs and 14 644 CF-late FFA pairs from 4438 patients for model development. External testing involved 50 eyes with CF-FFA pairs and 2 public datasets for diabetic retinopathy (DR) classification, with 86 952 CF from EyePACs, and 1744 CF from MESSIDOR2. Methods We trained a deep-learning model to transform CF into corresponding venous and late-phase FFA images. The translated FFA images' quality was evaluated quantitatively on the internal test set and subjectively on 100 eyes with CF-FFA paired images (50 from the external set), based on the realism of the global image, anatomical landmarks (macula, optic disc, and vessels), and lesions. Moreover, we validated the clinical utility of the translated FFA for classifying 5-class DR and diabetic macular edema (DME) in the EyePACs and MESSIDOR2 datasets. Main Outcome Measures Image generation was quantitatively assessed by structural similarity measures (SSIM), and subjectively by 2 clinical experts on a 5-point scale (where 1 corresponds to a real FFA); intragrader agreement was assessed by kappa. The DR classification accuracy was assessed by area under the receiver operating characteristic curve. Results The SSIM of the translated FFA images was > 0.6, and the subjective quality scores ranged from 1.37 to 2.60. Both experts reported similar quality scores with substantial agreement (all kappas > 0.8). Adding the generated FFA on top of CF improved DR classification in the EyePACs and MESSIDOR2 datasets, with the area under the receiver operating characteristic curve increased from 0.912 to 0.939 on the EyePACs dataset and from 0.952 to 0.972 on the MESSIDOR2 dataset.
The DME area under the receiver operating characteristic curve also increased from 0.927 to 0.974 in the MESSIDOR2 dataset. Conclusions Our CF-to-FFA framework produced realistic FFA images. Moreover, adding the translated FFA images on top of CF improved the accuracy of DR screening. These results suggest that CF-to-FFA translation could be used as a surrogate method when FFA examination is not feasible and as a simple add-on to improve DR screening. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
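SSIM, the quantitative measure used above, can be illustrated with a simplified single-window implementation; production pipelines (e.g. skimage.metrics.structural_similarity) average SSIM over sliding local windows instead. The images below are synthetic stand-ins, not fundus data:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """SSIM computed over the whole image as one window -- a simplification of
    the windowed SSIM used in practice. Combines luminance, contrast, and
    structure terms with the standard stabilising constants c1, c2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
real = rng.random((64, 64))                                        # stand-in "real FFA"
translated = np.clip(real + rng.normal(0, 0.1, real.shape), 0, 1)  # noisy "translation"
print(f"SSIM(real, real)       = {global_ssim(real, real):.3f}")   # 1.000 by construction
print(f"SSIM(real, translated) = {global_ssim(real, translated):.3f}")
```

An SSIM of 1 means the two images are identical; the study's threshold of > 0.6 indicates substantial but imperfect structural agreement between generated and real FFA.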
Affiliation(s)
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Weiyi Zhang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Shuang He
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Guangdong Provincial Clinical Research Center for Ocular Diseases, Sun Yat-sen University, Guangzhou, China
| | - Yanxian Chen
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Fan Song
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| | - Shunming Liu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Ruobing Wang
- Department of Ophthalmology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Guangdong Provincial Clinical Research Center for Ocular Diseases, Sun Yat-sen University, Guangzhou, China
| | - Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
36
Bora A, Tiwari R, Bavishi P, Virmani S, Huang R, Traynis I, Corrado GS, Peng L, Webster DR, Varadarajan AV, Pattanapongpaiboon W, Chopra R, Ruamviboonsuk P. Risk Stratification for Diabetic Retinopathy Screening Order Using Deep Learning: A Multicenter Prospective Study. Transl Vis Sci Technol 2023; 12:11. [PMID: 38079169 PMCID: PMC10715315 DOI: 10.1167/tvst.12.12.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 10/23/2023] [Indexed: 12/18/2023] Open
Abstract
Purpose Real-world evaluation of a deep learning model that prioritizes patients based on risk of progression to moderate or worse (MOD+) diabetic retinopathy (DR). Methods This nonrandomized, single-arm, prospective, interventional study included patients attending DR screening at four centers across Thailand from September 2019 to January 2020, with mild or no DR. Fundus photographs were input into the model, and patients were scheduled for their subsequent screening from September 2020 to January 2021 in order of predicted risk. Evaluation focused on model sensitivity, defined as correctly ranking patients who developed MOD+ within the first 50% of subsequent screens. Results We analyzed 1,757 patients, of whom 52 (3.0%) developed MOD+. Using the model-proposed order, the model's sensitivity was 90.4%. Both the model-proposed order and mild/no DR plus HbA1c had significantly higher sensitivity than the random order (P < 0.001). Excluding one major (rural) site that had practical implementation challenges, the remaining sites included 567 patients, of whom 15 (2.6%) developed MOD+. Here, the model-proposed order achieved a sensitivity of 86.7%, versus 73.3% for the ranking that used DR grade and hemoglobin A1c. Conclusions The model can help prioritize follow-up visits for the largest subgroups of DR patients (those with no or mild DR). Further research is needed to evaluate the impact on clinical management and outcomes. Translational Relevance Deep learning demonstrated potential for risk stratification in DR screening. However, real-world practicalities must be resolved to fully realize the benefit.
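The headline metric in this study, sensitivity of the proposed screening order, reduces to a simple computation: sort patients by predicted risk, schedule the highest-risk half first, and count the share of eventual MOD+ progressors captured in that first half. A minimal sketch, with function and variable names that are illustrative rather than taken from the study:

```python
import numpy as np

def top_half_sensitivity(risk_scores, progressed):
    """Fraction of patients who progressed (MOD+) that fall in the first
    half of the schedule when patients are seen in descending risk order."""
    order = np.argsort(-np.asarray(risk_scores))   # highest predicted risk first
    half = len(order) // 2
    top_half = set(order[:half].tolist())
    progressors = [i for i, p in enumerate(progressed) if p]
    if not progressors:
        return float("nan")
    hits = sum(1 for i in progressors if i in top_half)
    return hits / len(progressors)

# Toy example: 10 patients, the 2 progressors received high predicted risk.
scores = [0.9, 0.1, 0.8, 0.2, 0.3, 0.05, 0.7, 0.15, 0.4, 0.25]
prog   = [True, False, True, False, False, False, False, False, False, False]
```

With this toy data both progressors land in the first half, so the metric is 1.0; a random order would capture roughly half of them on average, which is the baseline the study compares against.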
Affiliation(s)
- Ilana Traynis
- Work done at Google via Advanced Clinical, Deerfield, IL, USA
- Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
37
Wang G, Meng X, Zhang F. Past, present, and future of global research on artificial intelligence applications in dermatology: A bibliometric analysis. Medicine (Baltimore) 2023; 102:e35993. [PMID: 37960748 PMCID: PMC10637496 DOI: 10.1097/md.0000000000035993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Accepted: 10/17/2023] [Indexed: 11/15/2023] Open
Abstract
In recent decades, artificial intelligence (AI) has played an increasingly important role in medicine, including dermatology. Worldwide, numerous studies have reported on AI applications in dermatology, rapidly increasing interest in this field. However, no bibliometric studies have been conducted to evaluate the past, present, or future of this topic. This study aimed to illustrate past and present research and outline future directions for global research on AI applications in dermatology using bibliometric analysis. We conducted an online search of the Web of Science Core Collection database to identify scientific papers on AI applications in dermatology. The bibliometric metadata of each selected paper were extracted, analyzed, and visualized using VOSviewer and CiteSpace. A total of 406 papers, comprising 8 randomized controlled trials and 20 prospective studies, were deemed eligible for inclusion. The United States had the highest number of papers (n = 166). The University of California System (n = 24) and Allan C. Halpern (n = 11) were the institution and author with the highest number of papers, respectively. Based on keyword co-occurrence analysis, the studies were categorized into 9 distinct clusters, with clusters 2, 3, and 7 containing keywords with the latest average publication year. Wound progression prediction using machine learning, the integration of AI into teledermatology, and applications of the algorithms in skin diseases are the current research priorities and will remain future research aims in this field.
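Keyword co-occurrence analysis of the kind VOSviewer and CiteSpace perform starts from pairwise counts of keywords appearing on the same paper, which the tools then cluster and map. A minimal, tool-independent sketch with invented keywords:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(papers_keywords):
    """Count how often each keyword pair appears together on a paper.
    Pairs are stored in sorted order so (a, b) and (b, a) collapse."""
    pair_counts = Counter()
    for kws in papers_keywords:
        for a, b in combinations(sorted(set(kws)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Invented example corpus; real input would be the keyword lists of 406 papers.
papers = [
    ["deep learning", "teledermatology", "melanoma"],
    ["deep learning", "melanoma"],
    ["machine learning", "wound healing"],
]
counts = cooccurrence(papers)
```

The resulting pair counts form the weighted co-occurrence network from which clusters (such as the 9 reported here) are derived.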
Affiliation(s)
- Guangxin Wang
- Shandong Innovation Center of Intelligent Diagnosis, Jinan Central Hospital, Shandong University, Jinan, Shandong, China
- Xianguang Meng
- Department of Dermatology, Jinan Central Hospital, Shandong University, Jinan, Shandong, China
- Fan Zhang
- Shandong Innovation Center of Intelligent Diagnosis, Jinan Central Hospital, Shandong University, Jinan, Shandong, China
38
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107 PMCID: PMC10587822 DOI: 10.1001/jamaophthalmol.2023.4650] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Accepted: 08/27/2023] [Indexed: 10/20/2023]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). 
Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
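The per-lesion AUCs summarized above (as mean with SD) can be computed directly from binary lesion labels and continuous model scores via the rank-sum identity. The sketch below is generic, not the study's code, and the listed lesion AUC values are invented for illustration:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case outranks a randomly chosen negative one."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# One AUC per lesion type, then summarized as mean (SD), as in the abstract.
lesion_aucs = [0.95, 0.90, 0.92, 0.88, 0.94]        # illustrative values only
mean_auc = float(np.mean(lesion_aucs))
sd_auc = float(np.std(lesion_aucs, ddof=1))
```

In the study's setting, each of the 5 lesion types contributes one label/score pair per eye, giving 5 such AUCs per dataset.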
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Pengzhi Zhu
- Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
- Yuzhe Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Hongxin Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Changming Hu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
39
Abbas Q, Albathan M, Altameem A, Almakki RS, Hussain A. Deep-Ocular: Improved Transfer Learning Architecture Using Self-Attention and Dense Layers for Recognition of Ocular Diseases. Diagnostics (Basel) 2023; 13:3165. [PMID: 37891986 PMCID: PMC10605427 DOI: 10.3390/diagnostics13203165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 10/07/2023] [Accepted: 10/09/2023] [Indexed: 10/29/2023] Open
Abstract
It is difficult for clinicians or less-experienced ophthalmologists to detect early eye-related diseases. Manual eye disease diagnosis is labor-intensive, prone to mistakes, and challenging because of the variety of ocular conditions such as glaucoma (GA), diabetic retinopathy (DR), and cataract (CT), alongside normal (NL) eyes. An automated ocular disease detection system with computer-aided diagnosis (CAD) tools is required to recognize eye-related diseases. Nowadays, deep learning (DL) algorithms enhance the classification results of retinograph images. To address these issues, we developed an intelligent detection system based on retinal fundus images. To create this system, we used the ODIR and RFMiD datasets, which include retinographs of distinct fundus classes, together with cutting-edge image classification algorithms like ensemble-based transfer learning. In this paper, we suggest a three-step hybrid ensemble model that combines a feature extractor, a feature selector, and a classifier. The original image features are first extracted using a pre-trained AlexNet model with an enhanced structure. The improved AlexNet (iAlexNet) architecture with attention and dense layers offers enhanced feature extraction, task adaptability, interpretability, and potential accuracy benefits compared to other transfer learning architectures, making it particularly suited to tasks like retinograph classification. The extracted features are then filtered using the ReliefF method, and the most crucial elements are chosen to minimize the feature dimension. Finally, an XgBoost classifier produces classification outcomes based on the selected features. These classifications represent different ocular illnesses. We utilized data augmentation techniques to control class imbalance issues. The deep-ocular model, based mainly on the AlexNet-ReliefF-XgBoost pipeline, achieves an accuracy of 95.13%. The results indicate that the proposed ensemble model can assist ophthalmologists in making early decisions for the diagnosis and screening of eye-related diseases.
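The three-step pipeline described above (deep features, then ReliefF selection, then a boosted classifier) can be sketched with a basic binary-class Relief scorer. The paper's actual components are an improved AlexNet for feature extraction and XGBoost for classification; here a tiny hand-made feature matrix stands in for the CNN output, the Relief variant is simplified, and all names are illustrative:

```python
import numpy as np

def relief_weights(X, y, n_iters=100, seed=0):
    """Basic binary-class Relief: a feature earns weight when it separates a
    sample from its nearest miss (other class) and loses weight when it
    differs from its nearest hit (same class)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span            # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        i = rng.integers(len(Xs))
        d = np.abs(Xs - Xs[i]).sum(axis=1)     # L1 distance to every sample
        d[i] = np.inf                          # never match the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(d[same])]
        miss = np.where(diff)[0][np.argmin(d[diff])]
        w += np.abs(Xs[i] - Xs[miss]) - np.abs(Xs[i] - Xs[hit])
    return w / n_iters

def select_top_k(X, weights, k):
    """Keep the k highest-weighted feature columns."""
    idx = np.argsort(-weights)[:k]
    return np.asarray(X)[:, idx], idx

# Toy "deep features": column 0 separates the classes, column 1 is noise.
X = np.array([[0.00, 0.3], [0.10, 0.9], [0.05, 0.5],
              [1.00, 0.4], [0.90, 0.8], [0.95, 0.1]])
y = np.array([0, 0, 0, 1, 1, 1])
w = relief_weights(X, y, n_iters=60)
X_sel, kept = select_top_k(X, w, k=1)
```

In the paper's setting, X would be the iAlexNet feature matrix, and an XGBoost model (or any gradient-boosted tree implementation) would then be fit on the columns retained by select_top_k.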
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mubarak Albathan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Abdullah Altameem
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Riyad Saleh Almakki
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
40
Rajesh AE, Davidson OQ, Lee CS, Lee AY. Artificial Intelligence and Diabetic Retinopathy: AI Framework, Prospective Studies, Head-to-head Validation, and Cost-effectiveness. Diabetes Care 2023; 46:1728-1739. [PMID: 37729502 PMCID: PMC10516248 DOI: 10.2337/dci23-0032] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 07/15/2023] [Indexed: 09/22/2023]
Abstract
Current guidelines recommend that individuals with diabetes receive yearly eye exams for detection of referable diabetic retinopathy (DR), one of the leading causes of new-onset blindness. For addressing the immense screening burden, artificial intelligence (AI) algorithms have been developed to autonomously screen for DR from fundus photography without human input. Over the last 10 years, many AI algorithms have achieved good sensitivity and specificity (>85%) for detection of referable DR compared with human graders; however, many questions still remain. In this narrative review on AI in DR screening, we discuss key concepts in AI algorithm development as a background for understanding the algorithms. We present the AI algorithms that have been prospectively validated against human graders and demonstrate the variability of reference standards and cohort demographics. We review the limited head-to-head validation studies where investigators attempt to directly compare the available algorithms. Next, we discuss the literature regarding cost-effectiveness, equity and bias, and medicolegal considerations, all of which play a role in the implementation of these AI algorithms in clinical practice. Lastly, we highlight ongoing efforts to bridge gaps in AI model data sets to pursue equitable development and delivery.
Affiliation(s)
- Anand E. Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Oliver Q. Davidson
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
41
Oikonomou EK, Khera R. Machine learning in precision diabetes care and cardiovascular risk prediction. Cardiovasc Diabetol 2023; 22:259. [PMID: 37749579 PMCID: PMC10521578 DOI: 10.1186/s12933-023-01985-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 09/07/2023] [Indexed: 09/27/2023] Open
Abstract
Artificial intelligence and machine learning are driving a paradigm shift in medicine, promising data-driven, personalized solutions for managing diabetes and the excess cardiovascular risk it poses. In this comprehensive review of machine learning applications in the care of patients with diabetes at increased cardiovascular risk, we offer a broad overview of various data-driven methods and how they may be leveraged in developing predictive models for personalized care. We review existing as well as expected artificial intelligence solutions in the context of diagnosis, prognostication, phenotyping, and treatment of diabetes and its cardiovascular complications. In addition to discussing the key properties of such models that enable their successful application in complex risk prediction, we define challenges that arise from their misuse and the role of methodological standards in overcoming these limitations. We also identify key issues in equity and bias mitigation in healthcare and discuss how the current regulatory framework should ensure the efficacy and safety of medical artificial intelligence products in transforming cardiovascular care and outcomes in diabetes.
Affiliation(s)
- Evangelos K Oikonomou
- Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Rohan Khera
- Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Section of Health Informatics, Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, USA
- Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, 195 Church St, 6th floor, New Haven, CT, 06510, USA
42
Nakayama LF, Zago Ribeiro L, Novaes F, Miyawaki IA, Miyawaki AE, de Oliveira JAE, Oliveira T, Malerbi FK, Regatieri CVS, Celi LA, Silva PS. Artificial intelligence for telemedicine diabetic retinopathy screening: a review. Ann Med 2023; 55:2258149. [PMID: 37734417 PMCID: PMC10515659 DOI: 10.1080/07853890.2023.2258149] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/14/2023] [Accepted: 08/31/2023] [Indexed: 09/23/2023] Open
Abstract
PURPOSE This study aims to compare artificial intelligence (AI) systems applied in diabetic retinopathy (DR) teleophthalmology screening, currently deployed systems, fairness initiatives, and the challenges for implementation. METHODS The review included articles retrieved from a PubMed/Medline/EMBASE literature search strategy regarding telemedicine, DR, and AI. The screening criteria included human studies in English, Portuguese, or Spanish related to telemedicine and AI for DR screening. The authors' affiliations and each study's population income group were classified according to the World Bank Country and Lending Groups. RESULTS The literature search yielded a total of 132 articles, and nine were included after full-text assessment. The selected articles were published between 2004 and 2020 and were grouped as telemedicine systems, algorithms, economic analysis, and image quality assessment. Four telemedicine systems that perform quality assessment, image preprocessing, and pathological screening were reviewed. A data and post-deployment bias assessment is not performed in any of the algorithms, and none of the studies evaluate the social impact of implementations. There is a lack of representativeness in the reviewed articles, with most authors and target populations from high-income countries and no low-income country representation. CONCLUSIONS Telemedicine and AI hold great promise for augmenting decision-making in medical care, expanding patient access and enhancing cost-effectiveness. Economic studies and social science analysis are crucial to support the implementation of AI in teleophthalmology screening programs. Promoting fairness and generalizability in automated systems combined with telemedicine screening programs is not straightforward. Improving data representativeness, reducing biases and promoting equity in deployment and post-deployment studies are all critical steps in model development.
Affiliation(s)
- Luis Filipe Nakayama
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Lucas Zago Ribeiro
- Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Frederico Novaes
- Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Talita Oliveira
- Department of Ophthalmology, São Paulo Federal University, Sao Paulo, Brazil
- Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, MA, USA
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Paolo S. Silva
- Beetham Eye Institute, Joslin Diabetes Centre, Harvard Medical School, Boston, MA, USA
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
43
Tan TF, Thirunavukarasu AJ, Jin L, Lim J, Poh S, Teo ZL, Ang M, Chan RVP, Ong J, Turner A, Karlström J, Wong TY, Stern J, Ting DSW. Artificial intelligence and digital health in global eye health: opportunities and challenges. Lancet Glob Health 2023; 11:e1432-e1443. [PMID: 37591589 DOI: 10.1016/s2214-109x(23)00323-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 06/26/2023] [Accepted: 07/04/2023] [Indexed: 08/19/2023]
Abstract
Global eye health is defined as the degree to which vision, ocular health, and function are maximised worldwide, thereby optimising overall wellbeing and quality of life. Improving eye health is a global priority as a key to unlocking human potential by reducing the morbidity burden of disease, increasing productivity, and supporting access to education. Although extraordinary progress fuelled by global eye health initiatives has been made over the last decade, there remain substantial challenges impeding further progress. The accelerated development of digital health and artificial intelligence (AI) applications provides an opportunity to transform eye health, from facilitating and increasing access to eye care to supporting clinical decision making with an objective, data-driven approach. Here, we explore the opportunities and challenges presented by digital health and AI in global eye health and describe how these technologies could be leveraged to improve global eye health. AI, telehealth, and emerging technologies have great potential, but require specific work to overcome barriers to implementation. We suggest that a global digital eye health task force could facilitate coordination of funding, infrastructural development, and democratisation of AI and digital health to drive progress forwards in this domain.
Affiliation(s)
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Arun J Thirunavukarasu
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Corpus Christi College, University of Cambridge, Cambridge, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Liyuan Jin
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Joshua Lim
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Stanley Poh
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Zhen Ling Teo
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Marcus Ang
- Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois College of Medicine, Urbana-Champaign, IL, USA
- Jasmine Ong
- Pharmacy Department, Singapore General Hospital, Singapore
- Angus Turner
- Lions Eye Institute, University of Western Australia, Nedlands, WA, Australia
- Jonas Karlström
- Duke-NUS Medical School, National University of Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore General Hospital, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Jude Stern
- The International Agency for the Prevention of Blindness, London, UK
- Daniel Shu-Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
44
Nath S, Rahimy E, Kras A, Korot E. Toward safer ophthalmic artificial intelligence via distributed validation on real-world data. Curr Opin Ophthalmol 2023; 34:459-463. [PMID: 37459329 DOI: 10.1097/icu.0000000000000986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/12/2023]
Abstract
PURPOSE OF REVIEW The current article provides an overview of the present approaches to algorithm validation, which are variable and largely self-determined, as well as solutions to address inadequacies. RECENT FINDINGS In the last decade alone, numerous machine learning applications have been proposed for ophthalmic diagnosis or disease monitoring. Remarkably, of these, less than 15 have received regulatory approval for implementation into clinical practice. Although there exists a vast pool of structured and relatively clean datasets from which to develop and test algorithms in the computational 'laboratory', real-world validation remains key to allow for safe, equitable, and clinically reliable implementation. Bottlenecks in the validation process stem from a striking paucity of regulatory guidance surrounding safety and performance thresholds, lack of oversight on critical postdeployment monitoring and context-specific recalibration, and inherent complexities of heterogeneous disease states and clinical environments. Implementation of secure, third-party, unbiased, pre and postdeployment validation offers the potential to address existing shortfalls in the validation process. SUMMARY Given the criticality of validation to the algorithm pipeline, there is an urgent need for developers, machine learning researchers, and end-user clinicians to devise a consensus approach, allowing for the rapid introduction of safe, equitable, and clinically valid machine learning implementations.
Affiliation(s)
- Siddharth Nath
- Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
- Ehsan Rahimy
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Ashley Kras
- Save Sight Institute, Sydney University, Sydney, Australia
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Retina Specialists of Michigan, Grand Rapids, Michigan, USA
45
Zhang J, Li Z, Lin H, Xue M, Wang H, Fang Y, Liu S, Huo T, Zhou H, Yang J, Xie Y, Xie M, Lu L, Liu P, Ye Z. Deep learning assisted diagnosis system: improving the diagnostic accuracy of distal radius fractures. Front Med (Lausanne) 2023; 10:1224489. PMID: 37663656. PMCID: PMC10471443. DOI: 10.3389/fmed.2023.1224489.
Abstract
Objectives To explore an intelligent detection technology based on deep learning algorithms to assist the clinical diagnosis of distal radius fractures (DRFs), and to compare it with human performance to verify the feasibility of this method. Methods A total of 3,240 patients (fracture: n = 1,620; normal: n = 1,620) were included in this study, with 3,276 wrist joint anteroposterior (AP) X-ray films (1,639 fractured, 1,637 normal) and 3,260 wrist joint lateral X-ray films (1,623 fractured, 1,637 normal). Patients were divided into training, validation, and test sets in a ratio of 7:1.5:1.5. The deep learning models were developed using the data from the training and validation sets, and their effectiveness was then evaluated using the data from the test set. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves and the area under the curve (AUC), together with accuracy, sensitivity, and specificity, and compared with that of medical professionals. Results The deep learning ensemble model had excellent accuracy (97.03%), sensitivity (95.70%), and specificity (98.37%) in detecting DRFs. For the AP view, accuracy was 97.75%, sensitivity 97.13%, and specificity 98.37%; for the lateral view, accuracy was 96.32%, sensitivity 94.26%, and specificity 98.37%. Counted per wrist joint, accuracy was 97.55%, sensitivity 98.36%, and specificity 96.73%. On these measures, the ensemble model outperformed both the orthopedic attending physician group and the radiology attending physician group. Conclusion This deep learning ensemble model has excellent performance in detecting DRFs on plain X-ray films. Using this artificial intelligence model as a second expert to assist clinical diagnosis is expected to improve the accuracy of diagnosing DRFs and enhance clinical work efficiency.
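The accuracy, sensitivity, and specificity figures quoted in this abstract follow directly from confusion-matrix counts. A minimal sketch of the three definitions (the counts below are illustrative round numbers, not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Screening metrics from confusion-matrix counts.

    tp/fn: diseased cases correctly/incorrectly classified;
    tn/fp: normal cases correctly/incorrectly classified.
    """
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,   # fraction of all correct calls
        "sensitivity": tp / (tp + fn),   # recall on diseased cases
        "specificity": tn / (tn + fp),   # recall on normal cases
    }

# Illustrative counts, not the paper's data:
m = diagnostic_metrics(tp=95, fn=5, tn=98, fp=2)
```

Note that accuracy mixes the two classes, which is why the abstract reports all three numbers: a balanced dataset like this one (1,620 vs. 1,620) makes accuracy meaningful, but sensitivity and specificity remain interpretable under any case mix.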
Affiliation(s)
- Jiayao Zhang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhimin Li: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Heng Lin: Department of Orthopedics, Nanzhang People’s Hospital, Nanzhang, China
- Mingdi Xue: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Honglin Wang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ying Fang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Songxiang Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Tongtong Huo: Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Hong Zhou: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jiaming Yang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yi Xie: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mao Xie: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lin Lu: Department of Orthopedics, Renmin Hospital of Wuhan University, Wuhan, China
- Pengran Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhewei Ye: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
46
Cleland CR, Rwiza J, Evans JR, Gordon I, MacLeod D, Burton MJ, Bascaran C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: a scoping review. BMJ Open Diabetes Res Care 2023; 11:e003424. PMID: 37532460. PMCID: PMC10401245. DOI: 10.1136/bmjdrc-2023-003424.
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally. There is growing evidence to support the use of artificial intelligence (AI) in diabetic eye care, particularly for screening populations at risk of sight loss from DR in low-income and middle-income countries (LMICs), where resources are most stretched. However, implementation into clinical practice remains limited. We conducted a scoping review to identify which AI tools have been used for DR in LMICs and to report their performance and relevant characteristics. In total, 81 articles were included. The reported sensitivities and specificities were generally high, providing evidence to support use in clinical practice. However, the majority of studies focused on sensitivity and specificity only, and there was limited information on cost, regulatory approvals, and whether the use of AI improved health outcomes. Further research that goes beyond reporting sensitivities and specificities is needed prior to wider implementation.
Affiliation(s)
- Charles R Cleland: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK; Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Justus Rwiza: Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Jennifer R Evans: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- Iris Gordon: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- David MacLeod: Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
- Matthew J Burton: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK; National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Covadonga Bascaran: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
47
Xie H, Li Z, Wu C, Zhao Y, Lin C, Wang Z, Wang C, Gu Q, Wang M, Zheng Q, Jiang J, Chen W. Deep learning for detecting visually impaired cataracts using fundus images. Front Cell Dev Biol 2023; 11:1197239. PMID: 37576595. PMCID: PMC10416247. DOI: 10.3389/fcell.2023.1197239.
Abstract
Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters, collected from three clinical centers, were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were trained to obtain the best model for the system. The performance of the system was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUCs of the best algorithm (DenseNet121) ranged from 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000) on the internal test dataset, from 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983) on the first external test dataset, and from 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989) on the second. In the comparison between the system and cataract specialists, the system performed better at detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
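Model selection here, as in most of the screening studies collected in this list, rests on the AUC. It equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which gives a simple rank-based reference implementation (the scores below are illustrative, not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a positive case
    outranks a negative one; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation gives AUC = 1.0; indistinguishable scores give 0.5.
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) computes the same quantity efficiently, but the pairwise form makes the interpretation explicit.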
Affiliation(s)
- He Xie: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Chengchao Wu: School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Yitian Zhao: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Chengmin Lin: Department of Ophthalmology, Wenzhou Hospital of Integrated Traditional Chinese and Western Medicine, Wenzhou, China
- Zhouqian Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Chenxi Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinyi Gu: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Minye Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinxiang Zheng: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Jiewei Jiang: School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Wei Chen: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
48
Zhang J, Lin H, Wang H, Xue M, Fang Y, Liu S, Huo T, Zhou H, Yang J, Xie Y, Xie M, Cheng L, Lu L, Liu P, Ye Z. Deep learning system assisted detection and localization of lumbar spondylolisthesis. Front Bioeng Biotechnol 2023; 11:1194009. PMID: 37539438. PMCID: PMC10394621. DOI: 10.3389/fbioe.2023.1194009.
Abstract
Objective: To explore a new deep learning (DL) object detection algorithm for the clinical auxiliary diagnosis of lumbar spondylolisthesis and compare it with doctors' evaluation to verify the effectiveness and feasibility of the DL algorithm in the diagnosis of lumbar spondylolisthesis. Methods: Lumbar lateral radiographs of 1,596 patients with lumbar spondylolisthesis were collected from three medical institutions, and senior orthopedic surgeons and radiologists jointly diagnosed and marked them to establish a database. The radiographs were randomly divided into a training set (n = 1,117), a validation set (n = 240), and a test set (n = 239) in a ratio of 0.7:0.15:0.15. We trained two DL models for automatic detection of spondylolisthesis and evaluated their diagnostic performance using precision-recall (PR) curves, areas under the curve, precision, recall, and F1-score. We then chose the model with the better performance and compared its results with the professionals' evaluation. Results: A total of 1,780 annotations were marked for training (1,242), validation (263), and testing (275). The Faster Region-based Convolutional Neural Network (R-CNN) showed better precision (0.935), recall (0.935), and F1-score (0.935) in the detection of spondylolisthesis, outperforming the doctor group's precision (0.927), recall (0.892), and F1-score (0.910). In addition, with the assistance of the DL model, the precision of the doctor group increased by 4.8%, the recall by 8.2%, and the F1-score by 6.4%, and the average diagnosis time per plain X-ray was shortened by 7.139 s. Conclusion: The DL detection algorithm is an effective method for the clinical diagnosis of lumbar spondylolisthesis. It can be used as an assistant expert to improve the accuracy of lumbar spondylolisthesis diagnosis and reduce clinical workloads.
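Unlike the per-image classifiers elsewhere in this list, a detection model such as the Faster R-CNN above is scored on matched annotations: precision over predicted boxes, recall over ground-truth boxes, and the F1-score as their harmonic mean. A minimal sketch (the counts are illustrative round numbers, not the study's):

```python
def detection_f1(tp, fp, fn):
    """Precision, recall, and F1 for an object detector.

    tp: predicted boxes matched to ground-truth annotations;
    fp: unmatched (spurious) predictions;
    fn: ground-truth annotations the detector missed.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative: 187 matched boxes, 13 spurious, 13 missed.
p, r, f = detection_f1(tp=187, fp=13, fn=13)
```

When precision and recall are equal, as in the paper's 0.935/0.935 result, the harmonic mean collapses to the same value, which is why all three figures coincide.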
Affiliation(s)
- Jiayao Zhang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Heng Lin: Department of Orthopedics, Nanzhang People’s Hospital, Nanzhang, China
- Honglin Wang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mingdi Xue: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ying Fang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Songxiang Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Tongtong Huo: Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Zhou: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jiaming Yang: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yi Xie: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mao Xie: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Liangli Cheng: Department of Orthopedics, Daye People’s Hospital, Daye, China
- Lin Lu: Department of Orthopedics, Renmin Hospital of Wuhan University, Wuhan, China
- Pengran Liu: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhewei Ye: Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
49
Penha FM, Priotto BM, Hennig F, Przysiezny B, Wiethorn BA, Orsi J, Nagel IBF, Wiggers B, Stuchi JA, Lencione D, de Souza Prado PV, Yamanaka F, Lojudice F, Malerbi FK. Single retinal image for diabetic retinopathy screening: performance of a handheld device with embedded artificial intelligence. Int J Retina Vitreous 2023; 9:41. PMID: 37430345. DOI: 10.1186/s40942-023-00477-6.
Abstract
BACKGROUND Diabetic retinopathy (DR) is a leading cause of blindness. Our objective was to evaluate the performance of an artificial intelligence (AI) system integrated into a handheld smartphone-based retinal camera for DR screening using a single retinal image per eye. METHODS Images were obtained by trained operators from individuals with diabetes during a mass screening program for DR in Blumenau, Southern Brazil. Automatic analysis was conducted using an AI system (EyerMaps™, Phelcom Technologies LLC, Boston, USA) with one macula-centered, 45-degree field of view retinal image per eye. The results were compared to the assessment by a retinal specialist, considered the ground truth, using two images per eye. Patients with ungradable images were excluded from the analysis. RESULTS A total of 686 individuals (average age 59.2 ± 13.3 years, 56.7% women, diabetes duration 12.1 ± 9.4 years) were included in the analysis. The rates of insulin use, daily glycemic monitoring, and systemic hypertension treatment were 68.4%, 70.2%, and 70.2%, respectively. Although 97.3% of patients were aware of the risk of blindness associated with diabetes, more than half of them underwent their first retinal examination during the event. The majority (82.5%) relied exclusively on the public health system. Approximately 43.4% of individuals were either illiterate or had not completed elementary school. DR classification based on the ground truth was as follows: absent or mild nonproliferative DR 86.9%; more than mild (mtm) DR 13.1%. The AI system achieved sensitivity, specificity, positive predictive value, and negative predictive value percentages (95% CI) for mtmDR of 93.6% (87.8-97.2), 71.7% (67.8-75.4), 42.7% (39.3-46.2), and 98.0% (96.2-98.9), respectively. The area under the ROC curve was 86.4%. CONCLUSION The portable retinal camera combined with AI demonstrated high sensitivity for DR screening using only one image per eye, offering a simpler protocol compared with the traditional approach of two images per eye. Simplifying the DR screening process could enhance adherence rates and overall program coverage.
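The contrast in this abstract between a high sensitivity (93.6%) and a modest positive predictive value (42.7%) is a prevalence effect: with mtmDR at only 13.1%, even a moderately specific test produces many false positives per true positive. Bayes' rule makes the relationship explicit; the sketch below uses illustrative round numbers rather than the study's exact figures, whose PPV also depends on the per-eye case mix after exclusions:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a screening test at a given disease
    prevalence, via Bayes' rule (all arguments in [0, 1])."""
    tp = sensitivity * prevalence                  # true positives
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positives
    fn = (1.0 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1.0 - prevalence)          # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Illustrative round numbers, not the study's data:
ppv, npv = predictive_values(sensitivity=0.9, specificity=0.8, prevalence=0.5)
```

Lowering the prevalence argument while holding sensitivity and specificity fixed drives PPV down and NPV up, which is the pattern the study reports.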
Affiliation(s)
- Fernando Marcondes Penha: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil; Botelho Hospital da Visão, Rua 2 de Setembro, 2958, Blumenau, 89052-504, SC, Brazil
- Bruna Milene Priotto: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Francini Hennig: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Bernardo Przysiezny: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Bruno Antunes Wiethorn: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Julia Orsi: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Brenda Wiggers: Fundacao Universidade Regional de Blumenau, Rua Antonio Veiga 140, Blumenau, 89030-903, SC, Brazil
- Fernando Lojudice: Bayer Healthcare - Brazil, São Paulo, SP, Brazil; Cell and Molecular Therapy Center (NUCEL), School of Medicine, University of São Paulo, São Paulo, SP, Brazil
- Fernando Korn Malerbi: Department of Ophthalmology, Federal University of São Paulo (UNIFESP), São Paulo, SP, Brazil
50
Casey AE, Ansari S, Nakisa B, Kelly B, Brown P, Cooper P, Muhammad I, Livingstone S, Reddy S, Makinen VP. Application of a Comprehensive Evaluation Framework to COVID-19 Studies: Systematic Review of Translational Aspects of Artificial Intelligence in Health Care. JMIR AI 2023; 2:e42313. PMID: 37457747. PMCID: PMC10337329. DOI: 10.2196/42313.
Abstract
Background Despite immense progress in artificial intelligence (AI) models, there has been limited deployment in health care environments. The gap between potential and actual AI applications is likely due to the lack of translatability between the controlled research environments where these models are developed and the clinical environments for which the AI tools are ultimately intended. Objective We previously developed the Translational Evaluation of Healthcare AI (TEHAI) framework to assess the translational value of AI models and to support a successful transition to health care environments. In this study, we applied the TEHAI framework to the COVID-19 literature to assess how well translational topics are covered. Methods A systematic literature search for COVID-19 AI studies published between December 2019 and December 2020 resulted in 3830 records. A subset of 102 (2.7%) papers that passed the inclusion criteria was sampled for full review. The papers were assessed for translational value, and descriptive data were collected, by 9 reviewers (each study was assessed by 2 reviewers). Evaluation scores and extracted data were compared by a third reviewer for resolution of discrepancies. The review process was conducted on the Covidence software platform. Results We observed a significant trend for studies to attain high scores for technical capability but low scores in the areas essential for clinical translatability. Specific questions regarding external model validation, safety, nonmaleficence, and service adoption received failing scores in most studies. Conclusions Using TEHAI, we identified notable gaps in how well translational topics of AI models are covered in the COVID-19 clinical sphere. These gaps in areas crucial for clinical translatability could, and should, be considered at the model development stage to increase translatability into real COVID-19 health care environments.
Affiliation(s)
- Aaron Edward Casey: South Australian Health and Medical Research Institute, Adelaide, Australia; Australian Centre for Precision Health, Cancer Research Institute, University of South Australia, Adelaide, Australia
- Saba Ansari: School of Medicine, Deakin University, Geelong, Australia
- Bahareh Nakisa: School of Information Technology, Deakin University, Geelong, Australia
- Paul Cooper: School of Medicine, Deakin University, Geelong, Australia
- Sandeep Reddy: School of Medicine, Deakin University, Geelong, Australia
- Ville-Petteri Makinen: South Australian Health and Medical Research Institute, Adelaide, Australia; Australian Centre for Precision Health, Cancer Research Institute, University of South Australia, Adelaide, Australia; Computational Medicine, Faculty of Medicine, University of Oulu, Oulu, Finland; Centre for Life Course Health Research, Faculty of Medicine, University of Oulu, Oulu, Finland