1. Liu L, Hong J, Wu Y, Liu S, Wang K, Li M, Zhao L, Liu Z, Li L, Cui T, Tsui CK, Xu F, Hu W, Yun D, Chen X, Shang Y, Bi S, Wei X, Lai Y, Lin D, Fu Z, Deng Y, Cai K, Xie Y, Cao Z, Wang D, Zhang X, Dongye M, Lin H, Wu X. Digital ray: enhancing cataractous fundus images using style transfer generative adversarial networks to improve retinopathy detection. Br J Ophthalmol 2024; 108:1423-1429. PMID: 38839251. DOI: 10.1136/bjo-2024-325403.
Abstract
BACKGROUND/AIMS The aim of this study was to develop and evaluate digital ray, based on preoperative and postoperative image pairs using style transfer generative adversarial networks (GANs), to enhance cataractous fundus images for improved retinopathy detection. METHODS For eligible cataract patients, preoperative and postoperative colour fundus photographs (CFP) and ultra-wide field (UWF) images were captured. Both the original CycleGAN and a modified CycleGAN (C2ycleGAN) framework were adopted for image generation and quantitatively compared using Frechet Inception Distance (FID) and Kernel Inception Distance (KID). Additionally, CFP and UWF images from another cataract cohort were used to test model performance. Different panels of ophthalmologists evaluated the quality, authenticity and diagnostic efficacy of the generated images. RESULTS A total of 959 CFP and 1009 UWF image pairs were included in model development. FID and KID indicated that images generated by C2ycleGAN presented significantly improved quality. Based on ophthalmologists' average ratings, the percentage of inadequate-quality images decreased from 32% to 18.8% for CFP, and from 18.7% to 14.7% for UWF. Only 24.8% and 13.8% of generated CFP and UWF images, respectively, could be recognised as synthetic. The accuracy of retinopathy detection significantly increased from 78% to 91% for CFP and from 91% to 93% for UWF. For retinopathy subtype diagnosis, accuracies also increased, from 87%-94% to 91%-100% for CFP and from 87%-95% to 93%-97% for UWF. CONCLUSION Digital ray could generate realistic postoperative CFP and UWF images with enhanced quality and accuracy for overall detection and subtype diagnosis of retinopathies, especially for CFP. TRIAL REGISTRATION NUMBER This study was registered with ClinicalTrials.gov (NCT05491798).
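The two distribution-level metrics used to compare the CycleGAN variants can be sketched briefly. This is a minimal illustration of FID and KID on random feature vectors, assuming (hypothetically) that Inception-style embeddings have already been extracted from real postoperative and generated fundus images; the arrays below are stand-ins, not the study's data.

```python
import numpy as np
from scipy import linalg

def fid(feat_a, feat_b):
    """Frechet Inception Distance between two Gaussians fitted to feature sets."""
    mu1, mu2 = feat_a.mean(axis=0), feat_b.mean(axis=0)
    s1 = np.cov(feat_a, rowvar=False)
    s2 = np.cov(feat_b, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)          # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real               # discard numerical imaginary residue
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2.0 * covmean))

def kid(feat_a, feat_b, degree=3):
    """Kernel Inception Distance: unbiased MMD^2 with a polynomial kernel."""
    d = feat_a.shape[1]
    kern = lambda x, y: (x @ y.T / d + 1.0) ** degree
    kxx, kyy, kxy = kern(feat_a, feat_a), kern(feat_b, feat_b), kern(feat_a, feat_b)
    n, m = len(feat_a), len(feat_b)
    return float((kxx.sum() - np.trace(kxx)) / (n * (n - 1))
                 + (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
                 - 2.0 * kxy.mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(400, 32))                        # "real postoperative" features
close = real + rng.normal(scale=0.05, size=real.shape)   # a good generator's features
far = rng.normal(loc=1.0, size=(400, 32))                # a poor generator's features
print(f"FID close: {fid(real, close):.3f}, FID far: {fid(real, far):.3f}")
print(f"KID close: {kid(real, close):.4f}, KID far: {kid(real, far):.4f}")
```

In practice the features come from a pretrained Inception network, and packaged implementations (e.g. the torch-fidelity library) are normally used; the formulas above are just the quantities being compared.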
Affiliation(s)
- Lixue Liu: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiaming Hong: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yuxuan Wu: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Shaopeng Liu: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Kai Wang: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Mingyuan Li: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Lanqin Zhao: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zhenzhen Liu: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Longhui Li: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Tingxin Cui: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Ching-Kit Tsui: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Fabao Xu: Qilu Hospital of Shandong University, Jinan, Shandong, China
- Weiling Hu: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Dongyuan Yun: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xi Chen: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yuanjun Shang: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Shaowei Bi: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaoyue Wei: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yunxi Lai: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Duoru Lin: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Zhe Fu: Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Yaru Deng: Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Kaimin Cai: Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Yi Xie: Sun Yat-sen University Zhongshan School of Medicine, Guangzhou, Guangdong, China
- Zizheng Cao: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Dongni Wang: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xulin Zhang: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Meimei Dongye: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Haotian Lin: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Xiaohang Wu: Zhongshan Ophthalmic Center State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
2. Assaf JF, Abou Mrad A, Reinstein DZ, Amescua G, Zakka C, Archer TJ, Yammine J, Lamah E, Haykal M, Awwad ST. Creating realistic anterior segment optical coherence tomography images using generative adversarial networks. Br J Ophthalmol 2024; 108:1414-1422. PMID: 38697800. DOI: 10.1136/bjo-2023-324633.
Abstract
AIMS To develop a generative adversarial network (GAN) capable of generating realistic high-resolution anterior segment optical coherence tomography (AS-OCT) images. METHODS This study included 142 628 AS-OCT B-scans from the American University of Beirut Medical Center. The Style and WAvelet based GAN architecture was trained to generate realistic AS-OCT images and was evaluated through the Fréchet Inception Distance (FID) score and a blinded assessment by three refractive surgeons who were asked to distinguish between real and generated images. To assess the suitability of the generated images for machine learning tasks, a convolutional neural network (CNN) was trained on a classification task using a dataset of real and generated images. The generated AS-OCT images were then upsampled using an enhanced super-resolution GAN (ESRGAN) to achieve high resolution. RESULTS The generated images exhibited visual and quantitative similarity to real AS-OCT images. Quantitative similarity assessed using FID scored an average of 6.32. Surgeons scored 51.7% in identifying real versus generated images, which was not significantly better than chance (p>0.3). The CNN accuracy improved from 78% to 100% when synthetic images were added to the dataset. The ESRGAN-upsampled images were objectively more realistic and accurate than those from traditional upsampling techniques, scoring a lower Learned Perceptual Image Patch Similarity (LPIPS) of 0.0905 compared with 0.4244 for bicubic interpolation. CONCLUSIONS This study successfully developed and leveraged GANs capable of generating high-definition synthetic AS-OCT images that are realistic and suitable for machine learning and image analysis tasks.
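The chance-level comparison behind the surgeons' 51.7% real-versus-generated score can be sketched with a two-sided binomial test. The image count below is a hypothetical stand-in, since the abstract does not state how many images each surgeon judged:

```python
# Two-sided binomial test of a discrimination score against chance (50%).
# n_images = 120 is hypothetical; 62/120 ~= 51.7% matches the reported rate.
from scipy.stats import binomtest

n_images = 120
n_correct = 62
result = binomtest(n_correct, n_images, p=0.5, alternative="two-sided")
print(f"accuracy = {n_correct / n_images:.1%}, p = {result.pvalue:.3f}")
```

A p value well above 0.05 means the raters cannot reliably tell synthetic images from real ones, which is the desired outcome for a generative model.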
Affiliation(s)
- Jad F Assaf: Faculty of Medicine, American University of Beirut, Beirut, Lebanon; Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Dan Z Reinstein: London Vision Clinic, London, UK; Reinstein Vision, London, UK; Columbia University Medical Center, New York, NY, USA; Sorbonne Université, Paris, France; Biomedical Science Research Institute, Ulster University, Coleraine, UK
- Cyril Zakka: Department of Cardiothoracic Surgery, Stanford University, Stanford, California, USA
- Jeffrey Yammine: Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Elsa Lamah: Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Michèle Haykal: Faculty of Medicine, Saint Joseph University, Beirut, Lebanon
- Shady T Awwad: Department of Ophthalmology, American University of Beirut Medical Center, Beirut, Lebanon
3. Sonmez SC, Sevgi M, Antaki F, Huemer J, Keane PA. Generative artificial intelligence in ophthalmology: current innovations, future applications and challenges. Br J Ophthalmol 2024; 108:1335-1340. PMID: 38925907. DOI: 10.1136/bjo-2024-325458.
Abstract
The rapid advancements in generative artificial intelligence are set to significantly influence the medical sector, particularly ophthalmology. Generative adversarial networks and diffusion models enable the creation of synthetic images, aiding the development of deep learning models tailored for specific imaging tasks. Additionally, the advent of multimodal foundation models, capable of generating images, text and videos, presents a broad spectrum of applications within ophthalmology. These range from enhancing diagnostic accuracy to improving patient education and training healthcare professionals. Despite the promising potential, this area of technology is still in its infancy, and several challenges remain to be addressed, including data bias, safety concerns and the practical implementation of these technologies in clinical settings.
Affiliation(s)
- Mertcan Sevgi: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
- Fares Antaki: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK; The CHUM School of Artificial Intelligence in Healthcare, Montreal, Quebec, Canada
- Josef Huemer: Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK; Department of Ophthalmology and Optometry, Kepler University Hospital, Linz, Austria
- Pearse A Keane: Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, NIHR Moorfields Biomedical Research Centre, London, UK
4. Inouye K, Petrosyan A, Moskalensky L, Thankam FG. Artificial intelligence in therapeutic management of hyperlipidemic ocular pathology. Exp Eye Res 2024; 245:109954. PMID: 38838975. DOI: 10.1016/j.exer.2024.109954.
Abstract
Hyperlipidemia has many ocular manifestations, the most prevalent being retinal vascular occlusion (RVO). Hyperlipidemic lesions and occlusions of the vessels supplying the retina result in permanent blindness, necessitating prompt detection and treatment. RVO is diagnosed using different imaging modalities, including optical coherence tomography angiography. These diagnostic techniques obtain images representing the blood flow through the retinal vessels, providing an opportunity for AI to utilize image recognition to detect blockages and abnormalities before patients present with symptoms. AI is already being used as a non-invasive method to detect RVO and other vascular pathology, as well as to predict treatment outcomes. As providers see an increase in patients presenting with new RVO, the use of AI to detect and treat these conditions has the potential to improve patient outcomes and reduce the financial burden on the healthcare system. This article examines the implications of AI for current management strategies of RVO in hyperlipidemia and recent developments in AI technology for the management of ocular diseases.
Affiliation(s)
- Keiko Inouye: Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Aelita Petrosyan: Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Liana Moskalensky: Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Finosh G Thankam: Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
5. Lim JI, Rachitskaya AV, Hallak JA, Gholami S, Alam MN. Artificial intelligence for retinal diseases. Asia Pac J Ophthalmol (Phila) 2024; 13:100096. PMID: 39209215. DOI: 10.1016/j.apjo.2024.100096.
Abstract
PURPOSE To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management and analysis of treatment outcomes of common retinal diseases. METHODS We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP) and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed AI algorithms could predict disease progression and response to treatment. Studies showed AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS AI applications for retinal diseases can assist the clinician, not only by disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.
Affiliation(s)
- Jennifer I Lim: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Aleksandra V Rachitskaya: Department of Ophthalmology at Case Western Reserve University, Cleveland Clinic Lerner College of Medicine, Cleveland Clinic Cole Eye Institute, United States
- Joelle A Hallak: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Sina Gholami: University of North Carolina at Charlotte, United States
- Minhaj N Alam: University of North Carolina at Charlotte, United States
6. Feng X, Xu K, Luo MJ, Chen H, Yang Y, He Q, Song C, Li R, Wu Y, Wang H, Tham YC, Ting DSW, Lin H, Wong TY, Lam DSC. Latest developments of generative artificial intelligence and applications in ophthalmology. Asia Pac J Ophthalmol (Phila) 2024; 13:100090. PMID: 39128549. DOI: 10.1016/j.apjo.2024.100090.
Abstract
The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization and innovation in clinical practice and medical research, through processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding in clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into the clinical workflows and scientific research of ophthalmology. It outlines the need for a standard framework for comprehensive assessments, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in the clinical service and research of ophthalmology, including data privacy, data bias, adaptation friction, overdependence, and job replacement, and summarizes a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care and improving operational efficiency in clinical service and research in ophthalmology. It also advocates for a balanced approach to its adoption.
Affiliation(s)
- Xiaoru Feng: School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China; Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China
- Kezheng Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ming-Jie Luo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haichao Chen: School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yangfan Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qi He: Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenxin Song: Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Ruiyao Li: Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- You Wu: Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China; School of Basic Medical Sciences, Tsinghua Medicine, Tsinghua University, Beijing, China; Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Haibo Wang: Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yih Chung Tham: Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Daniel Shu Wei Ting: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Tien Yin Wong: School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Dennis Shun-Chiu Lam: The International Eye Research Institute, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER International Eye Care Group, Hong Kong, Hong Kong, China
7. Borrelli E, Serafino S, Ricardi F, Coletto A, Neri G, Olivieri C, Ulla L, Foti C, Marolo P, Toro MD, Bandello F, Reibaldi M. Deep learning in neovascular age-related macular degeneration. Medicina (Kaunas) 2024; 60:990. PMID: 38929607; PMCID: PMC11205843. DOI: 10.3390/medicina60060990.
Abstract
Background and objectives: Age-related macular degeneration (AMD) is a complex and multifactorial condition that can lead to permanent vision loss once it progresses to the neovascular exudative stage. This review aims to summarize the use of deep learning in neovascular AMD. Materials and Methods: PubMed search. Results: Deep learning has demonstrated effectiveness in analyzing structural OCT images in patients with neovascular AMD. This review outlines the role of deep learning in identifying and measuring biomarkers linked to an elevated risk of transitioning to the neovascular form of AMD. Additionally, deep learning techniques can quantify critical OCT features associated with neovascular AMD, which have prognostic implications for these patients. Incorporating deep learning into the assessment of neovascular AMD eyes holds promise for enhancing clinical management strategies for affected individuals. Conclusion: Several studies have demonstrated the effectiveness of deep learning in assessing patients with neovascular AMD, and it holds a promising role in their clinical evaluation.
Affiliation(s)
- Enrico Borrelli: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Sonia Serafino: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Federico Ricardi: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Andrea Coletto: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Giovanni Neri: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Chiara Olivieri: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Lorena Ulla: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Claudio Foti: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Paola Marolo: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
- Mario Damiano Toro: Eye Clinic, Public Health Department, University of Naples Federico II, 80138 Naples, Italy
- Francesco Bandello: Department of Ophthalmology, Vita-Salute San Raffaele University, 20132 Milan, Italy; IRCCS San Raffaele Scientific Institute, 20132 Milan, Italy
- Michele Reibaldi: Division of Ophthalmology, Department of Surgical Sciences, University of Turin, Via Verdi, 8, 10124 Turin, Italy; Department of Ophthalmology, "City of Health and Science" Hospital, 10126 Turin, Italy
8. Waisberg E, Ong J, Kamran SA, Masalkhi M, Paladugu P, Zaman N, Lee AG, Tavakkoli A. Generative artificial intelligence in ophthalmology. Surv Ophthalmol 2024; S0039-6257(24)00044-4. PMID: 38762072. DOI: 10.1016/j.survophthal.2024.04.009.
Abstract
Generative AI has revolutionized medicine over the past several years. A generative adversarial network (GAN) is a deep learning framework that has become a powerful technique in medicine, particularly in ophthalmology and image analysis. In this paper, we review the current ophthalmic literature involving GANs and highlight key contributions in the field. We briefly touch on ChatGPT, another application of generative AI, and its potential in ophthalmology. We also explore the potential uses for GANs in ocular imaging, with a specific emphasis on 3 primary domains: image enhancement, disease identification, and generation of synthetic data. PubMed, Ovid MEDLINE, and Google Scholar were searched from inception to October 30, 2022, to identify applications of GANs in ophthalmology. A total of 40 papers were included in this review. We cover various applications of GANs in ophthalmic-related imaging, including optical coherence tomography, orbital magnetic resonance imaging, fundus photography, and ultrasound; however, we also highlight several challenges that resulted in the generation of inaccurate and atypical results during certain iterations. Finally, we examine future directions and considerations for generative AI in ophthalmology.
Affiliation(s)
- Ethan Waisberg: Department of Ophthalmology, University of Cambridge, Cambridge, United Kingdom
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, United States
- Sharif Amit Kamran: School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Mouayad Masalkhi: School of Medicine, University College Dublin, Belfield, Dublin, Ireland
- Phani Paladugu: Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania, United States; Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Nasif Zaman: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
- Andrew G Lee: Center for Space Medicine, Baylor College of Medicine, Houston, Texas, United States; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas, United States; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, Texas, United States; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, New York, United States; Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas, United States; University of Texas MD Anderson Cancer Center, Houston, Texas, United States; Texas A&M College of Medicine, Texas, United States; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, United States
- Alireza Tavakkoli: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, Nevada, United States
9. Bellemo V, Kumar Das A, Sreng S, Chua J, Wong D, Shah J, Jonas R, Tan B, Liu X, Xu X, Tan GSW, Agrawal R, Ting DSW, Yong L, Schmetterer L. Optical coherence tomography choroidal enhancement using generative deep learning. NPJ Digit Med 2024; 7:115. PMID: 38704440; PMCID: PMC11069520. DOI: 10.1038/s41746-024-01119-3.
Abstract
Spectral-domain optical coherence tomography (SDOCT) is the gold standard for imaging the eye in clinics. Penetration depth with such devices is, however, limited, and visualization of the choroid, which is essential for diagnosing chorioretinal disease, remains challenging. Whereas swept-source OCT (SSOCT) devices allow for visualization of the choroid, these instruments are expensive and their availability in routine practice is limited. We present an artificial intelligence (AI)-based solution to enhance the visualization of the choroid in OCT scans and allow for quantitative measurements of choroidal metrics using generative deep learning (DL). Synthetically enhanced SDOCT B-scans with improved choroidal visibility were generated, leveraging matched image pairs to learn deep anatomical features during training. Using a single-center tertiary eye care institution cohort comprising a total of 362 SDOCT-SSOCT paired subjects, we trained our model with 150,784 images from 410 healthy, 192 glaucoma, and 133 diabetic retinopathy eyes. An independent external test dataset of 37,376 images from 146 eyes was deployed to assess the authenticity and quality of the synthetically enhanced SDOCT images. Experts' ability to differentiate real versus synthetic images was poor (47.5% accuracy). Measurements of choroidal thickness, area, volume, and vascularity index from the reference SSOCT and the synthetically enhanced SDOCT images showed high Pearson's correlations of 0.97 [95% CI: 0.96-0.98], 0.97 [0.95-0.98], 0.95 [0.92-0.98], and 0.87 [0.83-0.91], with intra-class correlation values of 0.99 [0.98-0.99], 0.98 [0.98-0.99], 0.95 [0.96-0.98], and 0.93 [0.91-0.95], respectively. Thus, our DL generative model successfully generated realistic enhanced SDOCT data that are indistinguishable from SSOCT images, providing improved visualization of the choroid. This technology enabled accurate measurements of choroidal metrics previously limited by the imaging depth constraints of SDOCT. The findings open new possibilities for utilizing affordable SDOCT devices to study the choroid in both healthy and pathological conditions.
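The agreement statistic central to the abstract above, Pearson's correlation between paired SSOCT and enhanced-SDOCT measurements, can be computed in a few lines. A minimal sketch (the sample arrays are illustrative only, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two paired measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    # covariance over the product of standard deviations (unnormalized form)
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Perfectly linearly related series give r = 1.0
assert abs(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-9
```

In practice `scipy.stats.pearsonr` would also return a p-value and can be combined with bootstrap resampling to obtain confidence intervals like those reported above.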
Collapse
Affiliation(s)
- Valentina Bellemo
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
| | - Ankit Kumar Das
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
| | - Syna Sreng
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
| | - Jacqueline Chua
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Damon Wong
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Centre for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
| | - Janika Shah
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Rahul Jonas
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department Ophthalmology, Cologne, Germany
| | - Bingyao Tan
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department Ophthalmology, Cologne, Germany
| | - Xinyu Liu
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
| | - Gavin Siew Wei Tan
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Rupesh Agrawal
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University (NTU), Singapore, Singapore
| | - Daniel Shu Wei Ting
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Liu Yong
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore.
- Institute of High Performance Computing, Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore.
| | - Leopold Schmetterer
- Singapore Eye Research Institute, National Eye Centre, Singapore, Singapore.
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore.
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore.
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore.
- Centre for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria.
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore.
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria.
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland.
| |
Collapse
|
10
|
Kwon HJ, Heo J, Park SH, Park SW, Byon I. Accuracy of generative deep learning model for macular anatomy prediction from optical coherence tomography images in macular hole surgery. Sci Rep 2024; 14:6913. [PMID: 38519532 PMCID: PMC10959933 DOI: 10.1038/s41598-024-57562-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 03/19/2024] [Indexed: 03/25/2024] Open
Abstract
This study aims to propose a generative deep learning model (GDLM) based on a variational autoencoder that predicts macular optical coherence tomography (OCT) images following full-thickness macular hole (FTMH) surgery and to evaluate its clinical accuracy. Preoperative and 6-month postoperative swept-source OCT data were collected from 150 patients with successfully closed FTMH using 6 × 6 mm2 macular volume scan datasets. A randomly selected and augmented set of 120,000 training and 5,000 validation pairs of OCT images was used to train the GDLM. We assessed the accuracy and F1 score of concordance for neurosensory retinal areas, performed Bland-Altman analysis of foveolar height (FH) and mean foveal thickness (MFT), and evaluated the accuracy of predicted postoperative external limiting membrane (ELM) and ellipsoid zone (EZ) restoration between artificial intelligence (AI)-OCT and ground truth (GT)-OCT images. Accuracy and F1 scores were 94.7% and 0.891, respectively. Average FH (228.2 vs. 233.4 μm, P = 0.587) and MFT (271.4 vs. 273.3 μm, P = 0.819) were similar between AI- and GT-OCT images, within 30.0% differences of the 95% limits of agreement. ELM and EZ recovery prediction accuracy was 88.0% and 92.0%, respectively. The proposed GDLM accurately predicted macular OCT images following FTMH surgery, aiding patient and surgeon understanding of postoperative macular features.
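The Bland-Altman analysis used above compares paired measurements (e.g., AI-OCT vs. GT-OCT foveolar height) via the mean difference and 95% limits of agreement. A minimal sketch, with made-up numbers rather than the study's measurements:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD of differences)
    between two paired measurement series."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_d = d.mean()
    sd_d = d.std(ddof=1)  # sample standard deviation of the differences
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

# Differences [1, -1, 1, -1]: zero bias, symmetric limits of agreement
bias, (lo, hi) = bland_altman_limits([1, 3, 5, 7], [0, 4, 4, 8])
assert abs(bias) < 1e-9 and lo < 0 < hi
```

A full Bland-Altman plot would additionally scatter each pair's difference against its mean to reveal any magnitude-dependent bias.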
Collapse
Affiliation(s)
- Han Jo Kwon
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
| | - Jun Heo
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
| | - Su Hwan Park
- Department of Ophthalmology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Geumo-ro 20, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, 50612, South Korea
| | - Sung Who Park
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea
| | - Iksoo Byon
- Department of Ophthalmology, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Gudeok-ro 179, Seo-gu, Busan, 49241, South Korea.
| |
Collapse
|
11
|
Kim J, Chin HS. Deep learning-based prediction of the retinal structural alterations after epiretinal membrane surgery. Sci Rep 2023; 13:19275. [PMID: 37935769 PMCID: PMC10630279 DOI: 10.1038/s41598-023-46063-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 10/27/2023] [Indexed: 11/09/2023] Open
Abstract
To generate and evaluate synthesized postoperative OCT images of epiretinal membrane (ERM) based on preoperative OCT images using deep learning methodology. This study included a total of 500 pairs of preoperative and postoperative optical coherence tomography (OCT) images for training a neural network. Sixty preoperative OCT images were used to test the neural network's performance, and the corresponding postoperative OCT images were used to evaluate the synthesized images in terms of the structural similarity index measure (SSIM). The SSIM was used to quantify how similar each synthesized postoperative OCT image was to the actual postoperative OCT image. The Pix2Pix GAN model was used to generate the synthesized postoperative OCT images. A total of 60 synthesized OCT images were generated after training for 800 epochs. The mean SSIM of the synthesized postoperative OCT images relative to the actual postoperative OCT images was 0.913. The Pix2Pix GAN model shows potential for generating predictive postoperative OCT images following ERM removal surgery.
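The SSIM score reported above combines luminance, contrast, and structural terms. As a rough illustration (not the study's code), here is a simplified *global* SSIM computed over the whole image; production implementations such as `skimage.metrics.structural_similarity` instead average the index over local sliding windows:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM: one window covering the whole image.
    Standard SSIM averages this quantity over local windows instead."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images score 1.0; dissimilar images score lower
img = [[10.0, 20.0], [30.0, 40.0]]
assert abs(global_ssim(img, img) - 1.0) < 1e-9
```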
Collapse
Affiliation(s)
- Joseph Kim
- Retina Division, Nune Eye Hospital, Seoul, Republic of Korea
| | - Hee Seung Chin
- Department of Ophthalmology, Inha University School of Medicine, Incheon, Republic of Korea.
| |
Collapse
|
12
|
Paladugu PS, Ong J, Nelson N, Kamran SA, Waisberg E, Zaman N, Kumar R, Dias RD, Lee AG, Tavakkoli A. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. Ann Biomed Eng 2023; 51:2130-2142. [PMID: 37488468 DOI: 10.1007/s10439-023-03304-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 07/03/2023] [Indexed: 07/26/2023]
Abstract
The advent of artificial intelligence (AI) and machine learning (ML) has revolutionized the field of medicine. Although highly effective, the rapid expansion of this technology has created some anticipated and unanticipated bioethical considerations. With these powerful applications, there is a necessity for framework regulations to ensure equitable and safe deployment of technology. Generative Adversarial Networks (GANs) are emerging ML techniques that have immense applications in medical imaging due to their ability to produce synthetic medical images and aid in medical AI training. Producing accurate synthetic images with GANs can address current limitations in AI development for medical imaging and overcome current dataset type and size constraints. Offsetting these constraints can dramatically improve the development and implementation of AI medical imaging and restructure the practice of medicine. As observed with its other AI predecessors, considerations must be taken into place to help regulate its development for clinical use. In this paper, we discuss the legal, ethical, and technical challenges for future safe integration of this technology in the healthcare sector.
Collapse
Affiliation(s)
- Phani Srivatsav Paladugu
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
| | - Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
| | - Nicolas Nelson
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA, USA
| | - Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
| | - Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
| | - Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
| | | | - Roger Daglius Dias
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA
- STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, MA, USA
| | - Andrew Go Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | - Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA.
| |
Collapse
|
13
|
Muntean GA, Marginean A, Groza A, Damian I, Roman SA, Hapca MC, Muntean MV, Nicoară SD. The Predictive Capabilities of Artificial Intelligence-Based OCT Analysis for Age-Related Macular Degeneration Progression-A Systematic Review. Diagnostics (Basel) 2023; 13:2464. [PMID: 37510207 PMCID: PMC10378064 DOI: 10.3390/diagnostics13142464] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/16/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
The era of artificial intelligence (AI) has revolutionized our daily lives and AI has become a powerful force that is gradually transforming the field of medicine. Ophthalmology sits at the forefront of this transformation thanks to the effortless acquisition of an abundance of imaging modalities. There has been tremendous work in the field of AI for retinal diseases, with age-related macular degeneration being at the top of the most studied conditions. The purpose of the current systematic review was to identify and evaluate, in terms of strengths and limitations, the articles that apply AI to optical coherence tomography (OCT) images in order to predict the future evolution of age-related macular degeneration (AMD) during its natural history and after treatment in terms of OCT morphological structure and visual function. After a thorough search through seven databases up to 1 January 2022 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 1800 records were identified. After screening, 48 articles were selected for full-text retrieval and 19 articles were finally included. From these 19 articles, 4 articles concentrated on predicting the anti-VEGF requirement in neovascular AMD (nAMD), 4 articles focused on predicting anti-VEGF efficacy in nAMD patients, 3 articles predicted the conversion from early or intermediate AMD (iAMD) to nAMD, 1 article predicted the conversion from iAMD to geographic atrophy (GA), 1 article predicted the conversion from iAMD to both nAMD and GA, 3 articles predicted the future growth of GA and 3 articles predicted the future outcome for visual acuity (VA) after anti-VEGF treatment in nAMD patients. Since using AI methods to predict future changes in AMD is only in its initial phase, a systematic review provides the opportunity of setting the context of previous work in this area and can present a starting point for future research.
Collapse
Affiliation(s)
- George Adrian Muntean
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Anca Marginean
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
| | - Adrian Groza
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
| | - Ioana Damian
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Sara Alexia Roman
- Faculty of Medicine, "Iuliu Hatieganu" University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
| | - Mădălina Claudia Hapca
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| | - Maximilian Vlad Muntean
- Plastic Surgery Department, "Prof. Dr. I. Chiricuta" Institute of Oncology, 400015 Cluj-Napoca, Romania
| | - Simona Delia Nicoară
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
| |
Collapse
|
14
|
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. [PMID: 37425325 PMCID: PMC10324667 DOI: 10.3389/fmed.2023.1184892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Accepted: 05/30/2023] [Indexed: 07/11/2023] Open
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, the screening of AMD is resource dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which can be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, which may be addressed by generating synthetic images using Generative Adversarial Networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos were used from a real-world non-AMD phenotypical dataset. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and on the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust in that our residents had limited ability to distinguish real from synthetic ones, as evidenced by an overall accuracy of 0.66 (95% CI: 0.61-0.66) and Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training are capable of producing realistic-looking fundus images that can fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
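Cohen's kappa, used above to quantify agreement beyond chance between the residents' labels and the ground truth, is straightforward to compute. A minimal sketch with toy labels (1 = real, 0 = synthetic, coding assumed for illustration):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    n = len(y_true)
    # observed agreement
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # expected (chance) agreement from the marginal label frequencies
    labels = set(y_true) | set(y_pred)
    pe = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in labels
    )
    return (po - pe) / (1 - pe)

# Observed agreement 0.75, chance agreement 0.5 -> kappa = 0.5
assert abs(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]) - 0.5) < 1e-9
```

A kappa near 0.32, as reported, indicates only fair agreement: graders were barely better than chance at spotting synthetic images.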
Collapse
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
| | - Gilbert Lim
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien-En Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Jane Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Sing Hui Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Valencia Foo
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Joshua Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | | | - Feihui Zheng
- Singapore Eye Research Institute, Singapore, Singapore
| | - Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
| | - Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Ching-Yu Cheng
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Gemmy Chui Ming Cheung
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| | - Tien Yin Wong
- Singapore National Eye Centre, Singapore, Singapore
- School of Medicine, Tsinghua University, Beijing, China
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
| |
Collapse
|
15
|
Zbrzezny AM, Grzybowski AE. Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology. J Clin Med 2023; 12:jcm12093266. [PMID: 37176706 PMCID: PMC10179065 DOI: 10.3390/jcm12093266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 04/20/2023] [Accepted: 04/26/2023] [Indexed: 05/15/2023] Open
Abstract
The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have progressed significantly in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become significantly less complicated as a result of the development of AI algorithms, which are currently on par with ophthalmologists in terms of effectiveness. However, in the context of building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, with numerous articles discussing this topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, which included a thorough search of open-access research papers using online sources (PubMed and Google). The research provides examples of unique attack strategies for medical images. Unfortunately, attack algorithms specific to the various ophthalmic image types have yet to be developed; this remains an open task. As a result, it is necessary to build algorithms that validate the computation and explain the findings of artificial intelligence models. In this article, we focus on adversarial attacks, one of the most well-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that do not include provable guarantees. Adversarial attacks can produce inaccurate findings in deep learning systems and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and wrong diagnoses.
Collapse
Affiliation(s)
- Agnieszka M Zbrzezny
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
| | - Andrzej E Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznan, Poland
| |
Collapse
|
16
|
Moon S, Lee Y, Hwang J, Kim CG, Kim JW, Yoon WT, Kim JH. Prediction of anti-vascular endothelial growth factor agent-specific treatment outcomes in neovascular age-related macular degeneration using a generative adversarial network. Sci Rep 2023; 13:5639. [PMID: 37024576 PMCID: PMC10079864 DOI: 10.1038/s41598-023-32398-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Accepted: 03/27/2023] [Indexed: 04/08/2023] Open
Abstract
To develop an artificial intelligence (AI) model that predicts anti-vascular endothelial growth factor (VEGF) agent-specific anatomical treatment outcomes in neovascular age-related macular degeneration (AMD), thereby assisting clinicians in selecting the most suitable anti-VEGF agent for each patient. This retrospective study included patients diagnosed with neovascular AMD who received three loading injections of either ranibizumab or aflibercept. Training was performed using optical coherence tomography (OCT) images with an attention generative adversarial network (GAN) model. To test the performance of the AI model, the sensitivity and specificity for predicting the presence of retinal fluid after treatment were calculated for the AI model, an experienced human examiner (Examiner 1), and a less experienced human examiner (Examiner 2). A total of 1684 OCT images from 842 patients (419 treated with ranibizumab and 423 treated with aflibercept) were used as the training set. Testing was performed using images from 98 patients. In patients treated with ranibizumab, the sensitivity and specificity, respectively, were 0.615 and 0.667 for the AI model, 0.385 and 0.861 for Examiner 1, and 0.231 and 0.806 for Examiner 2. In patients treated with aflibercept, the sensitivity and specificity, respectively, were 0.857 and 0.881 for the AI model, 0.429 and 0.976 for Examiner 1, and 0.429 and 0.857 for Examiner 2. In 18.5% of cases, the fluid status of the synthetic posttreatment images differed between ranibizumab and aflibercept. The AI model using a GAN might predict anti-VEGF agent-specific short-term treatment outcomes with relatively higher sensitivity than human examiners. Additionally, there was a difference in efficacy in fluid resolution between the anti-VEGF agents. These results suggest the potential of AI in personalized medicine for patients with neovascular AMD.
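The sensitivity/specificity figures above derive directly from a 2×2 confusion table. A minimal sketch, with an assumed binary coding (1 = retinal fluid present, 0 = fluid resolved) and toy labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    for binary labels; 1 = fluid present, 0 = fluid resolved (assumed coding)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# 3 fluid-positive eyes, 2 correctly flagged; 2 fluid-negative eyes, 1 correct
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Note the trade-off visible in the study's numbers: the human examiners favored specificity, while the AI model favored sensitivity.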
Collapse
Affiliation(s)
- Sehwan Moon
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, South Korea
- MODULABS, Seoul, South Korea
| | - Youngsuk Lee
- INGRADIENT Inc., Seoul, South Korea
- MODULABS, Seoul, South Korea
| | - Jeongyoung Hwang
- AI Graduated School, Gwangju Institute of Science and Technology, Gwangju, South Korea
- MODULABS, Seoul, South Korea
| | - Chul Gu Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea
| | - Jong Woo Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea
| | - Won Tae Yoon
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea.
- Kim's Eye Hospital Data Center, Seoul, South Korea.
| | - Jae Hui Kim
- Department of Ophthalmology, Kim's Eye Hospital, #156 Youngdeungpo-dong 4ga, Youngdeungpo-gu, Seoul, 150-034, South Korea.
- Kim's Eye Hospital Data Center, Seoul, South Korea.
| |
Collapse
|
17
|
Zhang Y, Huang K, Li M, Yuan S, Chen Q. Learn Single-horizon Disease Evolution for Predictive Generation of Post-therapeutic Neovascular Age-related Macular Degeneration. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107364. [PMID: 36716636 DOI: 10.1016/j.cmpb.2023.107364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 01/16/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Most of the existing disease prediction methods in the field of medical image processing fall into two classes, namely image-to-category predictions and image-to-parameter predictions.Few works have focused on image-to-image predictions. Different from multi-horizon predictions in other fields, ophthalmologists prefer to show more confidence in single-horizon predictions due to the low tolerance of predictive risk. METHODS We propose a single-horizon disease evolution network (SHENet) to predictively generate post-therapeutic SD-OCT images by inputting pre-therapeutic SD-OCT images with neovascular age-related macular degeneration (nAMD). In SHENet, a feature encoder converts the input SD-OCT images to deep features, then a graph evolution module predicts the process of disease evolution in high-dimensional latent space and outputs the predicted deep features, and lastly, feature decoder recovers the predicted deep features to SD-OCT images. We further propose an evolution reinforcement module to ensure the effectiveness of disease evolution learning and obtain realistic SD-OCT images by adversarial training. RESULTS SHENet is validated on 383 SD-OCT cubes of 22 nAMD patients based on three well-designed schemes (P-0, P-1 and P-M) based on the quantitative and qualitative evaluations. Three metrics (PSNR, SSIM, 1-LPIPS) are used here for quantitative evaluations. Compared with other generative methods, the generative SD-OCT images of SHENet have the highest image quality (P-0: 23.659, P-1: 23.875, P-M: 24.198) by PSNR. Besides, SHENet achieves the best structure protection (P-0: 0.326, P-1: 0.337, P-M: 0.349) by SSIM and content prediction (P-0: 0.609, P-1: 0.626, P-M: 0.642) by 1-LPIPS. Qualitative evaluations also demonstrate that SHENet has a better visual effect than other methods. 
CONCLUSIONS SHENet can generate post-therapeutic SD-OCT images with both high prediction performance and good image quality, which has great potential to help ophthalmologists forecast the therapeutic effect of nAMD.
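The three quantitative metrics named above are generic image-similarity measures rather than anything SHENet-specific. PSNR, the simplest of the three, can be sketched in a few lines of plain Python; the 2x2 "images" below are toy nested lists, not real SD-OCT B-scans, and SSIM/LPIPS would require library or network support.

```python
import math

def psnr(real, generated, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized grayscale images.

    Higher is better; SHENet's reported PSNR values fall in the ~23-24 dB range.
    """
    flat_real = [p for row in real for p in row]
    flat_gen = [p for row in generated for p in row]
    mse = sum((r - g) ** 2 for r, g in zip(flat_real, flat_gen)) / len(flat_real)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 example: a uniform offset of 16 gray levels gives an MSE of 256
real_img = [[100, 100], [100, 100]]
gen_img = [[116, 116], [116, 116]]
print(round(psnr(real_img, gen_img), 2))  # -> 24.05
```

In practice one would typically use a library implementation (e.g. scikit-image's `peak_signal_noise_ratio`) on real image arrays.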
Affiliation(s)
- Yuhan Zhang: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Kun Huang: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Mingchao Li: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, 210094, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
18
Prediction of OCT images of short-term response to anti-VEGF treatment for diabetic macular edema using different generative adversarial networks. Photodiagnosis Photodyn Ther 2023; 41:103272. [PMID: 36632873 DOI: 10.1016/j.pdpdt.2023.103272] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 01/03/2023] [Accepted: 01/03/2023] [Indexed: 01/11/2023]
Abstract
PURPOSE This study sought to assess the predictive performance of optical coherence tomography (OCT) images, generated from baseline images using generative adversarial networks (GANs), for the response of diabetic macular edema (DME) patients to anti-vascular endothelial growth factor (VEGF) therapy. METHODS Patient information, including clinical and imaging data, was obtained from inpatients at the Ophthalmology Department of Qilu Hospital. 715 and 103 pairs of pre- and post-treatment OCT images of DME patients were included in the training and validation sets, respectively. The real post-treatment OCT images were used to assess the validity of the generated images. Six different GAN models (CycleGAN, PairGAN, Pix2pixHD, RegGAN, SPADE, UNIT) were applied to predict the efficacy of anti-VEGF treatment by generating OCT images. Independent screening and evaluation experiments were conducted to validate the quality and comparability of images generated by the different GAN models. RESULTS OCT images generated by the GAN models exhibited high comparability to the real images, especially for edema absorption. RegGAN exhibited the highest prediction accuracy over the CycleGAN, PairGAN, Pix2pixHD, SPADE, and UNIT models, so further analyses were conducted based on RegGAN. Most post-therapeutic OCT images (95/103) were difficult for retinal specialists to differentiate from the real OCT images. A mean absolute error of 26.74 ± 21.28 μm was observed for central macular thickness (CMT) between the synthetic and real OCT images. CONCLUSION Different generative adversarial networks have different prognostic efficacy for DME, and RegGAN yielded the best performance in our study. The GAN models yielded good accuracy in predicting the OCT-based response to anti-VEGF treatment at one month. Overall, GAN models can assist clinicians in predicting the prognosis of patients with DME to design better treatment strategies and follow-up schedules.
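The CMT error reported above is a plain mean absolute error over per-eye thickness readings. A minimal sketch, with hypothetical values since the paper's per-eye data are not given in the abstract:

```python
def mean_absolute_error(real_cmt, synthetic_cmt):
    """Mean absolute error (in micrometres) between real and GAN-predicted
    central macular thickness readings, one value per validation eye."""
    assert len(real_cmt) == len(synthetic_cmt)
    return sum(abs(r - s) for r, s in zip(real_cmt, synthetic_cmt)) / len(real_cmt)

# Hypothetical CMT readings (um) for three validation eyes
real = [310.0, 402.5, 288.0]
synthetic = [295.0, 420.0, 300.0]
print(round(mean_absolute_error(real, synthetic), 2))  # -> 14.83
```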
19
Yang J, Wu S, Dai R, Yu W, Chen Y. Publication trends of artificial intelligence in retina in 10 years: Where do we stand? Front Med (Lausanne) 2022; 9:1001673. [PMID: 36405613 PMCID: PMC9666394 DOI: 10.3389/fmed.2022.1001673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Accepted: 09/20/2022] [Indexed: 11/25/2022] Open
Abstract
PURPOSE Artificial intelligence (AI) has been applied in the field of retina. The purpose of this study was to analyze study trends within AI in retina by reporting on publication trends and to identify the journals, countries, authors, international collaborations, and keywords involved. MATERIALS AND METHODS A cross-sectional study. Bibliometric methods were used to evaluate global production and development trends in AI in retina since 2012 using the Web of Science Core Collection. RESULTS A total of 599 publications were ultimately retrieved. We found that AI in retina is a very attractive topic in the scientific and medical communities. No journal was found to specialize in AI in retina. The USA, China, and India were the three most productive countries. Authors from Austria, Singapore, and England also had worldwide academic influence. China has shown the most rapid increase in publication numbers. International collaboration could increase influence in this field. Keyword analysis revealed that diabetic retinopathy, optical coherence tomography across multiple diseases, and algorithms were three popular topics in the field. Most of the top journals and top publications on AI in retina focused mainly on engineering and computing rather than medicine. CONCLUSION These results help clarify the current status and future trends in research on AI in retina. This study may be useful for clinicians and scientists seeking a general overview of this field and a better understanding of its main actors (authors, journals, and countries). Future research should focus on more retinal diseases, multimodal imaging, and the performance of AI models in real-world clinical application. Collaboration among countries and institutions is common in current research on AI in retina.
Affiliation(s)
- Jingyuan Yang: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shan Wu: Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Rongping Dai: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Weihong Yu: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youxin Chen (corresponding author): Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
20
Xu F, Yu X, Gao Y, Ning X, Huang Z, Wei M, Zhai W, Zhang R, Wang S, Li J. Predicting OCT images of short-term response to anti-VEGF treatment for retinal vein occlusion using generative adversarial network. Front Bioeng Biotechnol 2022; 10:914964. [PMID: 36312556 PMCID: PMC9596772 DOI: 10.3389/fbioe.2022.914964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/23/2022] [Indexed: 11/26/2022] Open
Abstract
To generate and evaluate post-therapeutic optical coherence tomography (OCT) images based on pre-therapeutic images with a generative adversarial network (GAN) to predict the short-term response of patients with retinal vein occlusion (RVO) to anti-vascular endothelial growth factor (anti-VEGF) therapy. Real-world imaging data were retrospectively collected from 1 May 2017 to 1 June 2021. A total of 515 pairs of pre- and post-therapeutic OCT images of patients with RVO were included in the training set, while 68 pre- and post-therapeutic OCT images were included in the validation set. A pix2pixHD method was adopted to predict post-therapeutic OCT images in RVO patients after anti-VEGF therapy. The quality and similarity of the synthetic OCT images were evaluated by screening and evaluation experiments. We quantitatively and qualitatively assessed the prognostic accuracy of the synthetic post-therapeutic OCT images. The post-therapeutic OCT images generated by the pix2pixHD algorithm were comparable to the actual images in edema resorption response. Retinal specialists found most synthetic images (62/68) difficult to differentiate from the real ones. The mean absolute error (MAE) of the central macular thickness (CMT) between the synthetic and real OCT images was 26.33 ± 15.81 μm, and there was no statistically significant difference in CMT between the synthetic and the real images. In this retrospective study, the pix2pixHD algorithm objectively predicted the short-term response of each patient to anti-VEGF therapy based on OCT images with high accuracy, suggestive of its clinical value, especially for screening patients with relatively poor prognosis and potentially guiding clinical treatment. Importantly, the non-invasiveness, repeatability, and cost-effectiveness of our artificial intelligence-based prediction approach can improve compliance and follow-up management of this patient population.
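The reported lack of a significant CMT difference is the kind of result a paired test yields. A sketch of the paired t statistic in plain Python, on hypothetical paired readings; the abstract does not specify which test the authors actually used, and the p-value itself needs a t-distribution table or scipy.stats:

```python
import math

def paired_t_statistic(real, synthetic):
    """t statistic for a paired comparison of per-eye CMT (real vs synthetic).

    With ~67 degrees of freedom, |t| below roughly 2.0 is consistent with
    no significant difference at alpha = 0.05.
    """
    diffs = [r - s for r, s in zip(real, synthetic)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical paired CMT readings (um) for five eyes
real = [300.0, 310.0, 295.0, 320.0, 305.0]
synthetic = [295.0, 313.0, 293.0, 324.0, 304.0]
print(round(paired_t_statistic(real, synthetic), 3))  # -> 0.121
```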
Affiliation(s)
- Fabao Xu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Xuechen Yu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Yang Gao: School of Physics, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Xiaolin Ning: Hangzhou Innovation Institute, Beihang University, Hangzhou, China; Research Institute of Frontier Science, Beihang University, Beijing, China
- Ziyuan Huang: Research Institute of Frontier Science, Beihang University, Beijing, China
- Min Wei: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Weibin Zhai: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Rui Zhang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Shaopeng Wang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Jianqiao Li: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
21
Sohn A, Fine HF, Mantopoulos D. How Artificial Intelligence Aspires to Change the Diagnostic and Treatment Paradigm in Eyes With Age-Related Macular Degeneration. Ophthalmic Surg Lasers Imaging Retina 2022; 53:474-480. [PMID: 36107621 DOI: 10.3928/23258160-20220817-01] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
22
Yeh TC, Luo AC, Deng YS, Lee YH, Chen SJ, Chang PH, Lin CJ, Tai MC, Chou YB. Prediction of treatment outcome in neovascular age-related macular degeneration using a novel convolutional neural network. Sci Rep 2022; 12:5871. [PMID: 35393449 PMCID: PMC8989893 DOI: 10.1038/s41598-022-09642-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 03/21/2022] [Indexed: 12/16/2022] Open
Abstract
While prognosis and risk of progression are crucial in developing a precise therapeutic strategy for neovascular age-related macular degeneration (nAMD), limited predictive tools are available. We proposed a novel deep convolutional neural network that enables feature extraction through image and non-image data integration to seize imperative information and achieve highly accurate outcome prediction. The Heterogeneous Data Fusion Net (HDF-Net) was designed to predict the visual acuity (VA) outcome (improvement of ≥ 2 lines or not) at 12 months after anti-VEGF treatment. A set of pre-treatment optical coherence tomography (OCT) images and non-image demographic features was employed as input data, with the corresponding 12-month post-treatment VA as the target data, to train, validate, and test the HDF-Net. This newly designed HDF-Net demonstrated an AUC of 0.989 [95% confidence interval (CI) 0.970-0.999], accuracy of 0.936 (95% CI 0.889-0.964), sensitivity of 0.933 (95% CI 0.841-0.974), and specificity of 0.938 (95% CI 0.877-0.969). By simulating the clinical decision process with mixed pre-treatment information from raw OCT images and numeric data, HDF-Net demonstrated promising performance in predicting individualized treatment outcomes. The results highlight the potential of deep learning to simultaneously process a broad range of clinical data to weigh and leverage the complete information of the patient. This novel approach is an important step toward a real-world personalized therapeutic strategy for typical nAMD.
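The sensitivity and specificity reported for HDF-Net reduce to simple confusion-matrix ratios. A sketch with hypothetical labels, where the label semantics (1 = gain of at least two lines) are assumed from the abstract:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for a binary VA-improvement label
    (1 = gain of >= 2 lines at 12 months, 0 = otherwise)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels for six eyes: one missed improver, one false alarm
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(sens_spec(y_true, y_pred))  # sensitivity 2/3, specificity 2/3
```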
Affiliation(s)
- Tsai-Chu Yeh: Department of Ophthalmology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shih-Pai Road, Taipei, 11217, Taiwan; National Yang Ming Chiao Tung University, Taipei, Taiwan
- An-Chun Luo: Industrial Technology Research Institute, Hsinchu, Taiwan
- Yu-Shan Deng: Industrial Technology Research Institute, Hsinchu, Taiwan
- Yu-Hsien Lee: Industrial Technology Research Institute, Hsinchu, Taiwan
- Shih-Jen Chen: Department of Ophthalmology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shih-Pai Road, Taipei, 11217, Taiwan; National Yang Ming Chiao Tung University, Taipei, Taiwan
- Po-Han Chang: Industrial Technology Research Institute, Hsinchu, Taiwan
- Chun-Ju Lin: Industrial Technology Research Institute, Hsinchu, Taiwan
- Ming-Chi Tai: Industrial Technology Research Institute, Hsinchu, Taiwan; National Tsing-Hua University, Taipei, Taiwan
- Yu-Bai Chou: Department of Ophthalmology, Taipei Veterans General Hospital, No. 201, Sec. 2, Shih-Pai Road, Taipei, 11217, Taiwan; National Yang Ming Chiao Tung University, Taipei, Taiwan
23
Gigon A, Iskandar A, Eandi CM, Mantel I. Fluid dynamics between injections in incomplete anti-VEGF responders within neovascular age-related macular degeneration: a prospective observational study. Int J Retina Vitreous 2022; 8:19. [PMID: 35260186 PMCID: PMC8902718 DOI: 10.1186/s40942-022-00363-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Accepted: 02/14/2022] [Indexed: 12/01/2022] Open
Abstract
Background The purpose of the study was to investigate the short-term response profile after an intravitreal injection (IVI) of anti-vascular endothelial growth factor (VEGF) in patients with neovascular age-related macular degeneration (nAMD) and incomplete response to anti-VEGF. Methods In this monocentric prospective observational study, we recruited patients with incomplete response to anti-VEGF, defined as the presence of subretinal fluid (SRF) and/or intraretinal fluid (IRF) on optical coherence tomography (OCT) for at least 6 months despite monthly anti-VEGF treatment. Each patient underwent a complete ophthalmic exam and imaging study (including OCT, fluorescein angiography, indocyanine green angiography, and OCT-angiography) on the day of their scheduled monthly IVI. Intermediate visits (comprising an ophthalmic exam and OCT) were performed weekly thereafter, until week 4. Fluid metrics were quantified using an artificial intelligence-based algorithm at baseline and at each subsequent weekly visit. The main outcomes were the residual fluid volumes of SRF and IRF at each time point and their relative change after treatment. Particular interest was given to each patient's nadir point, which was used for association analysis with imaging parameters. Results A total of 28 eyes of 26 patients were included in the study. The maximal response was reached at 1.93 weeks on average. The relative fluid resolution at the nadir point was 66 ± 36.7%, with quartile limits at 49.1%, 83%, and 96.1%, respectively. Mean residual fluid volume was 64.9 ± 128.8 µl at the nadir point. Residual fluid was positively correlated with baseline SRF (r = 0.76, p < 0.0001) and larger pigment epithelium detachment (r = 0.65, p = 0.0001). Polypoidal choroidal vasculopathy was associated with larger residual fluid (p = 0.0013). Conclusions Incomplete anti-VEGF responders with nAMD showed significant mean fluid resolution between injections, typically after 2 weeks. However, complete resolution was the exception, and the amount of residual fluid varied greatly. Further studies are needed to understand the role of the unresponsive fluid.
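The association results above are Pearson correlations. A self-contained sketch with hypothetical fluid volumes, since the study's per-eye data are not given in the abstract:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, as used to relate residual fluid
    volume to baseline SRF and pigment epithelium detachment size."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical baseline SRF vs residual fluid volumes (ul) for five eyes
baseline_srf = [10.0, 40.0, 25.0, 60.0, 80.0]
residual = [5.0, 30.0, 12.0, 45.0, 70.0]
print(round(pearson_r(baseline_srf, residual), 3))  # -> 0.992
```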
24
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis (Lond) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs in medical imaging is increasing for image generation and translation, but they are not yet familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains, discussing important contributions and identifying potential future research directions. METHODS We surveyed studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to identify the appropriate GAN technique and maximize the potential of ophthalmology datasets for deep learning research.
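The generator/discriminator competition described above can be made concrete with the standard binary cross-entropy losses. A toy sketch on hand-picked discriminator outputs; no actual GAN is trained here:

```python
import math

def gan_losses(d_real, d_fake):
    """Binary cross-entropy losses for the two competing networks, given the
    discriminator's probability outputs on real and generated images.

    The discriminator minimises bce_d (score real near 1, fake near 0); the
    generator minimises the non-saturating bce_g (push D's score on fakes
    toward 1), which is what drives the adversarial training loop.
    """
    bce_d = -(sum(math.log(p) for p in d_real)
              + sum(math.log(1.0 - p) for p in d_fake)) / (len(d_real) + len(d_fake))
    bce_g = -sum(math.log(p) for p in d_fake) / len(d_fake)
    return bce_d, bce_g

# A discriminator that currently wins: confident on real, dismissive of fake,
# so the generator's loss is large and its gradients push it to improve
bce_d, bce_g = gan_losses(d_real=[0.9, 0.8], d_fake=[0.2, 0.1])
print(round(bce_d, 3), round(bce_g, 3))  # -> 0.164 1.956
```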
Affiliation(s)
- Aram You: School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Ik Hee Ryu: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
25
Xu F, Wan C, Zhao L, Liu S, Hong J, Xiang Y, You Q, Zhou L, Li Z, Gong S, Zhu Y, Chen C, Zhang L, Gong Y, Li L, Li C, Zhang X, Guo C, Lai K, Huang C, Ting D, Lin H, Jin C. Predicting Post-Therapeutic Visual Acuity and OCT Images in Patients With Central Serous Chorioretinopathy by Artificial Intelligence. Front Bioeng Biotechnol 2021; 9:649221. [PMID: 34888298 PMCID: PMC8650495 DOI: 10.3389/fbioe.2021.649221] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 10/28/2021] [Indexed: 12/02/2022] Open
Abstract
To predict visual acuity (VA) and post-therapeutic optical coherence tomography (OCT) images 1, 3, and 6 months after laser treatment in patients with central serous chorioretinopathy (CSC) by artificial intelligence (AI). Real-world clinical and imaging data were collected at Zhongshan Ophthalmic Center (ZOC) and Xiamen Eye Center (XEC). The data obtained from ZOC (416 eyes of 401 patients) were used as the training set; the data obtained from XEC (64 eyes of 60 patients) were used as the test set. Six different machine learning algorithms and a blending algorithm were used to predict VA, and a pix2pixHD method was adopted to predict post-therapeutic OCT images after laser treatment. The data for the VA predictions included clinical features obtained from electronic medical records (20 features) and measured features obtained from fundus fluorescein angiography, indocyanine green angiography, and OCT (145 features). The data for the OCT predictions included 480 pairs of pre- and post-therapeutic OCT images. The VA and OCT images predicted by AI were compared with the ground truth. In the VA predictions on the XEC dataset, the mean absolute errors (MAEs) were 0.074-0.098 logMAR (within four to five letters) and the root mean square errors were 0.096-0.127 logMAR (within five to seven letters) for the 1-, 3-, and 6-month predictions, respectively; in the post-therapeutic OCT predictions, only about 5.15% (5 of 97) of the synthetic OCT images could be accurately identified as synthetic. The MAEs of the central macular thickness of the synthetic OCT images were 30.15 ± 13.28 μm and 22.46 ± 9.71 μm for the 1- and 3-month predictions, respectively. This is the first study to apply AI to predict VA and post-therapeutic OCT images in patients with CSC. This work establishes a reliable method of predicting prognosis 6 months in advance; the application of AI has the potential to help reduce patient anxiety and serve as a reference for ophthalmologists when choosing optimal laser treatments.
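The logMAR-to-letters equivalences quoted above follow from the ETDRS convention of 0.1 logMAR per line and 5 letters per line, i.e. 0.02 logMAR per letter. A one-line check:

```python
def logmar_to_letters(logmar_error):
    """Convert a logMAR error to approximate ETDRS letters.

    One ETDRS line = 0.1 logMAR = 5 letters, so 0.02 logMAR per letter; this
    is how an MAE of 0.074-0.098 logMAR maps to 'within four to five letters'.
    """
    return logmar_error / 0.02

for mae in (0.074, 0.098):
    print(round(logmar_to_letters(mae), 1))  # -> 3.7, then 4.9
```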
Affiliation(s)
- Fabao Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Cheng Wan: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Shaopeng Liu: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Jiaming Hong: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yifan Xiang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Qijing You: College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lijun Zhou: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongwen Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Songjian Gong: Xiamen Eye Center, Affiliated with Xiamen University, Xiamen, China
- Yi Zhu: Department of Molecular and Cellular Pharmacology, University of Miami Miller School, Miami, FL, United States
- Chuan Chen: Department of Molecular and Cellular Pharmacology, University of Miami Miller School, Miami, FL, United States
- Li Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yajun Gong: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Longhui Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Cong Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Kunbei Lai: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chuangxin Huang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Daniel Ting: Singapore National Eye Center, Department of Ophthalmology, Singapore, Singapore
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center of Precision Medicine, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
26
Chen JS, Coyner AS, Chan RP, Hartnett ME, Moshfeghi DM, Owen LA, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deepfakes in Ophthalmology. Ophthalmol Sci 2021; 1:100079. [PMID: 36246951 PMCID: PMC9562356 DOI: 10.1016/j.xops.2021.100079] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 10/01/2021] [Accepted: 10/29/2021] [Indexed: 02/06/2023]
Abstract
Purpose Generative adversarial networks (GANs) are deep learning (DL) models that can create and modify realistic-appearing synthetic images, or deepfakes, from real images. The purpose of our study was to evaluate the ability of experts to discern synthesized retinal fundus images from real fundus images and to review the current uses and limitations of GANs in ophthalmology. Design Development and expert evaluation of a GAN and an informal review of the literature. Participants A total of 4282 image pairs of fundus images and retinal vessel maps acquired from a multicenter retinopathy of prematurity (ROP) screening program. Methods Pix2Pix HD, a high-resolution GAN, was first trained and validated on fundus and vessel map image pairs and subsequently used to generate 880 images from a held-out test set. Fifty synthetic images from this test set and 50 different real images were presented to 4 expert ROP ophthalmologists using a custom online system for evaluation of whether the images were real or synthetic. Literature was reviewed on PubMed and Google Scholar using combinations of the terms ophthalmology, GANs, generative adversarial networks, images, deepfakes, and synthetic. Ancestor search was performed to broaden the results. Main Outcome Measures Expert ability to discern real versus synthetic images was evaluated using percent accuracy. Statistical significance was evaluated using a Fisher exact test, with P values ≤ 0.05 thresholded for significance. Results The expert majority correctly identified 59% of images as being real or synthetic (P = 0.1). Experts 1 to 4 correctly identified 54%, 58%, 49%, and 61% of images (P = 0.505, 0.158, 1.000, and 0.043, respectively). These results suggest that the majority of experts could not discern between real and synthetic images. Additionally, we identified 20 implementations of GANs in the ophthalmology literature, with applications across a variety of imaging modalities and ophthalmic diseases. Conclusions Generative adversarial networks can create synthetic fundus images that are indiscernible from real fundus images by expert ROP ophthalmologists. Synthetic images may improve dataset augmentation for DL, may be used in trainee education, and may have implications for patient privacy.
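The expert-accuracy comparison uses a Fisher exact test. A self-contained sketch of how a two-sided version of that test can be computed from scratch; the cell counts below are generic sanity checks, not the paper's contingency tables, which the abstract does not give:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums, over all tables with the same row/column margins, the
    hypergeometric probabilities that do not exceed the observed table's.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # probability of x in the top-left cell, margins fixed
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# A perfectly balanced table is maximally non-significant (p = 1.0);
# a perfectly separated table is extreme (p = 2/184756)
print(fisher_exact_2x2(5, 5, 5, 5))
print(fisher_exact_2x2(10, 0, 0, 10))
```

In practice `scipy.stats.fisher_exact` is the usual route; the hand-rolled version is only meant to show what the test sums over.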
Affiliation(s)
- Jimmy S. Chen: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S. Coyner: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R.V. Paul Chan: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
- M. Elizabeth Hartnett: Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Darius M. Moshfeghi: Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Leah A. Owen: Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Jayashree Kalpathy-Cramer: Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts; Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston, Massachusetts
- Michael F. Chiang: National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon. Correspondence: J. Peter Campbell, MD, MPH, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239
27
Zhang Y, Ma X, Li M, Ji Z, Yuan S, Chen Q. LamNet: A Lesion Attention Maps-Guided Network for the Prediction of Choroidal Neovascularization Volume in SD-OCT Images. IEEE J Biomed Health Inform 2021; 26:1660-1671. [PMID: 34797769 DOI: 10.1109/jbhi.2021.3129462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Choroidal neovascularization (CNV) volume prediction has important clinical significance for predicting therapeutic effect and scheduling follow-up. In this paper, we propose a Lesion Attention Maps-Guided Network (LamNet) to automatically predict the CNV volume at the next follow-up visit after therapy from three-dimensional spectral-domain optical coherence tomography (SD-OCT) images. In particular, the backbone of LamNet is a 3D convolutional neural network (3D-CNN). To guide the network to focus on the local CNV lesion regions, we use CNV attention maps generated by an attention map generator to produce multi-scale local context features. Then, the multi-scale local and global feature maps are fused to achieve high-precision CNV volume prediction. In addition, we design a synergistic multi-task predictor in which a trend-consistent loss ensures that the change trend of the predicted CNV volume is consistent with the real change trend of the CNV volume. The experiments include a total of 541 SD-OCT cubes from 68 patients with two types of CNV, captured by two different SD-OCT devices. The results demonstrate that LamNet can provide reliable and accurate CNV volume prediction, which could further assist clinical diagnosis and the design of treatment options.
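The trend-consistent loss is described only at a high level. The sketch below is a hypothetical rendering of the idea (penalize predictions whose volume change from baseline opposes the real change), not the paper's actual formulation:

```python
def trend_penalty(pred_volumes, real_volumes, baseline_volumes):
    """Hypothetical trend-consistency penalty: the predicted CNV volume change
    from baseline should share the sign of the real change. Predictions with
    the wrong trend are penalised by their absolute error; correct-trend
    predictions contribute nothing. (Illustrative only; the paper's exact
    loss is not reproduced here.)
    """
    penalty = 0.0
    for pred, real, base in zip(pred_volumes, real_volumes, baseline_volumes):
        if (pred - base) * (real - base) < 0:  # predicted trend opposes reality
            penalty += abs(pred - real)
    return penalty / len(pred_volumes)

# One eye whose CNV really shrank (1.0 -> 0.6 mm^3) but was predicted to grow
print(trend_penalty([1.2], [0.6], [1.0]))
# A prediction with the correct trend incurs no penalty
print(trend_penalty([0.7], [0.6], [1.0]))
```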
28
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658] [DOI: 10.1042/cs20210207]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage, and only a handful have achieved clinical translation. This is due to a combination of factors, including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge requires a combined approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development; this is necessary to manage resources prudently and guide the development of DLSs.
29
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454] [PMCID: PMC10276657] [DOI: 10.1097/icu.0000000000000794]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and the low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate GAN outputs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with as-yet-unclear relevance for ophthalmology.
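One quantitative evaluation that does recur in this literature (e.g. the Frechet Inception Distance used in the study in the header above) compares Gaussian statistics of real versus generated image features. A minimal sketch, assuming features have already been extracted (in practice FID uses Inception-v3 activations; the function name and array shapes here are illustrative):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between two feature sets modelled as Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)        # matrix square root
    if np.iscomplexobj(covmean):           # discard tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
print(frechet_distance(a, a))  # identical sets -> distance ~ 0
```

Lower values indicate generated features statistically closer to real ones, which is why the header study reports both FID and the related Kernel Inception Distance (KID).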
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- Pearse A. Keane
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore, Singapore National Eye Centre, Singapore
30
Zhao X, Zhang X, Lv B, Meng L, Zhang C, Liu Y, Lv C, Xie G, Chen Y. Optical coherence tomography-based short-term effect prediction of anti-vascular endothelial growth factor treatment in neovascular age-related macular degeneration using sensitive structure guided network. Graefes Arch Clin Exp Ophthalmol 2021; 259:3261-3269. [PMID: 34097114] [DOI: 10.1007/s00417-021-05247-4]
Abstract
PURPOSE To predict short-term anti-vascular endothelial growth factor (anti-VEGF) treatment responders/non-responders among patients with neovascular age-related macular degeneration (nAMD), based on optical coherence tomography (OCT) images. METHODS A total of 4944 OCT scans from 206 patients with nAMD were used to develop and evaluate a responder/non-responder prediction method for the short-term effect of anti-VEGF therapy. A deep learning architecture named sensitive structure guided network (SSG-Net) was proposed to make the prediction, leveraging a sensitive structure guidance module trained from pre- and post-treatment images. To verify its clinical efficiency, 2 other deep learning methods and 4 experienced ophthalmologists were included for comparison. RESULTS On the testing dataset, SSG-Net predicted the response with an accuracy of 84.6% and an area under the receiver operating characteristic curve (AUC) of 0.83, with a sensitivity of 0.692 and a specificity of 1. In contrast, the 2 compared deep learning methods achieved accuracies of 65.4% (sensitivity 0.461, specificity 0.846) and 73.1% (sensitivity 0.692, specificity 0.846), respectively. The prediction accuracy of the 4 experienced ophthalmologists ranged from 53.8% to 76.9%, with sensitivities of 0.538 to 0.923 and specificities of 0.385 to 0.846. CONCLUSION The proposed SSG-Net effectively predicts the short-term efficacy of anti-VEGF treatment for nAMD patients. This technique could potentially help clinicians explain the necessity of anti-VEGF treatment to potential responders and avoid unnecessary treatment for non-responders.
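The accuracy/sensitivity/specificity comparisons above follow the standard binary-classification definitions, which can be stated concretely (a self-contained sketch; the function name and toy data are illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# toy example: 3 responders (1) and 5 non-responders (0)
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 0, 0, 0, 0]
m = binary_metrics(truth, pred)
print(m)  # accuracy 0.875, sensitivity 2/3, specificity 1.0
```

A specificity of 1 with a sensitivity of 0.692, as SSG-Net reports, means every non-responder was correctly flagged while some responders were missed.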
Affiliation(s)
- Xinyu Zhao
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Bin Lv
- Ping An Healthcare Technology, Beijing, China
| | - Lihui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
| | | | - Yang Liu
- Ping An Healthcare Technology, Beijing, China
| | | | - Guotong Xie
- Ping An Healthcare Technology, Beijing, China; Ping An Health and Technology Company Limited, Shanghai, China; Ping An International Smart City Technology Company Limited, Shenzhen, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
31
Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep 2020; 10:21580. [PMID: 33299065] [PMCID: PMC7725777] [DOI: 10.1038/s41598-020-78696-2]
Abstract
Fluorescein angiography (FA) is a procedure used to image the vascular structure of the retina that requires the injection of an exogenous dye with potential adverse side effects. Currently, there is only one alternative non-invasive system capable of visualizing retinal vasculature, based on optical coherence tomography (OCT) technology and called OCT angiography (OCTA). However, due to its cost and limited field of view, OCTA is not widely used. Retinal fundus photography is a safe imaging technique used for capturing the overall structure of the retina. To visualize retinal vasculature without the need for FA in a cost-effective, non-invasive and accurate manner, we propose a deep learning conditional generative adversarial network (GAN) capable of producing FA images from fundus photographs. The proposed GAN produces anatomically accurate angiograms with similar fidelity to FA images, and significantly outperforms two other state-of-the-art generative algorithms ([Formula: see text] and [Formula: see text]). Furthermore, evaluations by experts show that our proposed model produces FA images of such high quality that they are indistinguishable from real angiograms. As the first application of artificial intelligence and deep learning to medical image translation, our model employs a theoretical framework capable of establishing a shared feature space between two domains (i.e. funduscopy and fluorescein angiography), providing an unrivaled way to translate images from one domain to the other.
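Paired fundus-to-FA translation of this kind is typically trained with a conditional-GAN generator objective combining an adversarial term with an L1 reconstruction term against the paired target (the pix2pix formulation). The following is a sketch of that generic objective, not the authors' exact architecture or loss; the function name, toy shapes, and lambda value are assumptions:

```python
import numpy as np

def cgan_generator_loss(disc_scores, fake_imgs, real_imgs, lam=100.0):
    """Pix2pix-style conditional-GAN generator objective (sketch):
    non-saturating adversarial term plus lambda-weighted L1
    reconstruction against the paired target image."""
    eps = 1e-8
    adv = -np.mean(np.log(disc_scores + eps))    # reward fooling the discriminator
    l1 = np.mean(np.abs(fake_imgs - real_imgs))  # stay close to the paired target
    return adv + lam * l1

scores = np.array([0.9, 0.8, 0.95])              # discriminator outputs on fakes
fake = np.zeros((3, 32, 32))                     # toy generated angiograms
real = np.zeros((3, 32, 32))                     # toy paired real angiograms
loss = cgan_generator_loss(scores, fake, real)
print(round(loss, 3))
```

The L1 term is what ties the generated angiogram to its specific fundus-photograph pair, rather than merely producing a plausible angiogram.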
Affiliation(s)
- Alireza Tavakkoli
- Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, 89557, USA.
- Sharif Amit Kamran
- Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, 89557, USA