1
Christopher M, Hallaj S, Jiravarnsirikul A, Baxter SL, Zangwill LM. Novel Technologies in Artificial Intelligence and Telemedicine for Glaucoma Screening. J Glaucoma 2024; 33:S26-S32. [PMID: 38506792] [DOI: 10.1097/ijg.0000000000002367]
Abstract
PURPOSE To provide an overview of novel technologies in telemedicine and artificial intelligence (AI) approaches for cost-effective glaucoma screening. METHODS/RESULTS A narrative review was performed by summarizing research results, recent developments in glaucoma detection and care, and considerations related to telemedicine and AI in glaucoma screening. Telemedicine and AI approaches provide the opportunity for novel glaucoma screening programs in primary care, optometry, portable, and home-based settings. These approaches offer several advantages for glaucoma screening, including increasing access to care, lowering costs, identifying patients in need of urgent treatment, and enabling timely diagnosis and early intervention. However, challenges remain in implementing these systems, including integration into existing clinical workflows, ensuring equity for patients, and meeting ethical and regulatory requirements. Leveraging recent work towards standardized data acquisition, as well as tools and techniques developed for automated diabetic retinopathy screening programs, may provide a model for a cost-effective approach to glaucoma screening. CONCLUSION Novel technologies and advances in telemedicine and AI-based approaches to glaucoma detection show promise for improving our ability to detect moderate and advanced glaucoma in primary care settings and to target individuals at high risk for the disease.
Affiliation(s)
- Mark Christopher
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Shahin Hallaj
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Anuwat Jiravarnsirikul
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Sally L Baxter
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Linda M Zangwill
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
2
Feng X, Xu K, Luo MJ, Chen H, Yang Y, He Q, Song C, Li R, Wu Y, Wang H, Tham YC, Ting DSW, Lin H, Wong TY, Lam DSC. Latest developments of generative artificial intelligence and applications in ophthalmology. Asia Pac J Ophthalmol (Phila) 2024; 13:100090. [PMID: 39128549] [DOI: 10.1016/j.apjo.2024.100090]
Abstract
The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization, and innovation in clinical practice and medical research by processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into the clinical workflows and scientific research of ophthalmology. It outlines the need for a standard framework for comprehensive assessment, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in ophthalmic clinical service and research, including data privacy, data bias, adaptation friction, overdependence, and job replacement, and summarizes a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care and improving operational efficiency in ophthalmic clinical service and research, and advocates for a balanced approach to its adoption.
Affiliation(s)
- Xiaoru Feng
- School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China; Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China
- Kezheng Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ming-Jie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haichao Chen
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yangfan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qi He
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenxin Song
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Ruiyao Li
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- You Wu
- Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China; School of Basic Medical Sciences, Tsinghua Medicine, Tsinghua University, Beijing, China; Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA.
- Haibo Wang
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China.
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Tien Yin Wong
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Dennis Shun-Chiu Lam
- The International Eye Research Institute, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER International Eye Care Group, Hong Kong, Hong Kong, China
3
Ma D, Deng W, Khera Z, Sajitha TA, Wang X, Wollstein G, Schuman JS, Lee S, Shi H, Ju MJ, Matsubara J, Beg MF, Sarunic M, Sappington RM, Chan KC. Early inner plexiform layer thinning and retinal nerve fiber layer thickening in excitotoxic retinal injury using deep learning-assisted optical coherence tomography. Acta Neuropathol Commun 2024; 12:19. [PMID: 38303097] [PMCID: PMC10835918] [DOI: 10.1186/s40478-024-01732-z]
Abstract
Excitotoxicity from the impairment of glutamate uptake constitutes an important mechanism in neurodegenerative diseases such as Alzheimer's, multiple sclerosis, and Parkinson's disease. Within the eye, excitotoxicity is thought to play a critical role in retinal ganglion cell death in glaucoma, diabetic retinopathy, retinal ischemia, and optic nerve injury, yet how excitotoxic injury impacts different retinal layers is not well understood. Here, we investigated the longitudinal effects of N-methyl-D-aspartate (NMDA)-induced excitotoxic retinal injury in a rat model using deep learning-assisted retinal layer thickness estimation. Before and after unilateral intravitreal NMDA injection in nine adult Long Evans rats, spectral-domain optical coherence tomography (OCT) was used to acquire volumetric retinal images in both eyes over 4 weeks. Ten retinal layers were automatically segmented from the OCT data using our deep learning-based algorithm. Retinal degeneration was evaluated using layer-specific retinal thickness changes at each time point (before, and at 3, 7, and 28 days after NMDA injection). Within the inner retina, our OCT results showed that retinal thinning occurred first in the inner plexiform layer at 3 days after NMDA injection, followed by the inner nuclear layer at 7 days post-injury. In contrast, the retinal nerve fiber layer exhibited an initial thickening 3 days after NMDA injection, followed by normalization and thinning up to 4 weeks post-injury. Our results demonstrated the pathological cascades of NMDA-induced neurotoxicity across different layers of the retina. The early inner plexiform layer thinning suggests early dendritic shrinkage, whereas the initial retinal nerve fiber layer thickening before subsequent normalization and thinning indicates early inflammation before axonal loss and cell death. 
These findings implicate the inner plexiform layer as an early imaging biomarker of excitotoxic retinal degeneration, whereas caution is warranted when interpreting the ganglion cell complex combining retinal nerve fiber layer, ganglion cell layer, and inner plexiform layer thicknesses in conventional OCT measures. Deep learning-assisted retinal layer segmentation and longitudinal OCT monitoring can help evaluate the different phases of retinal layer damage upon excitotoxicity.
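The layer-specific thickness readouts described in this abstract reduce to subtracting segmented boundary positions and scaling by the axial pixel size. The sketch below illustrates that computation only; the function name, boundary format, and axial resolution are assumptions, not the authors' pipeline:

```python
import numpy as np

def layer_thickness_um(top_boundary, bottom_boundary, axial_res_um=3.9):
    """Mean thickness of one retinal layer from two segmented boundaries.

    top_boundary, bottom_boundary: per-A-scan pixel row indices produced by
    a layer-segmentation model (bottom >= top). axial_res_um is the axial
    pixel size of the OCT system (hypothetical value here).
    """
    thickness_px = np.asarray(bottom_boundary) - np.asarray(top_boundary)
    return float(thickness_px.mean() * axial_res_um)

# Toy example: inner plexiform layer boundaries over 5 A-scans.
ipl_top = [10, 11, 10, 12, 11]
ipl_bottom = [18, 18, 17, 19, 18]
print(layer_thickness_um(ipl_top, ipl_bottom))  # mean 7.2 px, about 28.1 um
```

Tracking this quantity per layer and per time point gives the thinning/thickening trajectories reported for the inner plexiform and retinal nerve fiber layers.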
Affiliation(s)
- Da Ma
- Wake Forest University School of Medicine, 1 Medical Center Blvd, Winston-Salem, NC, 27157, USA.
- Wake Forest University Health Sciences, Winston-Salem, NC, USA.
- Translational Eye and Vision Research Center, Wake Forest University School of Medicine, Winston-Salem, NC, USA.
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada.
- Wenyu Deng
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Ophthalmology, SUNY Downstate Medical Center, Brooklyn, NY, USA
- Zain Khera
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Thajunnisa A Sajitha
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Xinlei Wang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Gadi Wollstein
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Joel S Schuman
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Wills Eye Hospital, Philadelphia, PA, USA
- Department of Biomedical Engineering, Drexel University, Philadelphia, PA, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Sieun Lee
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Haolun Shi
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, BC, Canada
- Myeong Jin Ju
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Joanne Matsubara
- Department of Ophthalmology and Visual Sciences, The University of British Columbia, Vancouver, BC, Canada
- Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Marinko Sarunic
- Institute of Ophthalmology, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Rebecca M Sappington
- Wake Forest University School of Medicine, 1 Medical Center Blvd, Winston-Salem, NC, 27157, USA
- Wake Forest University Health Sciences, Winston-Salem, NC, USA
- Translational Eye and Vision Research Center, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Kevin C Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA.
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA.
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA.
4
Liu K, Zhang J. Cost-efficient and glaucoma-specifical model by exploiting normal OCT images with knowledge transfer learning. Biomed Opt Express 2023; 14:6151-6171. [PMID: 38420316] [PMCID: PMC10898582] [DOI: 10.1364/boe.500917]
Abstract
Monitoring the progression of glaucoma is crucial for preventing further vision loss. However, deep learning-based models emphasize early glaucoma detection, leaving a significant performance gap for glaucoma-confirmed subjects. Moreover, developing a fully-supervised model suffers from insufficient annotated glaucoma datasets. Currently, plentiful, low-cost normal OCT images with pixel-level annotations can serve as valuable resources, but effectively transferring shared knowledge from normal datasets is a challenge. To alleviate this issue, we propose a knowledge transfer learning model that exploits shared knowledge from low-cost, plentiful annotated normal OCT images by explicitly establishing the relationship between the normal domain and the glaucoma domain. Specifically, we directly introduce glaucoma domain information into the training stage through a three-step adversarial-based strategy. Additionally, our model exploits shared features at different levels, in both the output space and the encoding space, with a suitable output size via a multi-level strategy. We have collected and collated a dataset called the TongRen OCT glaucoma dataset, including pixel-level annotated glaucoma OCT images and diagnostic information. Results on this dataset demonstrate that our model outperforms the unsupervised model and the mixed training strategy, with increases of 5.28% and 5.77% in mIoU, respectively. Moreover, our model narrows the gap to the fully-supervised model to only 1.01% in mIoU. Therefore, it can serve as a valuable tool for extracting glaucoma-related features, facilitating the tracking of glaucoma progression.
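The mIoU comparisons quoted in this abstract use the standard per-class intersection-over-union average for segmentation masks. A minimal sketch of that metric follows; the toy masks and two-class setup are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union across classes for two label maps
    with integer class labels 0..num_classes-1."""
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 1, 1]])
target = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(pred, target, num_classes=2))  # (1/3 + 3/5) / 2
```

Percentage-point differences in this metric between models are what the 5.28%, 5.77%, and 1.01% figures refer to.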
Affiliation(s)
- Kai Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Department of Computer Science, City University of Hong Kong, Hong Kong, 98121, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
- Hefei Innovation Research Institute, Beihang University, Hefei, 230012, China
5
Zhang T, Wei Q, Li Z, Meng W, Zhang M, Zhang Z. Segmentation of paracentral acute middle maculopathy lesions in spectral-domain optical coherence tomography images through weakly supervised deep convolutional networks. Comput Methods Programs Biomed 2023; 240:107632. [PMID: 37329802] [DOI: 10.1016/j.cmpb.2023.107632]
Abstract
BACKGROUND AND OBJECTIVES Spectral-domain optical coherence tomography (SD-OCT) is a valuable tool for non-invasive imaging of the retina, allowing the discovery and visualization of localized lesions whose presence is associated with eye diseases. The present study introduces X-Net, a weakly supervised deep-learning framework for automated segmentation of paracentral acute middle maculopathy (PAMM) lesions in retinal SD-OCT images. Despite recent advances in automatic methods for the clinical analysis of OCT scans, studies focusing on the automated detection of small retinal focal lesions remain scarce. Additionally, most existing solutions depend on supervised learning, which can be time-consuming and require extensive image labeling; X-Net offers a solution to these challenges. As far as we can determine, no prior study has addressed the segmentation of PAMM lesions in SD-OCT images. METHODS This study leverages 133 SD-OCT retinal images, each containing instances of paracentral acute middle maculopathy lesions. A team of eye experts annotated the PAMM lesions in these images using bounding boxes. These labeled data were then used to train a U-Net that performs pre-segmentation, producing region labels of pixel-level accuracy. To attain a highly accurate final segmentation, we introduced X-Net, a novel neural network made up of a master and a slave U-Net. During training, it takes both the expert-annotated images and the pixel-level pre-segmented images and employs sophisticated strategies to ensure the highest segmentation accuracy. RESULTS The proposed method was rigorously evaluated on clinical retinal images excluded from training and achieved an accuracy of 99%, with a high level of similarity between the automatic segmentation and expert annotation, as demonstrated by a mean Intersection-over-Union of 0.8. Alternative methods were tested on the same data. Single-stage neural networks proved insufficient for achieving satisfactory results, confirming that more advanced solutions, such as the proposed method, are necessary. We also found that a variant using Attention U-Net both for the pre-segmentation network and for the X-Net arms in the final segmentation performs comparably to the proposed method, suggesting that the approach remains viable even when implemented with variants of the classic U-Net. CONCLUSIONS The proposed method exhibits reasonably high performance, validated through quantitative and qualitative evaluations. Medical eye specialists have also verified its validity and accuracy. Thus, it could be a viable tool in the clinical assessment of the retina. Additionally, the demonstrated approach for annotating the training set has proven effective in reducing the expert workload.
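The annotation strategy described here starts from expert bounding boxes that are later refined into pixel-level labels. The first step, rasterizing boxes into a coarse mask a pre-segmentation network can refine, can be sketched as follows; the function name, box convention, and toy values are hypothetical and may differ from the paper's pipeline:

```python
import numpy as np

def boxes_to_mask(shape, boxes):
    """Rasterize expert bounding boxes into a coarse weak-label mask
    (1 = possible lesion, 0 = background). Boxes are given as
    (row0, col0, row1, col1) with end-exclusive bounds."""
    mask = np.zeros(shape, dtype=np.uint8)
    for r0, c0, r1, c1 in boxes:
        mask[r0:r1, c0:c1] = 1
    return mask

# Toy B-scan of 6x8 pixels with two annotated lesion boxes.
mask = boxes_to_mask((6, 8), [(1, 1, 3, 4), (4, 5, 6, 8)])
print(mask.sum())  # 2*3 + 2*3 = 12 pixels marked as candidate lesion
```

Such coarse masks over-label lesion extent, which is why a pre-segmentation U-Net is needed to sharpen them into pixel-accurate region labels.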
Affiliation(s)
- Tianqiao Zhang
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Qiaoqian Wei
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Zhenzhen Li
- School of Information Engineering, Nanchang Institute of Technology, Nanchang, China
- Wenjing Meng
- Department of Library Services, Guilin University of Electronic Technology, Guilin, China
- Mengjiao Zhang
- School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin, China
- Zhengwei Zhang
- Department of Ophthalmology, Jiangnan University Medical Center, Wuxi, China; Department of Ophthalmology, Wuxi No.2 People's Hospital, Affiliated Wuxi Clinical College of Nantong University, Wuxi, China.
6
Nawaz M, Uvaliyev A, Bibi K, Wei H, Abaxi SMD, Masood A, Shi P, Ho HP, Yuan W. Unraveling the complexity of Optical Coherence Tomography image segmentation using machine and deep learning techniques: A review. Comput Med Imaging Graph 2023; 108:102269. [PMID: 37487362] [DOI: 10.1016/j.compmedimag.2023.102269]
Abstract
Optical Coherence Tomography (OCT) is an emerging technology that provides three-dimensional images of the microanatomy of biological tissue in-vivo and at micrometer-scale resolution. OCT imaging has been widely used to diagnose and manage various medical diseases, such as macular degeneration, glaucoma, and coronary artery disease. Despite its wide range of applications, the segmentation of OCT images remains difficult due to the complexity of tissue structures and the presence of artifacts. In recent years, different approaches have been used for OCT image segmentation, such as intensity-based, region-based, and deep learning-based methods. This paper reviews the major advances in state-of-the-art OCT image segmentation techniques. It provides an overview of the advantages and limitations of each method and presents the most relevant research works related to OCT image segmentation. It also provides an overview of existing datasets and discusses potential clinical applications. Additionally, this review gives an in-depth analysis of machine learning and deep learning approaches for OCT image segmentation. It outlines challenges and opportunities for further research in this field.
Affiliation(s)
- Mehmood Nawaz
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Adilet Uvaliyev
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Khadija Bibi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Hao Wei
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Sai Mu Dalike Abaxi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Anum Masood
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Peilun Shi
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Ho-Pui Ho
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Wu Yuan
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China.
7
Ma D, Pasquale LR, Girard MJA, Leung CKS, Jia Y, Sarunic MV, Sappington RM, Chan KC. Reverse translation of artificial intelligence in glaucoma: Connecting basic science with clinical applications. Front Ophthalmol 2023; 2:1057896. [PMID: 36866233] [PMCID: PMC9976697] [DOI: 10.3389/fopht.2022.1057896]
Abstract
Artificial intelligence (AI) has been applied in biomedical research in diverse areas, from bedside clinical studies to benchtop basic scientific research. For ophthalmic research, and glaucoma in particular, AI applications are growing rapidly toward potential clinical translation, given the vast data available and the introduction of federated learning. Conversely, AI for basic science remains limited despite its power in providing mechanistic insight. In this perspective, we discuss recent progress, opportunities, and challenges in the application of AI in glaucoma for scientific discoveries. Specifically, we focus on the research paradigm of reverse translation, in which clinical data are first used for patient-centered hypothesis generation, followed by transitioning into basic science studies for hypothesis validation. We elaborate on several distinctive areas of research opportunity for reverse translation of AI in glaucoma, including disease risk and progression prediction, pathology characterization, and sub-phenotype identification. We conclude with current challenges and future opportunities for AI research in basic science for glaucoma, such as inter-species diversity, AI model generalizability and explainability, and AI applications using advanced ocular imaging and genomic data.
Affiliation(s)
- Da Ma
- School of Medicine, Wake Forest University, Winston-Salem, NC, United States
- Atrium Health Wake Forest Baptist Medical Center, Winston-Salem, NC, United States
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Louis R. Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Michaël J. A. Girard
- Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, United States
- Marinko V. Sarunic
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Institute of Ophthalmology, University College London, London, United Kingdom
- Rebecca M. Sappington
- School of Medicine, Wake Forest University, Winston-Salem, NC, United States
- Atrium Health Wake Forest Baptist Medical Center, Winston-Salem, NC, United States
- Kevin C. Chan
- Departments of Ophthalmology and Radiology, Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, United States
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY, United States
8
Heikka T, Jansonius NM. Influence of signal-to-noise ratio, glaucoma stage and segmentation algorithm on OCT usability for quantifying layer thicknesses in the peripapillary retina. Acta Ophthalmol 2022; 101:251-260. [PMID: 36331147] [DOI: 10.1111/aos.15279]
Abstract
PURPOSE OCT can be used for glaucoma assessment, but its usefulness may depend on image quality, disease stage and segmentation algorithm. We aimed to determine how layer thicknesses as assessed with different algorithms depend on image quality and disease stage, how reproducible the algorithms are, and if the algorithms (dis)agree. METHODS Optic disc OCT data (Canon OCT-HS100) from 20 healthy subjects and 28 early, 29 moderate, and 23 severe glaucoma patients were assessed with four different algorithms (CANON, IOWA, FWHM, DOCTRAP). We measured retinal nerve fibre layer thickness (RNFLT) and total retinal thickness (TRT) along the 1.7-mm-radius OCT measurement circle centred at the optic disc. In healthy subjects, image quality was degraded with neutral density filters (0.3-0.9 optical density [OD]); three scans were made to assess repeatability. Results were analysed with ANOVA with Bonferroni corrected t-tests for post hoc analysis and with intraclass correlation coefficient (ICC) analysis. RESULTS For all algorithms, RNFLT was more sensitive to image quality than TRT. Both RNFLT and TRT showed differences between healthy and glaucoma (all algorithms p < 0.001 for both RNFLT and TRT) and between early and moderate glaucoma (RNFLT: p = 0.001 to p = 0.09; TRT: p < 0.001 to p = 0.03); neither was able to discriminate between moderate and severe glaucoma (p = 0.08 to p = 1.0). Generally, repeatability was excellent (ICC >0.75); agreement between algorithms varied from moderate to excellent. CONCLUSIONS OCT becomes less informative with glaucoma progression, irrespective of the algorithm. For good-quality scans, RNFLT and TRT perform similarly; TRT may be advantageous with poor image quality.
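Repeatability across the three scans in this study is summarized with intraclass correlation coefficients (ICC > 0.75 reported as excellent). The sketch below computes a one-way random-effects ICC(1,1) on toy thickness data; the abstract does not state which ICC form was used, so this is an illustrative assumption:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for repeated measurements.

    ratings: (n_subjects, k_repeats) array, e.g. repeated layer-thickness
    measurements per eye. Returns a value in (-1, 1]; values near 1 mean
    most variance is between subjects, i.e. high repeatability.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)         # between-subject mean square
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Three subjects, three repeated RNFLT scans each (toy values, um).
scans = [[100, 101, 99], [80, 82, 81], [120, 119, 121]]
print(round(icc_oneway(scans), 3))
```

With within-scan variation far smaller than between-subject variation, the ICC lands close to 1, the "excellent" regime reported in the abstract.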
Affiliation(s)
- Tuomas Heikka
- Department of Ophthalmology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Nomdo M. Jansonius
- Department of Ophthalmology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences (Research School of Behavioural and Cognitive Neurosciences), University of Groningen, Groningen, The Netherlands
| |
9
Thompson AC, Falconi A, Sappington RM. Deep learning and optical coherence tomography in glaucoma: Bridging the diagnostic gap on structural imaging. FRONTIERS IN OPHTHALMOLOGY 2022; 2:937205. [PMID: 38983522 PMCID: PMC11182271 DOI: 10.3389/fopht.2022.937205] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 08/22/2022] [Indexed: 07/11/2024]
Abstract
Glaucoma is a leading cause of progressive blindness and visual impairment worldwide. Microstructural evidence of glaucomatous damage to the optic nerve head and associated tissues can be visualized using optical coherence tomography (OCT). In recent years, development of novel deep learning (DL) algorithms has led to innovative advances and improvements in automated detection of glaucomatous damage and progression on OCT imaging. DL algorithms have also been trained utilizing OCT data to improve detection of glaucomatous damage on fundus photography, thus improving the potential utility of color photos which can be more easily collected in a wider range of clinical and screening settings. This review highlights ten years of contributions to glaucoma detection through advances in deep learning models trained utilizing OCT structural data and posits future directions for translation of these discoveries into the field of aging and the basic sciences.
Affiliation(s)
- Atalie C. Thompson
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Department of Internal Medicine, Gerontology, and Geriatric Medicine, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Aurelio Falconi
- Wake Forest School of Medicine, Winston-Salem, NC, United States
- Rebecca M. Sappington
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston-Salem, NC, United States
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, United States
10
Zhang Z, Cheng N, Liu Y, Song J, Liu X, Zhang S, Zhang G. Prediction of corneal astigmatism based on corneal tomography after femtosecond laser arcuate keratotomy using a pix2pix conditional generative adversarial network. Front Public Health 2022; 10:1012929. [PMID: 36187623 PMCID: PMC9523441 DOI: 10.3389/fpubh.2022.1012929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Accepted: 08/29/2022] [Indexed: 01/27/2023] Open
Abstract
Purpose This study aimed to develop a deep learning model to generate a postoperative corneal axial curvature map of femtosecond laser arcuate keratotomy (FLAK) based on corneal tomography using a pix2pix conditional generative adversarial network (pix2pix cGAN) for surgical planning. Methods A total of 451 eyes of 318 nonconsecutive patients were subjected to FLAK for corneal astigmatism correction during cataract surgery. Paired or single anterior penetrating FLAKs were performed at an 8.0-mm optical zone with a depth of 90% using a femtosecond laser (LenSx laser, Alcon Laboratories, Inc.). Corneal tomography images were acquired with an Oculus Pentacam HR (Optikgeräte GmbH, Wetzlar, Germany) before and 3 months after surgery. The raw data used for analysis consisted of the anterior corneal curvature over a range of ±3.5 mm around the corneal apex in 0.1-mm steps, on which the synthesized pseudo-color corneal curvature map was based. The deep learning model used was a pix2pix conditional generative adversarial network. The prediction accuracy of synthetic postoperative corneal astigmatism in zones of different diameters centered on the corneal apex was assessed using vector analysis. The synthetic postoperative corneal axial curvature maps were compared with the real postoperative maps using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Results A total of 386 pairs of preoperative and postoperative corneal tomography data were included in the training set, and 65 preoperative data sets were retrospectively included in the test set. The correlation coefficient between synthetic and real postoperative astigmatism (difference vector) in the 3-mm zone was 0.89, and that for surgically induced astigmatism (SIA) was 0.93. The mean absolute errors of SIA between real and synthetic postoperative corneal axial curvature maps in the 1-, 3-, and 5-mm zones were 0.20 ± 0.25, 0.12 ± 0.17, and 0.09 ± 0.13 diopters, respectively. The average SSIM and PSNR in the 3-mm zone were 0.86 ± 0.04 and 18.24 ± 5.78, respectively. Conclusion Our results show that pix2pix cGAN can synthesize plausible postoperative corneal tomography for FLAK, demonstrating the possibility of using a GAN to predict corneal tomography and the potential of applying artificial intelligence to surgical planning models.
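The PSNR metric used above to compare synthetic and real curvature maps is derived directly from the mean squared error between the two images. A minimal sketch (not the authors' pipeline; the 8-bit peak value of 255 is an assumption, and SSIM is omitted because it requires local windowed statistics):

```python
import math

def psnr(real, synth, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized 2-D images.

    real, synth: 2-D lists of pixel intensities; max_val: peak intensity.
    """
    n = 0
    se = 0.0
    for row_r, row_s in zip(real, synth):
        for r, s in zip(row_r, row_s):
            se += (r - s) ** 2
            n += 1
    mse = se / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values mean closer agreement; the 18.24 dB average reported above indicates a moderate pixel-level match between synthetic and real maps.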
Affiliation(s)
- Zhe Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China; Department of Cataract, Shanxi Eye Hospital, Taiyuan, China; First Hospital of Shanxi Medical University, Taiyuan, China
- Nan Cheng
- College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, China
- Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- Junyang Song (corresponding author)
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- Xinhua Liu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Suhua Zhang
- Department of Cataract, Shanxi Eye Hospital, Taiyuan, China; Taiyuan Central Hospital of Shanxi Medical University, Taiyuan, China
- Guanghua Zhang
- Department of Intelligence and Automation, Taiyuan University, Taiyuan, China; Graphics and Imaging Laboratory, University of Girona, Girona, Spain
11
Sreejith Kumar AJ, Chong RS, Crowston JG, Chua J, Bujor I, Husain R, Vithana EN, Girard MJA, Ting DSW, Cheng CY, Aung T, Popa-Cherecheanu A, Schmetterer L, Wong D. Evaluation of Generative Adversarial Networks for High-Resolution Synthetic Image Generation of Circumpapillary Optical Coherence Tomography Images for Glaucoma. JAMA Ophthalmol 2022; 140:974-981. [PMID: 36048435 PMCID: PMC9437828 DOI: 10.1001/jamaophthalmol.2022.3375] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Importance Deep learning (DL) networks require large data sets for training, which can be challenging to collect clinically. Generative models could be used to generate large numbers of synthetic optical coherence tomography (OCT) images to train such DL networks for glaucoma detection. Objective To assess whether generative models can synthesize circumpapillary optic nerve head OCT images of normal and glaucomatous eyes and to determine the usability of synthetic images for training DL models for glaucoma detection. Design, Setting, and Participants Progressively growing generative adversarial network models were trained to generate circumpapillary OCT scans. Image gradeability and authenticity were evaluated on a clinical set of 100 real and 100 synthetic images by 2 clinical experts. DL networks for glaucoma detection were trained with real or synthetic images and evaluated on independent internal and external test data sets of 140 and 300 real images, respectively. Main Outcomes and Measures Evaluations of the clinical set by the two experts were compared. Glaucoma detection performance of the DL networks was assessed using area under the curve (AUC) analysis. Class activation maps provided visualizations of the regions contributing to the respective classifications. Results A total of 990 normal and 862 glaucomatous eyes were analyzed. Evaluations of the clinical set were similar for gradeability (expert 1: 92.0%; expert 2: 93.0%) and authenticity (expert 1: 51.8%; expert 2: 51.3%). The best-performing DL network trained on synthetic images had AUC scores of 0.97 (95% CI, 0.95-0.99) on the internal test data set and 0.90 (95% CI, 0.87-0.93) on the external test data set, compared with AUCs of 0.96 (95% CI, 0.94-0.99) and 0.84 (95% CI, 0.80-0.87), respectively, for the network trained with real images. The AUC of the synthetic-data network increased with larger synthetic training set sizes. Class activation maps showed that the regions of the synthetic images contributing to glaucoma detection were generally similar to those of real images. Conclusions and Relevance DL networks trained with synthetic OCT images for glaucoma detection performed comparably with networks trained with real images. These results suggest a potential role for generative models in the training of DL networks and as a means of sharing data across institutions without patient confidentiality concerns.
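The AUC reported above equals the probability that a randomly chosen glaucomatous eye receives a higher network score than a randomly chosen normal eye (ties counted as one half). A minimal sketch of this rank-based (Mann-Whitney) computation, using hypothetical scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise comparisons.

    scores_pos: classifier scores for positive (e.g., glaucomatous) eyes.
    scores_neg: classifier scores for negative (e.g., normal) eyes.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # ties contribute half
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means perfect separation of the two classes; 0.5 means the scores are uninformative.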
Affiliation(s)
- Ashish Jith Sreejith Kumar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Institute for Infocomm Research, A*STAR, Singapore
- Rachel S Chong
- Academic Clinical Program, Duke-NUS Medical School, Singapore
- Jonathan G Crowston
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Inna Bujor
- Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Rahat Husain
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Eranga N Vithana
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Michaël J A Girard
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Alina Popa-Cherecheanu
- Carol Davila University of Medicine and Pharmacy, Bucharest, Romania; Department of Ophthalmology, Emergency University Hospital, Bucharest, Romania
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology and Optometry, Medical University Vienna, Vienna, Austria; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Damon Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
12
Marques R, Andrade De Jesus D, Barbosa-Breda J, Van Eijgen J, Stalmans I, van Walsum T, Klein S, G Vaz P, Sánchez Brea L. Automatic Segmentation of the Optic Nerve Head Region in Optical Coherence Tomography: A Methodological Review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106801. [PMID: 35429812 DOI: 10.1016/j.cmpb.2022.106801] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 03/07/2022] [Accepted: 04/01/2022] [Indexed: 06/14/2023]
Abstract
The optic nerve head (ONH) represents the intraocular section of the optic nerve, which is prone to damage by intraocular pressure (IOP). The advent of optical coherence tomography (OCT) has enabled the evaluation of novel ONH parameters, namely the depth and curvature of the lamina cribrosa (LC). Together with the Bruch's membrane minimum rim width (BMO-MRW), these appear to be promising ONH parameters for the diagnosis and monitoring of retinal diseases such as glaucoma. Nonetheless, these OCT-derived biomarkers are mostly extracted through manual segmentation, which is time-consuming and prone to bias, limiting their usability in clinical practice. Automatic segmentation of the ONH in OCT scans could further improve the current clinical management of glaucoma and other diseases. This review summarizes the current state of the art in automatic segmentation of the ONH in OCT. PubMed and Scopus were used to perform a systematic review. Additional works from other databases (IEEE, Google Scholar and ARVO IOVS) were also included, resulting in a total of 29 reviewed studies. For each algorithm, the methods, the size and type of dataset used for validation, and the respective results were carefully analysed. The results show a lack of consensus regarding the definition of segmented regions, extracted parameters and validation approaches, highlighting the importance and need of standardized methodologies for ONH segmentation. Only with a concrete set of guidelines will these automatic segmentation algorithms build trust in data-driven segmentation models and be able to enter clinical practice.
Affiliation(s)
- Rita Marques
- Laboratory for Instrumentation, Biomedical Engineering and Radiation Physics (LIBPhys-UC), Department of Physics, University of Coimbra, Coimbra, Portugal; Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Danilo Andrade De Jesus
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Porto, Portugal; Ophthalmology Department, São João University Hospital Center, Porto, Portugal
- Jan Van Eijgen
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Pedro G Vaz
- Laboratory for Instrumentation, Biomedical Engineering and Radiation Physics (LIBPhys-UC), Department of Physics, University of Coimbra, Coimbra, Portugal
- Luisa Sánchez Brea
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
13
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. EYE AND VISION (LONDON, ENGLAND) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs for medical imaging is increasing for image generation and translation, but they remain unfamiliar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We performed a survey of studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard-pattern artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to select the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
Affiliation(s)
- Aram You
- School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
14
Li J, Jin P, Zhu J, Zou H, Xu X, Tang M, Zhou M, Gan Y, He J, Ling Y, Su Y. Multi-scale GCN-assisted two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images. BIOMEDICAL OPTICS EXPRESS 2021; 12:2204-2220. [PMID: 33996224 PMCID: PMC8086482 DOI: 10.1364/boe.417212] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 03/15/2021] [Accepted: 03/16/2021] [Indexed: 05/03/2023]
Abstract
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, due to the presence of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and challenging to segment. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shaped neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images and provide public access to the collected dataset, which may contribute to research in the field of biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform other state-of-the-art techniques.
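The Dice score and pixel accuracy reported above can be computed per class from flattened label maps. A minimal sketch under the assumption of integer class labels (not the authors' evaluation code):

```python
def dice_per_class(pred, truth, cls):
    """Dice coefficient for one class: 2|A∩B| / (|A| + |B|).

    pred, truth: flattened label maps (equal-length sequences of class ids).
    """
    inter = a = b = 0
    for p, t in zip(pred, truth):
        pa = p == cls
        tb = t == cls
        inter += int(pa and tb)
        a += int(pa)
        b += int(tb)
    if a + b == 0:
        return 1.0  # class absent in both maps: perfect agreement by convention
    return 2.0 * inter / (a + b)

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)
```

A multi-class score like the 0.820 reported above would then be an average of the per-class Dice values over the nine layers and the disc.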
Affiliation(s)
- Jiaxuan Li
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Peiyao Jin
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Jianfeng Zhu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Haidong Zou
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Xun Xu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Min Tang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Minwen Zhou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Yu Gan
- Department of Electrical and Computer Engineering, The University of Alabama, AL 35487, USA
- Jiangnan He
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Yuye Ling
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Yikai Su
- State Key Lab of Advanced Optical Communication Systems and Networks, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China