1. Wang JK, Johnson BA, Chen Z, Zhang H, Szanto D, Woods B, Wall M, Kwon YH, Linton EF, Pouw A, Kupersmith MJ, Garvin MK, Kardon RH. Quantifying the spatial patterns of retinal ganglion cell loss and progression in optic neuropathy by applying a deep learning variational autoencoder approach to optical coherence tomography. Frontiers in Ophthalmology 2025; 4:1497848. PMID: 39963427; PMCID: PMC11830743; DOI: 10.3389/fopht.2024.1497848.
Abstract
Introduction: Glaucoma, optic neuritis (ON), and non-arteritic anterior ischemic optic neuropathy (NAION) produce distinct patterns of retinal ganglion cell (RGC) damage. We propose a booster Variational Autoencoder (bVAE) to capture spatial variations in RGC loss and generate latent space (LS) montage maps that visualize different degrees and spatial patterns of optic nerve bundle injury. Furthermore, the bVAE model is capable of tracking the spatial pattern of RGC thinning over time and classifying the underlying cause.
Methods: The bVAE model consists of an encoder, a display decoder, and a booster decoder. The encoder decomposes input ganglion cell layer (GCL) thickness maps into two display latent variables (dLVs) and eight booster latent variables (bLVs). The dLVs capture the primary spatial patterns of RGC thinning, while the display decoder reconstructs the GCL map and creates the LS montage map. The bLVs add finer spatial details, improving reconstruction accuracy. XGBoost was used to analyze the dLVs and bLVs, estimating normal/abnormal GCL thinning and classifying diseases (glaucoma, ON, and NAION). A total of 10,701 OCT macular scans from 822 subjects were included in this study.
Results: Incorporating bLVs improved reconstruction accuracy, with the image-based root-mean-square error (RMSE) between input and reconstructed GCL thickness maps decreasing from 5.55 ± 2.29 µm (two dLVs only) to 4.02 ± 1.61 µm (two dLVs and eight bLVs). However, the image-based structural similarity index (SSIM) remained similar (0.91 ± 0.04), indicating that just two dLVs effectively capture the main GCL spatial patterns. For classification, the XGBoost model achieved an AUC of 0.98 for identifying abnormal spatial patterns of GCL thinning over time using the dLVs. Disease classification yielded AUCs of 0.95 for glaucoma, 0.84 for ON, and 0.93 for NAION, with bLVs further increasing the AUCs to 0.96 for glaucoma, 0.93 for ON, and 0.99 for NAION.
Conclusion: This study presents a novel approach to visualizing and quantifying GCL thinning patterns in optic neuropathies using the bVAE model. The combination of dLVs and bLVs enhances the model's ability to capture key spatial features and predict disease progression. Future work will focus on integrating additional image modalities to further refine the model's diagnostic capabilities.
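To make the encoder/decoder split concrete, the sketch below shows a minimal VAE-style network with a 2 + 8 split latent space and two decoders, loosely mirroring the bVAE idea described in the abstract. It is an illustrative toy model only: the fully connected layout, layer sizes, and names are assumptions, not the authors' implementation.

```python
# Illustrative sketch, not the published bVAE: a VAE with 2 "display" and
# 8 "booster" latent variables and two decoders (coarse vs. refined output).
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, in_dim=64 * 64, d_lv=2, b_lv=8, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, d_lv + b_lv)      # latent means
        self.to_logvar = nn.Linear(hidden, d_lv + b_lv)  # latent log-variances
        self.d_lv = d_lv
        # "Display" decoder reconstructs from the 2 display latents only.
        self.display_decoder = nn.Sequential(nn.Linear(d_lv, hidden), nn.ReLU(),
                                             nn.Linear(hidden, in_dim))
        # "Booster" decoder refines the reconstruction using all 10 latents.
        self.booster_decoder = nn.Sequential(nn.Linear(d_lv + b_lv, hidden), nn.ReLU(),
                                             nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        coarse = self.display_decoder(z[:, :self.d_lv])   # main spatial pattern
        fine = self.booster_decoder(z)                    # adds finer detail
        return coarse, fine, mu, logvar

x = torch.rand(4, 64 * 64)                 # toy GCL thickness maps
coarse, fine, mu, logvar = SplitLatentVAE()(x)
print(coarse.shape, fine.shape)            # torch.Size([4, 4096]) twice
```

In this arrangement the two display latents alone drive the coarse reconstruction (and hence a 2-D latent-space montage), while the booster latents only contribute to the refined output.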
Affiliation(s)
- Jui-Kai Wang: Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Brett A. Johnson: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Zhi Chen: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, United States; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA, United States
- Honghai Zhang: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, United States; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA, United States
- David Szanto: Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Brian Woods: Department of Ophthalmology, University Hospital Galway, Galway, Ireland; Department of Physics, School of Natural Sciences, University of Galway, Galway, Ireland
- Michael Wall: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Young H. Kwon: Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Edward F. Linton: Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Andrew Pouw: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
- Mark J. Kupersmith: Departments of Neurology, Ophthalmology, and Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Mona K. Garvin: Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, United States; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA, United States
- Randy H. Kardon: Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA, United States; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, United States
2. Huang K, Ma X, Zhang Z, Zhang Y, Yuan S, Fu H, Chen Q. Diverse Data Generation for Retinal Layer Segmentation With Potential Structure Modeling. IEEE Transactions on Medical Imaging 2024; 43:3584-3595. PMID: 38587957; DOI: 10.1109/tmi.2024.3384484.
Abstract
Accurate retinal layer segmentation on optical coherence tomography (OCT) images is hampered by the difficulty of collecting OCT images with diverse pathological characterization and a balanced distribution. Current generative models can produce highly realistic images and corresponding labels without quantitative limitations by fitting the distributions of real collected data. Nevertheless, the diversity of their generated data is still limited because of the inherent imbalance of the training data. To address these issues, we propose an image-label pair generation framework that generates diverse and balanced potential data from imbalanced real samples. Specifically, the framework first generates diverse layer masks and then generates plausible OCT images corresponding to these layer masks, using two customized diffusion probabilistic models. To learn from imbalanced data and facilitate balanced generation, we introduce pathology-related conditions to guide the generation processes. To enhance the diversity of the generated image-label pairs, we propose a potential structure modeling technique that transfers the knowledge of diverse sub-structures from mildly pathological or non-pathological samples to highly pathological samples. We conducted extensive experiments on two public datasets for retinal layer segmentation. First, our method generates OCT images with higher image quality and diversity than other generative methods. Furthermore, when segmentation models are additionally trained with the generated OCT images, downstream retinal layer segmentation results improve. The code is publicly available at: https://github.com/nicetomeetu21/GenPSM.
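The sketch below illustrates only the high-level, condition-guided two-stage flow described in the abstract: sample a pathology condition uniformly (to counter class imbalance), generate a layer mask conditioned on it, then generate an OCT image conditioned on that mask. The generator functions are placeholders, not the authors' diffusion models; see the GenPSM repository above for the real implementation.

```python
# Conceptual two-stage, condition-guided generation loop (placeholder generators).
import random

import numpy as np

CONDITIONS = ["normal", "mild_pathology", "severe_pathology"]  # hypothetical labels

def generate_layer_mask(condition: str, shape=(256, 256)) -> np.ndarray:
    """Placeholder for the mask diffusion model: returns a dummy layer-label map."""
    rng = np.random.default_rng(abs(hash(condition)) % 2**32)
    return rng.integers(0, 9, size=shape, dtype=np.uint8)   # 9 dummy layer labels

def generate_oct_image(mask: np.ndarray) -> np.ndarray:
    """Placeholder for the image diffusion model: returns a dummy B-scan."""
    return mask.astype(np.float32) / 8.0 + np.random.randn(*mask.shape) * 0.1

def balanced_pairs(n: int):
    """Yield n image-label pairs with conditions drawn uniformly (balanced)."""
    for _ in range(n):
        cond = random.choice(CONDITIONS)
        mask = generate_layer_mask(cond)
        yield generate_oct_image(mask), mask, cond

for img, mask, cond in balanced_pairs(3):
    print(cond, img.shape, mask.shape)
```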
3. Mani P, Ramachandran N, Paul SJ, Ramesh PV. Laceration assessment: advanced segmentation and classification framework for retinal disease categorization in optical coherence tomography images. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2024; 41:1786-1793. PMID: 39889044; DOI: 10.1364/josaa.526142.
Abstract
Disorders affecting the retina pose a considerable risk to human vision, a problem exacerbated in contemporary times by factors including aging, diabetes, hypertension, obesity, ocular trauma, and tobacco use. Optical coherence tomography (OCT) is a rapidly developing imaging modality that is capable of identifying early signs of vascular, ocular, and central nervous system abnormalities. OCT can diagnose retinal diseases through image classification, but quantifying the laceration area requires image segmentation. To overcome this obstacle, we have developed an innovative deep learning framework that can perform both tasks simultaneously. The proposed framework employs a parallel mask-guided convolutional neural network (PM-CNN) for the classification of OCT B-scans and a grade activation map (GAM) output from the PM-CNN to help a V-Net network (GAM V-Net) segment retinal lacerations. The guiding mask for the PM-CNN is obtained from the auxiliary segmentation task. The effectiveness of the dual framework was evaluated using a combined dataset that encompassed four publicly accessible datasets along with an additional real-time dataset, covering 11 categories of retinal diseases. The four publicly available datasets provided a robust foundation for validating the dual framework, while the real-time dataset enabled the framework's performance to be assessed on a broader range of retinal disease categories. The segmentation Dice coefficient was 78.33 ± 0.15%, and the classification accuracy was 99.10 ± 0.10%. The model's ability to effectively segment retinal fluids and identify retinal lacerations on a different dataset demonstrates its generalizability.
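As a reference for the headline segmentation metric quoted above (Dice of 78.33%), the snippet below shows a generic Dice coefficient computation for binary masks. It is a standard evaluation utility for illustration, not code from the PM-CNN / GAM V-Net framework.

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((128, 128), dtype=np.uint8); pred[40:80, 40:80] = 1   # toy prediction
gt = np.zeros((128, 128), dtype=np.uint8);   gt[50:90, 50:90] = 1     # toy ground truth
print(f"Dice: {dice_coefficient(pred, gt):.3f}")
```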
4. Chen Z, Zhang H, Linton EF, Johnson BA, Choi YJ, Kupersmith MJ, Sonka M, Garvin MK, Kardon RH, Wang JK. Hybrid deep learning and optimal graph search method for optical coherence tomography layer segmentation in diseases affecting the optic nerve. Biomedical Optics Express 2024; 15:3681-3698. PMID: 38867777; PMCID: PMC11166436; DOI: 10.1364/boe.516045.
Abstract
Accurate segmentation of retinal layers in optical coherence tomography (OCT) images is critical for assessing diseases that affect the optic nerve, but existing automated algorithms often fail when pathology causes irregular layer topology, such as extreme thinning of the ganglion cell-inner plexiform layer (GCIPL). Deep LOGISMOS, a hybrid approach that combines the strengths of deep learning and 3D graph search to overcome their limitations, was developed to improve the accuracy, robustness and generalizability of retinal layer segmentation. The method was trained on 124 OCT volumes from both eyes of 31 non-arteritic anterior ischemic optic neuropathy (NAION) patients and tested on three cross-sectional datasets with available reference tracings: Test-NAION (40 volumes from both eyes of 20 NAION subjects), Test-G (29 volumes from 29 glaucoma subjects/eyes), and Test-JHU (35 volumes from 21 multiple sclerosis and 14 control subjects/eyes) and one longitudinal dataset without reference tracings: Test-G-L (155 volumes from 15 glaucoma patients/eyes). In the three test datasets with reference tracings (Test-NAION, Test-G, and Test-JHU), Deep LOGISMOS achieved very high Dice similarity coefficients (%) on GCIPL: 89.97±3.59, 90.63±2.56, and 94.06±1.76, respectively. In the same context, Deep LOGISMOS outperformed the Iowa reference algorithms by improving the Dice score by 17.5, 5.4, and 7.5, and also surpassed the deep learning framework nnU-Net with improvements of 4.4, 3.7, and 1.0. For the 15 severe glaucoma eyes with marked GCIPL thinning (Test-G-L), it demonstrated reliable regional GCIPL thickness measurement over five years. The proposed Deep LOGISMOS approach has potential to enhance precise quantification of retinal structures, aiding diagnosis and treatment management of optic nerve diseases.
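To illustrate the "graph search" half of a hybrid pipeline in its simplest form, the sketch below traces a single smooth surface through a per-pixel cost image (e.g., one minus a CNN boundary probability) with a hard smoothness constraint, using column-wise dynamic programming. This is a generic textbook formulation shown for intuition only; it is not the Deep LOGISMOS method, which solves a constrained multi-surface problem in 3D.

```python
# Single-surface dynamic programming over a 2D cost image with a max-jump constraint.
import numpy as np

def trace_surface(cost: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """cost: (rows, cols) array; returns one row index per column (the surface)."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]        # only smooth transitions allowed
            back[r, c] = lo + int(np.argmin(prev))
            acc[r, c] += prev.min()
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):        # backtrack the optimal path
        surface[c - 1] = back[surface[c], c]
    return surface

cost = np.random.rand(64, 32)
cost[30, :] = 0.0                           # an obvious low-cost "boundary" at row 30
print(trace_surface(cost)[:8])
```

In a hybrid setup, the deep network supplies the cost (probability) maps and the graph/DP stage guarantees topologically ordered, smooth surfaces even where the network output is ambiguous.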
Affiliation(s)
- Zhi Chen: Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Honghai Zhang: Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Edward F. Linton: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA
- Brett A. Johnson: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA
- Yun Jae Choi: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA
- Mark J. Kupersmith: Departments of Neurology, Ophthalmology and Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Milan Sonka: Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Mona K. Garvin: Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, IA 52242, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA; Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA 52242, USA
- Randy H. Kardon: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA; Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA 52242, USA
- Jui-Kai Wang: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA; Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA; Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA 52242, USA
5. Demuth S, Paris J, Faddeenkov I, De Sèze J, Gourraud PA. Clinical applications of deep learning in neuroinflammatory diseases: A scoping review. Revue Neurologique (Paris) 2024: S0035-3787(24)00522-8. PMID: 38772806; DOI: 10.1016/j.neurol.2024.04.004.
Abstract
BACKGROUND: Deep learning (DL) is an artificial intelligence technology that has generated considerable excitement for predictive medicine because of its ability to process raw data modalities such as images, text, and time series of signals.
OBJECTIVES: Here, we aim to give the clinical reader the elements needed to understand this technology, taking neuroinflammatory diseases as an illustrative use case of clinical translation efforts. We reviewed this rapidly evolving field to obtain quantitative insights into which clinical applications attract the most effort and which data modalities are most commonly used.
METHODS: We queried the PubMed database for articles reporting DL algorithms for clinical applications in neuroinflammatory diseases and the radiology.healthairegister.com website for commercial algorithms.
RESULTS: The review included 148 articles published between 2018 and 2024 and five commercial algorithms. The clinical applications could be grouped as computer-aided diagnosis, individual prognosis, functional assessment, segmentation of radiological structures, and optimization of data acquisition. Our review highlighted important imbalances in effort: segmentation of radiological structures and computer-aided diagnosis currently account for most of the work, with imaging overrepresented. Various model architectures have addressed different applications, generally with relatively small data volumes and diverse data modalities. We report the high-level technical characteristics of the algorithms and synthesize the clinical applications narratively. Finally, predictive performance and some common preconceptions about this topic are discussed.
CONCLUSION: The currently reported efforts position DL as an information-processing technology that enhances existing modalities of paraclinical investigation and brings perspectives for making innovative ones actionable for healthcare.
Affiliation(s)
- S Demuth: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; Inserm U1119: biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France
- J Paris: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- I Faddeenkov: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- J De Sèze: Inserm U1119: biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France; Department of Neurology, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France; Inserm CIC 1434 Clinical Investigation Center, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
- P-A Gourraud: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; "Data clinic", Department of Public Health, University Hospital of Nantes, Nantes, France
6. Chen C, Fu Z, Ye S, Zhao C, Golovko V, Ye S, Bai Z. Study on high-precision three-dimensional reconstruction of pulmonary lesions and surrounding blood vessels based on CT images. Optics Express 2024; 32:1371-1390. PMID: 38297691; DOI: 10.1364/oe.510398.
Abstract
The adoption of computed tomography (CT) technology has significantly elevated the role of pulmonary CT imaging in diagnosing and treating pulmonary diseases. However, challenges persist due to the complex relationship between lesions within pulmonary tissue and the surrounding blood vessels: achieving precise three-dimensional reconstruction while maintaining the accurate relative positioning of these elements. To address this issue effectively, this study employs a semi-automatic, precise labeling process for the target region, which ensures a high level of consistency in the relative positions of lesions and the surrounding blood vessels. Additionally, a morphological gradient interpolation algorithm, combined with Gaussian filtering, is applied to facilitate high-precision three-dimensional reconstruction of both lesions and blood vessels. This technique also enables post-reconstruction slicing at any layer, facilitating intuitive exploration of the correlation between blood vessels and lesion layers. Moreover, the study utilizes physiological knowledge to simulate real-world blood vessel intersections, determining the range of blood vessel branch angles and achieving seamless continuity at internal blood vessel branch points. The experiments achieved a satisfactory reconstruction, with an average Hausdorff distance of 1.5 mm and an average Dice coefficient of 92% obtained by comparing the reconstructed shape with the original shape; the approach also achieves a high level of accuracy in three-dimensional reconstruction and visualization. In conclusion, this study provides valuable technical support for the diagnosis and treatment of pulmonary diseases and holds promising potential for widespread adoption in clinical practice.
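For the surface-agreement metric quoted above (average Hausdorff distance of 1.5 mm), the snippet below shows a generic symmetric Hausdorff distance between two 3-D point sets using SciPy. The point sets and millimetre scaling here are made up for illustration; this is not the authors' evaluation code.

```python
# Symmetric Hausdorff distance between reconstructed and reference surface points.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 3) arrays of surface points (mm); max of both directed distances."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

recon = np.random.rand(500, 3) * 10.0            # toy reconstructed surface samples (mm)
truth = recon + np.random.randn(500, 3) * 0.3    # toy "original" shape, small offsets
print(f"Hausdorff distance: {hausdorff(recon, truth):.2f} mm")
```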
7. Lu J, Cheng Y, Hiya FE, Shen M, Herrera G, Zhang Q, Gregori G, Rosenfeld PJ, Wang RK. Deep-learning-based automated measurement of outer retinal layer thickness for use in the assessment of age-related macular degeneration, applicable to both swept-source and spectral-domain OCT imaging. Biomedical Optics Express 2024; 15:413-427. PMID: 38223170; PMCID: PMC10783897; DOI: 10.1364/boe.512359.
Abstract
Effective biomarkers are required for assessing the progression of age-related macular degeneration (AMD), a prevalent and progressive eye disease. This paper presents a deep learning-based automated algorithm, applicable to both swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT) scans, for measuring outer retinal layer (ORL) thickness as a surrogate biomarker for outer retinal degeneration, e.g., photoreceptor disruption, to assess AMD progression. The algorithm was developed based on a modified TransUNet model with clinically annotated retinal features manifested in the progression of AMD. The algorithm demonstrates high accuracy, with an intersection over union (IoU) of 0.9698 in the testing dataset for segmenting the ORL using both SS-OCT and SD-OCT datasets. The robustness and applicability of the algorithm are indicated by the strong correlation (r = 0.9551, P < 0.0001 in the fovea-centered 3-mm circle, and r = 0.9442, P < 0.0001 in the 5-mm circle) and agreement (mean bias = 0.5440 µm in the 3-mm circle and 1.392 µm in the 5-mm circle) of the ORL thickness measurements between SS-OCT and SD-OCT scans. Comparative analysis reveals significant differences (P < 0.0001) in ORL thickness among 80 normal eyes, 30 intermediate AMD eyes with reticular pseudodrusen, 49 intermediate AMD eyes with drusen, and 40 late AMD eyes with geographic atrophy, highlighting its potential as an independent biomarker for predicting AMD progression. The findings provide valuable insights into the ORL alterations associated with different stages of AMD and emphasize the potential of ORL thickness as a sensitive indicator of AMD severity and progression.
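The downstream thickness measurement itself is simple once the boundary surfaces are segmented: subtract the upper from the lower boundary per A-scan, convert to micrometres, and average inside a fovea-centred circle. The sketch below illustrates that step only; the axial pixel spacing, array shapes, and fovea-at-centre assumption are illustrative, not values from the paper.

```python
# Thickness map and regional average from two boundary surfaces (row indices per A-scan).
import numpy as np

def orl_thickness_um(upper: np.ndarray, lower: np.ndarray, axial_um_per_px: float) -> np.ndarray:
    return (lower - upper) * axial_um_per_px             # per-A-scan thickness in µm

def mean_in_circle(thick: np.ndarray, spacing_mm: tuple, diameter_mm: float) -> float:
    rows, cols = thick.shape
    y, x = np.mgrid[:rows, :cols]
    cy, cx = rows // 2, cols // 2                         # assume fovea at scan centre
    r_mm = np.hypot((y - cy) * spacing_mm[0], (x - cx) * spacing_mm[1])
    return float(thick[r_mm <= diameter_mm / 2].mean())

upper = np.full((300, 300), 200.0)                        # toy upper ORL boundary
lower = upper + np.random.uniform(18, 26, upper.shape)    # toy lower boundary (~20 px deeper)
thick = orl_thickness_um(upper, lower, axial_um_per_px=3.9)
print(f"Mean ORL thickness in 3-mm circle: {mean_in_circle(thick, (0.02, 0.02), 3.0):.1f} µm")
```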
Affiliation(s)
- Jie Lu: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Yuxuan Cheng: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Farhan E. Hiya: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Mengxi Shen: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Gissel Herrera: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Qinqin Zhang: Research and Development, Carl Zeiss Meditec, Inc., Dublin, CA, USA
- Giovanni Gregori: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Philip J. Rosenfeld: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ruikang K. Wang: Department of Bioengineering, University of Washington, Seattle, Washington, USA; Department of Ophthalmology, University of Washington, Seattle, Washington, USA
8. Wang Y, Galang C, Freeman WR, Warter A, Heinke A, Bartsch DUG, Nguyen TQ, An C. Retinal OCT Layer Segmentation via Joint Motion Correction and Graph-Assisted 3D Neural Network. IEEE Access 2023; 11:103319-103332. PMID: 39737086; PMCID: PMC11684756; DOI: 10.1109/access.2023.3317011.
Abstract
Optical coherence tomography (OCT) is a widely used 3D imaging technology in ophthalmology. Segmentation of retinal layers in OCT is important for the diagnosis and evaluation of various retinal and systemic diseases. While 2D segmentation algorithms have been developed, they do not fully utilize contextual information and suffer from inconsistency in 3D. We propose neural networks that combine motion correction and segmentation in 3D. The proposed segmentation network utilizes 3D convolution and a novel graph pyramid structure with graph-inspired building blocks. We also collected one of the largest OCT segmentation datasets, with manually corrected segmentation covering both normal examples and various diseases. Experimental results on three datasets with multiple instruments and various diseases show that the proposed method achieves improved segmentation accuracy compared with commercial software and with conventional or deep learning methods in the literature. Specifically, the proposed method reduced the average error from 38.47% to 11.43% compared to clinically available commercial software for severe deformations caused by diseases. The diagnosis and evaluation of diseases with large deformation, such as DME, wet AMD, and CRVO, would greatly benefit from the improved accuracy, which impacts tens of millions of patients.
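For intuition about why motion correction matters before 3D segmentation, the sketch below shows a classical baseline in isolation: estimate an integer axial shift for each B-scan by correlating its mean depth profile with the previous B-scan's, then roll the slice into place. This is a generic illustration of the problem being solved, not the paper's joint neural-network formulation.

```python
# Simple axial motion correction of an OCT volume by per-B-scan profile matching.
import numpy as np

def axial_shift(ref: np.ndarray, mov: np.ndarray, max_shift: int = 20) -> int:
    """Best integer axial shift of `mov` (depth, width) against `ref`."""
    ref_p, mov_p = ref.mean(axis=1), mov.mean(axis=1)    # average over A-scans
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [np.dot(np.roll(mov_p, s), ref_p) for s in shifts]
    return shifts[int(np.argmax(scores))]

def correct_volume(vol: np.ndarray) -> np.ndarray:
    """vol: (n_bscans, depth, width); aligns each B-scan to its predecessor."""
    out = vol.copy()
    for i in range(1, vol.shape[0]):
        out[i] = np.roll(out[i], axial_shift(out[i - 1], out[i]), axis=0)
    return out

vol = np.random.rand(16, 256, 128)                       # toy OCT volume
print(correct_volume(vol).shape)
```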
Affiliation(s)
- Yiqian Wang: Department of Electrical and Computer Engineering, University of California, San Diego, CA 92093, USA
- Carlo Galang: Jacobs Retina Center, Shiley Eye Institute, University of California, San Diego, CA 92093, USA
- William R Freeman: Jacobs Retina Center, Shiley Eye Institute, University of California, San Diego, CA 92093, USA
- Alexandra Warter: Jacobs Retina Center, Shiley Eye Institute, University of California, San Diego, CA 92093, USA
- Anna Heinke: Jacobs Retina Center, Shiley Eye Institute, University of California, San Diego, CA 92093, USA
- Dirk-Uwe G Bartsch: Jacobs Retina Center, Shiley Eye Institute, University of California, San Diego, CA 92093, USA
- Truong Q Nguyen: Department of Electrical and Computer Engineering, University of California, San Diego, CA 92093, USA
- Cheolhong An: Department of Electrical and Computer Engineering, University of California, San Diego, CA 92093, USA
9. Xie H, Xu W, Wang YX, Wu X. Deep learning network with differentiable dynamic programming for retina OCT surface segmentation. Biomedical Optics Express 2023; 14:3190-3202. PMID: 37497505; PMCID: PMC10368040; DOI: 10.1364/boe.492670.
Abstract
Multiple-surface segmentation in optical coherence tomography (OCT) images is a challenging problem, further complicated by the frequent presence of weak image boundaries. Recently, many deep learning-based methods have been developed for this task and yield remarkable performance. Unfortunately, due to the scarcity of training data in medical imaging, it is challenging for deep learning networks to learn the global structure of the target surfaces, including surface smoothness. To bridge this gap, this study proposes to seamlessly unify a U-Net for feature learning with a constrained differentiable dynamic programming module, achieving end-to-end learning for retinal OCT surface segmentation that explicitly enforces surface smoothness. The approach effectively utilizes feedback from the downstream model optimization module to guide feature learning, yielding better enforcement of the global structure of the target surfaces. Experiments on the Duke AMD (age-related macular degeneration) and JHU MS (multiple sclerosis) OCT datasets for retinal layer segmentation demonstrated that the proposed method achieves subvoxel accuracy on both datasets, with mean absolute surface distance (MASD) errors of 1.88 ± 1.96 μm and 2.75 ± 0.94 μm, respectively, over all segmented surfaces.
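The key ingredient that makes such a pipeline trainable end to end is replacing the hard minimum in dynamic programming with a soft (log-sum-exp) minimum so gradients can flow back into the cost-producing network. The toy sketch below illustrates that general idea for a single surface; it is not the constrained module from the paper, and the smoothness penalty and temperature are arbitrary choices.

```python
# Differentiable (soft-min) dynamic programming over a 2D cost image.
import torch

def soft_dp(cost: torch.Tensor, smooth_penalty: float = 1.0, tau: float = 0.1) -> torch.Tensor:
    """cost: (rows, cols). Returns the soft minimal path cost (scalar, differentiable)."""
    rows, cols = cost.shape
    r = torch.arange(rows, dtype=cost.dtype)
    jump = smooth_penalty * (r[:, None] - r[None, :]).abs()   # penalize row changes
    acc = cost[:, 0]
    for c in range(1, cols):
        trans = acc[None, :] + jump                           # (to_row, from_row)
        acc = cost[:, c] - tau * torch.logsumexp(-trans / tau, dim=1)  # soft-min
    return -tau * torch.logsumexp(-acc / tau, dim=0)

cost = torch.rand(32, 16, requires_grad=True)   # stand-in for a learned boundary cost map
loss = soft_dp(cost)
loss.backward()                                 # gradients reach every cost entry
print(float(loss), cost.grad.shape)
```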
Affiliation(s)
- Hui Xie: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Weiyu Xu: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Ya Xing Wang: Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital University of Medical Science, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Xiaodong Wu: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
10. Mellak Y, Achim A, Ward A, Nicholson L, Descombes X. A machine learning framework for the quantification of experimental uveitis in murine OCT. Biomedical Optics Express 2023; 14:3413-3432. PMID: 37497491; PMCID: PMC10368067; DOI: 10.1364/boe.489271.
Abstract
This paper presents methods for the detection and assessment of non-infectious uveitis, a leading cause of vision loss in working-age adults. In the first part, we propose a classification model that can accurately predict the presence of uveitis and differentiate between different stages of the disease using optical coherence tomography (OCT) images. We utilize the Grad-CAM visualization technique to elucidate the decision-making process of the classifier and gain deeper insights into the results obtained. In the second part, we apply and compare three methods for the detection of detached particles in the retina that are indicative of uveitis: a fully supervised detection method, a marked point process (MPP) technique, and a weakly supervised segmentation that produces per-pixel masks as output. The segmentation model is used as the backbone of a fully automated pipeline that can segment small uveitis particles in two-dimensional (2-D) slices of the retina, reconstruct the volume, and produce centroids as a point distribution in space. The number of particles in each retina is used to grade the disease, and point process analysis of the centroids in three dimensions (3-D) reveals clustering patterns in the distribution of the particles on the retina.
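The final post-processing step described above (from per-pixel particle masks to centroids and a count-based severity score) can be sketched with standard connected-component labelling. The snippet below is a generic SciPy illustration under that assumption, not the authors' pipeline.

```python
# From a binary 3-D particle mask to labelled components, centroids, and a count.
import numpy as np
from scipy import ndimage

def particle_centroids(mask_volume: np.ndarray):
    """mask_volume: binary (slices, rows, cols) array of detected particles."""
    labels, count = ndimage.label(mask_volume)                     # connected components
    centroids = ndimage.center_of_mass(mask_volume, labels, list(range(1, count + 1)))
    return np.asarray(centroids), count

vol = np.zeros((20, 64, 64), dtype=np.uint8)
vol[5, 10:12, 10:12] = 1                      # two toy "particles"
vol[12, 40:43, 30:33] = 1
centroids, count = particle_centroids(vol)
print(f"{count} particles, first centroid ≈ {centroids[0]}")
```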
Affiliation(s)
- Youness Mellak: Université Côte d’Azur, INRIA, CNRS, I3S, Sophia Antipolis, France
- Alin Achim: University of Bristol, Bristol, United Kingdom
- Amy Ward: University of Bristol, Bristol, United Kingdom
- Xavier Descombes: Université Côte d’Azur, INRIA, CNRS, I3S, Sophia Antipolis, France