1. Tanikawa A, Maruyama K, Liu S, Mao Z, Wang Z, Shiraki N, Hashida N, Kawasaki R, Chan K, Nishida K. Unveiling Key Pathological Indicators for Disease Progression in Vogt-Koyanagi-Harada Disease and Sympathetic Ophthalmia Through Advanced Choroidal Volume Analysis. Ocul Immunol Inflamm 2024:1-9. [PMID: 38709183] [DOI: 10.1080/09273948.2024.2337836]
Abstract
PURPOSE To evaluate the association between quantitative parameters derived from volume analysis of optical coherence tomography (OCT) data and disease worsening in Vogt-Koyanagi-Harada disease (VKHD) and sympathetic ophthalmia (SO). METHODS This retrospective study, conducted at Osaka University Hospital, used swept-source OCT scans from patients diagnosed with VKHD or SO between October 2012 and January 2021. The choroidal vessel structure was segmented and visualized in three dimensions, generating quantitative vessel volume maps. Region-specific choroidal vessel volume (CVV), choroidal volume (CV), and vessel index (VI) were examined for their potential correlation with disease severity. RESULTS Thirty-five eyes of 18 VKHD and 2 SO patients (8 females, 10 males) were evaluated. OCT-derived CVV maps revealed regional CV alterations in VKHD and SO patients. Two parameters, CV at the 3- and 6-month follow-ups (p = 0.044 and p = 0.040, respectively; area under the ROC curve, 0.70) and CVV at 6 months (p = 0.046; area under the ROC curve, 0.71), were significantly higher in recurrent VKHD and SO than in effectively treated cases. CONCLUSIONS Volume analysis of OCT images enables three-dimensional visualization of choroidal alterations, which may reflect disease severity in VKHD and SO patients. Furthermore, noninvasive initial CVV or CV measurements may serve as potential biomarkers for predicting disease recurrence in VKHD and SO.
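For readers unfamiliar with these parameters, a minimal sketch of how a region-wise vessel index might be computed, assuming, for illustration only, that VI is the ratio of vessel volume to total choroidal volume (a common convention for choroidal vascularity metrics, but not stated in this abstract); all numbers are made up:

```python
import numpy as np

# Hypothetical per-region volumes from an OCT choroidal volume map (mm^3).
# The three regions and the VI = CVV / CV definition are illustrative
# assumptions, not values or definitions taken from the paper.
cvv = np.array([0.82, 0.65, 0.71])  # choroidal vessel volume per region
cv = np.array([1.90, 1.72, 1.80])   # total choroidal volume per region

vi = cvv / cv  # vessel index per region (dimensionless, between 0 and 1)
print(np.round(vi, 3))
```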
Affiliation(s)
- Akira Tanikawa
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Kazuichi Maruyama
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, Japan
- Shiyi Liu
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey
- Zaixing Mao
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey
- Zhenguo Wang
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey
- Nobuhiko Shiraki
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Noriyasu Hashida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Suita, Osaka, Japan
- Graduate School of Medicine/Division of Environmental Medicine and Population Science/Department of Social and Environmental Medicine, Osaka University, Suita, Osaka, Japan
- Kinpui Chan
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey
- Kohji Nishida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, Japan
2. Mehdizadeh M, Saha S, Alonso-Caneiro D, Kugelman J, MacNish C, Chen F. Employing texture loss to denoise OCT images using generative adversarial networks. Biomed Opt Express 2024; 15:2262-2280. [PMID: 38633090] [PMCID: PMC11019688] [DOI: 10.1364/boe.503868]
Abstract
OCT is a widely used clinical ophthalmic imaging technique, but the presence of speckle noise can obscure important pathological features and hinder accurate segmentation. This paper presents a novel method for denoising optical coherence tomography (OCT) images using a combination of texture loss and generative adversarial networks (GANs). Previous approaches have integrated deep learning techniques, starting with denoising Convolutional Neural Networks (CNNs) that employed pixel-wise losses. While effective in reducing noise, these methods often introduced a blurring effect in the denoised OCT images. To address this, perceptual losses were introduced, improving denoising performance and overall image quality. Building on these advancements, our research focuses on designing an image reconstruction GAN that generates OCT images with textural similarity to the gold standard, the averaged OCT image. We utilize the PatchGAN discriminator approach as a texture loss to enhance the quality of the reconstructed OCT images. We also compare the performance of UNet and ResNet as generators in the conditional GAN (cGAN) setting, as well as compare PatchGAN with the Wasserstein GAN. Using real clinical foveal-centered OCT retinal scans of children with normal vision, our experiments demonstrate that the combination of PatchGAN and UNet achieves superior performance (PSNR = 32.50) compared to recently proposed methods such as SiameseGAN (PSNR = 31.02). Qualitative experiments involving six masked clinical ophthalmologists also favor the reconstructed OCT images with PatchGAN texture loss. In summary, this paper introduces a novel method for denoising OCT images by incorporating texture loss within a GAN framework. The proposed approach outperforms existing methods and is well-received by clinical experts, offering promising advancements in OCT image reconstruction and facilitating accurate clinical interpretation.
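The PSNR figures quoted above compare each reconstructed B-scan against the averaged gold standard. A minimal sketch of the metric on synthetic data (the image size, dynamic range, and noise level are illustrative only):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB of `test` against `reference`."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))        # stand-in for an averaged OCT B-scan
noisy = clean + rng.normal(0, 10, size=(64, 64))  # stand-in for a single noisy scan
print(round(psnr(clean, noisy), 1))
```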
Affiliation(s)
- Maryam Mehdizadeh
- The Australian e-Health Research Centre (AEHRC), CSIRO, WA, Australia
- School of Physics, Mathematics and Computing, University of Western Australia (UWA), WA, Australia
- Sajib Saha
- The Australian e-Health Research Centre (AEHRC), CSIRO, WA, Australia
- David Alonso-Caneiro
- School of Science, Technology, and Engineering, University of Sunshine Coast, Sunshine Coast, QLD, Australia
- Jason Kugelman
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), QLD, Australia
- Cara MacNish
- School of Physics, Mathematics and Computing, University of Western Australia (UWA), WA, Australia
- Fred Chen
- Centre for Ophthalmology and Visual Science, Medical School, University of Western Australia (UWA), WA, Australia
3. Otani T, Miyata K, Miki A, Wada S. Computational study on the effects of central retinal blood vessels with asymmetric geometries on optic nerve head biomechanics. Med Eng Phys 2024; 123:104086. [PMID: 38365339] [DOI: 10.1016/j.medengphy.2023.104086]
Abstract
Optic nerve head (ONH) biomechanics are associated with glaucoma progression and have received considerable attention. Central retinal vessels (CRVs), oriented asymmetrically in the ONH, are the sole blood supply to the retina and are believed to act as mechanically stabilizing elements in the ONH in response to intraocular pressure (IOP). However, these mechanical effects have been considered negligible in ONH biomechanical studies and have received little attention. This study investigated the effects of CRVs on ONH biomechanics, taking into account three-dimensional asymmetric CRV geometries. A CRV geometry was constructed from CRV centerlines extracted from optical coherence tomography ONH images of eight healthy subjects and superimposed on the idealized ONH geometry established in previous studies. Mechanical analyses of the ONH response to IOP were conducted with and without CRVs for comparison. The results demonstrated that the CRVs induced anisotropic ONH deformation, particularly in the lamina cribrosa and the overlying neural tissue (prelamina), with wide spatial strain distributions. These results indicate that the CRVs produce anisotropic deformation with local strain concentration, rather than providing mechanical support in response to IOP as conventionally thought in ophthalmology.
Affiliation(s)
- Tomohiro Otani
- Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyamacho, Toyonaka, Osaka 560-8531, Japan
- Kota Miyata
- Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyamacho, Toyonaka, Osaka 560-8531, Japan
- Atsuya Miki
- Department of Myopia Control Research, Aichi Medical University, Japan
- Shigeo Wada
- Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyamacho, Toyonaka, Osaka 560-8531, Japan
4. Hara C, Maruyama K, Wakabayashi T, Liu S, Mao Z, Kawasaki R, Wang Z, Chan K, Nishida K. Choroidal Vessel and Stromal Volumetric Analysis After Photodynamic Therapy or Focal Laser for Central Serous Chorioretinopathy. Transl Vis Sci Technol 2023; 12:26. [PMID: 37982766] [PMCID: PMC10668616] [DOI: 10.1167/tvst.12.11.26]
Abstract
Purpose To quantify volumetric changes in choroidal vessels and stroma after photodynamic therapy (PDT) and focal laser photocoagulation (PC) for central serous chorioretinopathy (CSCR). Methods This retrospective, comparative study included 58 eyes (58 patients) with CSCR (PC, 33 eyes; PDT, 25 eyes) followed up with swept-source optical coherence tomography for 3 months after treatment. Three-dimensional (3D) choroidal vessel and stromal volumes were analyzed using a deep learning-based method in three regions of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid centered at the fovea: the central 1.5-mm-diameter circle, the torus-shaped area between the 6-mm-diameter circle and the central 1.5-mm-diameter circle, and the treated area. Changes in volume from baseline to 1 and 3 months after treatment were compared. Results The mean patient age was 49.3 ± 10.5 years. In the central 1.5-mm-diameter circle, the mean vessel and stromal volumes significantly decreased after treatment in both the PDT and PC groups (P = 0.00029 and P = 0.0014, respectively), and both volumes showed significant between-group (PDT vs. PC) differences over time (P = 0.024 and P = 0.037, respectively). In the torus-shaped and treated areas, the PDT and PC groups showed similar decreases in vessel and stromal volume over time. Conclusions In the 3D optical coherence tomography volumetric analysis, both PDT and focal PC reduced choroidal vessel volume in eyes with CSCR. Translational Relevance This new finding is useful in elucidating the pathogenesis and healing mechanisms of CSCR.
Affiliation(s)
- Chikako Hara
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Advanced Device Medicine, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Kazuichi Maruyama
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka University, Suita, Osaka, Japan
- Taku Wakabayashi
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Shiyi Liu
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Zaixing Mao
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Ryo Kawasaki
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Innovative Visual Science, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Zhenguo Wang
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Kinpui Chan
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Kohji Nishida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka University, Suita, Osaka, Japan
5. Nienhaus J, Matten P, Britten A, Scherer J, Höck E, Freytag A, Drexler W, Leitgeb RA, Schlegl T, Schmoll T. Live 4D-OCT denoising with self-supervised deep learning. Sci Rep 2023; 13:5760. [PMID: 37031338] [PMCID: PMC10082772] [DOI: 10.1038/s41598-023-32695-1]
Abstract
By providing three-dimensional visualization of tissues and instruments at high resolution, live volumetric optical coherence tomography (4D-OCT) has the potential to revolutionize ophthalmic surgery. However, the necessary imaging speed is accompanied by increased noise levels. A high data rate and the requirement for minimal latency impose major limitations on real-time noise reduction. In this work, we propose a low-complexity neural network for denoising, directly incorporated into the image reconstruction pipeline of a microscope-integrated 4D-OCT prototype with an A-scan rate of 1.2 MHz. For this purpose, we trained a blind-spot network on unpaired OCT images using a self-supervised learning approach. With an optimized U-Net, only a few milliseconds of additional latency were introduced. At the same time, these architectural adaptations improved the numerical denoising performance over the basic setup, outperforming non-local filtering algorithms. Layers and edges of anatomical structures in B-scans were better preserved than with Gaussian filtering, despite comparable processing time. By comparing scenes with and without denoising, we show that neural networks can improve the visual appearance of volumetric renderings in real time. Enhancing rendering quality is an important step toward the clinical acceptance and translation of 4D-OCT as an intrasurgical guidance tool.
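The blind-spot training mentioned above works by hiding pixels from the network so that it must predict each hidden value from its surroundings, which makes clean targets unnecessary. A minimal sketch of the masking step in the spirit of Noise2Void (the network, training loop, and all parameters here are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def blind_spot_batch(img, n_mask=64, radius=2):
    """Blind-spot masking: replace random pixels with a nearby neighbor so a
    network trained on (masked input -> original value at masked coords) never
    sees the pixel it must predict (self-supervised; no clean target)."""
    h, w = img.shape
    ys = rng.integers(0, h, n_mask)
    xs = rng.integers(0, w, n_mask)
    inp = img.copy()
    for y, x in zip(ys, xs):
        dy, dx = rng.integers(-radius, radius + 1, 2)
        ny = np.clip(y + dy, 0, h - 1)
        nx = np.clip(x + dx, 0, w - 1)
        inp[y, x] = img[ny, nx]  # blind spot: input no longer holds the target value
    return inp, (ys, xs)  # the loss is computed only at the masked coordinates

noisy = rng.normal(0, 1, (32, 32))  # stand-in for a noisy OCT B-scan
masked, coords = blind_spot_batch(noisy)
print(masked.shape, len(coords[0]))
```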
Affiliation(s)
- Jonas Nienhaus
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Philipp Matten
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Anja Britten
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Julius Scherer
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Wolfgang Drexler
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Rainer A Leitgeb
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Thomas Schlegl
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tilman Schmoll
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Carl Zeiss Meditec, Inc., Dublin, USA
6. Applegate MB, Kose K, Ghimire S, Rajadhyaksha M, Dy J. Self-supervised denoising of Nyquist-sampled volumetric images via deep learning. J Med Imaging (Bellingham) 2023; 10:024005. [PMID: 36992871] [PMCID: PMC10042483] [DOI: 10.1117/1.jmi.10.2.024005]
Abstract
Purpose Deep learning has demonstrated excellent performance in enhancing noisy or degraded biomedical images. However, many of these models require access to a noise-free version of the images to provide supervision during training, which limits their utility. Here, we develop an algorithm (noise2Nyquist) that leverages the fact that Nyquist sampling provides guarantees about the maximum difference between adjacent slices in a volumetric image, allowing denoising to be performed without access to clean images. We aim to show that our method is more broadly applicable and more effective than other self-supervised denoising algorithms on real biomedical images, and that it provides performance comparable to algorithms that need clean images during training. Approach We first provide a theoretical analysis of noise2Nyquist and an upper bound on denoising error based on sampling rate. We then demonstrate its effectiveness on a simulated example as well as on real fluorescence confocal microscopy, computed tomography, and optical coherence tomography images. Results We find that our method has better denoising performance than existing self-supervised methods and is applicable to datasets where clean versions are not available. Our method resulted in peak signal-to-noise ratio (PSNR) within 1 dB and structural similarity (SSIM) index within 0.02 of supervised methods. On medical images, it outperforms existing self-supervised methods by an average of 3 dB in PSNR and 0.1 in SSIM. Conclusion noise2Nyquist can be used to denoise any volumetric dataset sampled at at least the Nyquist rate, making it useful for a wide variety of existing datasets.
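The key observation behind noise2Nyquist, that an adjacent slice of a Nyquist-sampled volume is a valid noisy training target because its noise is independent while its signal is nearly identical, can be illustrated on synthetic data (volume size, signal shape, and noise level are all arbitrary assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth synthetic volume: the signal varies slowly along z, so adjacent
# slices are nearly identical -- the guarantee Nyquist sampling provides.
z = np.linspace(0, np.pi, 50)[:, None, None]
xy = rng.uniform(0, 1, (1, 32, 32))
clean = np.sin(z) * xy                    # shape (50, 32, 32)
sigma = 0.3
noisy = clean + rng.normal(0, sigma, clean.shape)

# Core idea: use slice k+1 as the noisy training target for slice k. The noise
# in the two slices is independent, so the expected pairwise MSE is roughly
# (signal difference)^2 + 2*sigma^2, and the signal term is tiny by design;
# minimizing it therefore pulls predictions toward the clean signal.
sig_gap = np.mean((clean[:-1] - clean[1:]) ** 2)   # small: signal barely changes
pair_mse = np.mean((noisy[:-1] - noisy[1:]) ** 2)  # dominated by 2*sigma^2
print(round(sig_gap, 4), round(pair_mse, 3))
```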
Affiliation(s)
- Matthew B. Applegate
- Northeastern University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
| | - Kivanc Kose
- Dermatology Service at Memorial Sloan Kettering Cancer Center, New York, United States
| | - Sandesh Ghimire
- Northeastern University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
| | - Milind Rajadhyaksha
- Dermatology Service at Memorial Sloan Kettering Cancer Center, New York, United States
| | - Jennifer Dy
- Northeastern University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States
- Address all correspondence to Jennifer Dy,
| |
7. Bayhaqi YA, Hamidi A, Canbaz F, Navarini AA, Cattin PC, Zam A. Deep-Learning-Based Fast Optical Coherence Tomography (OCT) Image Denoising for Smart Laser Osteotomy. IEEE Trans Med Imaging 2022; 41:2615-2628. [PMID: 35442883] [DOI: 10.1109/tmi.2022.3168793]
Abstract
Laser osteotomy promises precise cutting with minor bone tissue damage. We propose optical coherence tomography (OCT) to monitor the ablation process in our smart laser osteotomy approach. The OCT image helps identify the tissue type and provides feedback to the ablation laser so that critical tissues such as bone marrow and nerve are avoided. In this setting, the tissue classifier's accuracy depends on the quality of the OCT image, so image denoising plays an important role in an accurate feedback system. A common OCT denoising technique is frame averaging, which inherently requires multiple images: the more images used, the better the resulting quality. However, this comes at the price of increased acquisition time and sensitivity to motion artifacts. To overcome these limitations, we applied a deep-learning denoising method capable of imitating frame averaging. The resulting images had quality similar to frame averaging and better than classical digital filtering methods. We also evaluated whether denoising affects the accuracy of the tissue classifier that provides feedback to the ablation laser, and found that it significantly increased classifier accuracy. Furthermore, a classifier trained on the deep-learning-denoised images achieved accuracy similar to one trained on frame-averaged images. These results suggest that the deep learning method can serve as a pre-processing step for real-time tissue classification in smart laser osteotomy.
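The frame-averaging baseline that the network imitates is easy to demonstrate: averaging N registered frames with independent zero-mean noise reduces the noise level by roughly 1/sqrt(N), at the cost of N-times-longer acquisition. A toy sketch (all sizes and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Frame averaging: acquire several registered frames of the same B-scan and
# average them. Independent noise shrinks by ~1/sqrt(n_frames), which is
# exactly the cost/quality trade-off the deep-learning imitation sidesteps.
clean = rng.uniform(0, 1, (64, 64))             # stand-in for the true B-scan
n_frames = 16
frames = clean + rng.normal(0, 0.2, (n_frames, 64, 64))

single_err = np.std(frames[0] - clean)          # noise level of one frame (~0.2)
avg_err = np.std(frames.mean(axis=0) - clean)   # ~0.2 / sqrt(16) = ~0.05
print(round(single_err, 3), round(avg_err, 3))
```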
8. Thompson AC, Falconi A, Sappington RM. Deep learning and optical coherence tomography in glaucoma: Bridging the diagnostic gap on structural imaging. Front Ophthalmol 2022; 2:937205. [PMID: 38983522] [PMCID: PMC11182271] [DOI: 10.3389/fopht.2022.937205]
Abstract
Glaucoma is a leading cause of progressive blindness and visual impairment worldwide. Microstructural evidence of glaucomatous damage to the optic nerve head and associated tissues can be visualized using optical coherence tomography (OCT). In recent years, development of novel deep learning (DL) algorithms has led to innovative advances and improvements in automated detection of glaucomatous damage and progression on OCT imaging. DL algorithms have also been trained utilizing OCT data to improve detection of glaucomatous damage on fundus photography, thus improving the potential utility of color photos which can be more easily collected in a wider range of clinical and screening settings. This review highlights ten years of contributions to glaucoma detection through advances in deep learning models trained utilizing OCT structural data and posits future directions for translation of these discoveries into the field of aging and the basic sciences.
Affiliation(s)
- Atalie C. Thompson
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Internal Medicine, Gerontology, and Geriatric Medicine, Wake Forest School of Medicine, Winston Salem, NC, United States
- Aurelio Falconi
- Wake Forest School of Medicine, Winston Salem, NC, United States
- Rebecca M. Sappington
- Department of Surgical Ophthalmology, Wake Forest School of Medicine, Winston Salem, NC, United States
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston Salem, NC, United States
9. Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058] [DOI: 10.1080/08164622.2022.2111201]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set rules, DL works by exposing an algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during training. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete accurately; it can also be susceptible to human bias from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been used to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
10. Machine learning-based 3D modeling and volumetry of human posterior vitreous cavity of optical coherence tomographic images. Sci Rep 2022; 12:13836. [PMID: 35974072] [PMCID: PMC9381727] [DOI: 10.1038/s41598-022-17615-z]
Abstract
The structure of the human vitreous varies considerably because of age-related liquefaction of the vitreous gel. These changes are poorly studied in vivo, mainly because the high transparency and mobility of the vitreous make it difficult to obtain reliable and repeatable images. Optical coherence tomography can detect the boundaries between vitreous gel and vitreous fluid, but it is difficult to obtain images of high enough resolution to convert into three-dimensional (3D) models. Thus, the purpose of this study was to determine the shape and characteristics of the vitreous fluid using machine learning-based 3D modeling, in which manually labelled fluid areas were used to train a deep convolutional neural network (DCNN). The trained DCNN labelled vitreous fluid automatically, allowing us to obtain a 3D vitreous model and to quantify the vitreous fluidic cavities. The mean volume and surface area of the posterior vitreous fluidic cavities were 19.6 ± 7.8 mm³ and 104.0 ± 18.9 mm², respectively, in the eyes of 17 school children. The results suggest that vitreous fluidic cavities expand as they connect with each other, and that this modeling system provides novel imaging markers for aging and eye diseases.
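Once a network has labelled the fluid voxels, volume quantification reduces to counting voxels and scaling by voxel size. A minimal sketch (the voxel dimensions and mask are illustrative assumptions, not the scan parameters of the study):

```python
import numpy as np

# Volumetry from a labelled OCT volume: after segmentation, cavity volume is
# just (number of labelled voxels) x (physical volume of one voxel).
voxel_mm = (0.01, 0.02, 0.02)  # (z, y, x) voxel size in mm -- assumed values
mask = np.zeros((100, 50, 50), dtype=bool)
mask[20:60, 10:40, 10:40] = True  # a toy "fluid cavity" label

voxel_volume = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]  # mm^3 per voxel
cavity_volume = mask.sum() * voxel_volume               # total cavity volume, mm^3
print(round(cavity_volume, 3))
```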
11. Varadarajan D, Magnain C, Fogarty M, Boas DA, Fischl B, Wang H. A novel algorithm for multiplicative speckle noise reduction in ex vivo human brain OCT images. Neuroimage 2022; 257:119304. [PMID: 35568350] [PMCID: PMC10018743] [DOI: 10.1016/j.neuroimage.2022.119304]
Abstract
Optical coherence tomography (OCT) images of ex vivo human brain tissue are corrupted by multiplicative speckle noise that degrades the contrast-to-noise ratio (CNR) of microstructural compartments. This work proposes a novel denoising algorithm for OCT images that minimizes the penalized negative log-likelihood of gamma-distributed speckle noise. The proposed method is formulated as a majorize-minimize problem that reduces to an iterative regularized least-squares optimization. We demonstrate its usefulness by removing speckle from simulated data, phantom data, and real OCT images of human brain tissue, and we compare it with state-of-the-art filtering and non-local-means-based denoising methods. Our approach removes speckle accurately, improves the CNR between different tissue types, and better preserves small features and edges in human brain tissue.
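A toy illustration of the noise model and metric involved: multiplicative speckle drawn from a unit-mean gamma distribution, and a CNR between two tissue compartments computed as |mu1 - mu2| / sqrt(s1^2 + s2^2). This CNR definition is one common convention and the shape parameter is arbitrary; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Multiplicative gamma speckle: observed = reflectivity * n, with
# n ~ Gamma(shape=k, scale=1/k) so that E[n] = 1 (k is an assumed value).
k = 4.0
tissue_a = 1.0 * rng.gamma(k, 1.0 / k, 10000)  # compartment with reflectivity 1
tissue_b = 2.0 * rng.gamma(k, 1.0 / k, 10000)  # compartment with reflectivity 2

def cnr(a, b):
    """Contrast-to-noise ratio between two pixel populations."""
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

print(round(cnr(tissue_a, tissue_b), 2))
```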
Affiliation(s)
- Divya Varadarajan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Radiology, Harvard Medical School, Boston, MA 02115, USA
- Caroline Magnain
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Radiology, Harvard Medical School, Boston, MA 02115, USA
- Morgan Fogarty
- Imaging Science Program, Washington University McKelvey School of Engineering, St. Louis, MO 63130, USA
- Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- David A Boas
- Biomedical Engineering and Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Radiology, Harvard Medical School, Boston, MA 02115, USA
- Harvard-MIT Health Science and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Hui Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Radiology, Harvard Medical School, Boston, MA 02115, USA
12. Fujimoto S, Miki A, Maruyama K, Mei S, Mao Z, Wang Z, Chan K, Nishida K. Three-Dimensional Volume Calculation of Intrachoroidal Cavitation Using Deep-Learning-Based Noise Reduction of Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:1. [PMID: 35802370] [PMCID: PMC9279919] [DOI: 10.1167/tvst.11.7.1]
Abstract
Purpose Intrachoroidal cavitations (ICCs) are peripapillary pathological lesions generally associated with high myopia that can cause visual field (VF) defects. The current study aimed to evaluate a three-dimensional (3D) volume parameter of ICCs segmented from volumetric swept-source optical coherence tomography (SS-OCT) images processed using deep learning (DL)-based noise reduction and to investigate its correlation with VF sensitivity. Methods Thirteen eyes of 12 consecutive patients with peripapillary ICCs were enrolled. DL-based denoising and further analyses were applied to parapapillary 6 × 6-mm volumetric SS-OCT scans. Then, 3D ICC volume and two-dimensional depth and length measurements of the ICCs were calculated. The correlations between ICC parameters and VF sensitivity were investigated. Results The ICCs were located in the inferior hemiretina in all eyes. ICC volume (P = 0.02; regression coefficient [RC], −0.007) and ICC length (P = 0.04; RC, −4.51) were negatively correlated with the VF mean deviation, whereas ICC depth (P = 0.15) was not. All of the parameters, including ICC volume (P = 0.01; RC, −0.004), ICC depth (P = 0.02; RC, −0.008), and ICC length (P = 0.045; RC, −2.11), were negatively correlated with the superior mean total deviation. Conclusions We established the volume of ICCs as a new 3D parameter, and it reflected their influence on visual function. The automatic delineation and 3D rendering may lead to improved detection and pathological understanding of ICCs. Translational Relevance This study demonstrated the correlation between the 3D volume of ICCs and VF sensitivity.
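The 3D ICC volume described in this abstract is, at bottom, a voxel-counting measurement on a binary segmentation mask. A minimal sketch (function name and voxel spacings are invented for illustration; they are not the study's values):

```python
import numpy as np

def region_volume_mm3(mask, voxel_mm=(0.012, 0.012, 0.0026)):
    """Volume of a binary segmentation mask: voxel count times voxel size.

    voxel_mm is a made-up (x, y, z) spacing, not the scanner's actual spacing.
    """
    return mask.sum() * np.prod(voxel_mm)

# A hypothetical 100 x 100 x 20-voxel lesion inside a 512 x 512 x 100 volume.
mask = np.zeros((512, 512, 100), dtype=bool)
mask[200:300, 200:300, 40:60] = True
print(round(region_volume_mm3(mask), 4))
```

Two-dimensional depth and length reduce to the same bookkeeping along single axes of the mask.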
Affiliation(s)
- Satoko Fujimoto
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Osaka, Japan; Hawaii Macula and Retina Institute, Aiea, HI, USA
- Atsuya Miki
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Osaka, Japan; Department of Myopia Control Research, Aichi Medical University Medical School, Aichi, Japan
- Kazuichi Maruyama
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Osaka, Japan; Department of Vision Informatics, Osaka University Graduate School of Medicine, Osaka, Japan
- Song Mei
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Zaixing Mao
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Zhenguo Wang
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Kinpui Chan
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ, USA
- Kohji Nishida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Osaka, Japan; Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
13
Marques R, Andrade De Jesus D, Barbosa-Breda J, Van Eijgen J, Stalmans I, van Walsum T, Klein S, G Vaz P, Sánchez Brea L. Automatic Segmentation of the Optic Nerve Head Region in Optical Coherence Tomography: A Methodological Review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106801. [PMID: 35429812 DOI: 10.1016/j.cmpb.2022.106801] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 03/07/2022] [Accepted: 04/01/2022] [Indexed: 06/14/2023]
Abstract
The optic nerve head (ONH) represents the intraocular section of the optic nerve, which is prone to damage by intraocular pressure (IOP). The advent of optical coherence tomography (OCT) has enabled the evaluation of novel ONH parameters, namely the depth and curvature of the lamina cribrosa (LC). Together with the Bruch's membrane opening minimum rim width (BMO-MRW), these seem to be promising ONH parameters for the diagnosis and monitoring of retinal diseases such as glaucoma. Nonetheless, these OCT-derived biomarkers are mostly extracted through manual segmentation, which is time-consuming and prone to bias, thus limiting their usability in clinical practice. The automatic segmentation of the ONH in OCT scans could further improve the current clinical management of glaucoma and other diseases. This review summarizes the current state of the art in automatic segmentation of the ONH in OCT. PubMed and Scopus were used to perform a systematic review. Additional works from other databases (IEEE, Google Scholar and ARVO IOVS) were also included, resulting in a total of 29 reviewed studies. For each algorithm, the methods, the size and type of dataset used for validation, and the respective results were carefully analysed. The results show a lack of consensus regarding the definition of segmented regions, extracted parameters, and validation approaches, highlighting the importance and need of standardized methodologies for ONH segmentation. Only with a concrete set of guidelines will these automatic segmentation algorithms build trust in data-driven segmentation models and be able to enter clinical practice.
Affiliation(s)
- Rita Marques
- Laboratory for Instrumentation, Biomedical Engineering and Radiation Physics (LIBPhys-UC), Department of Physics, University of Coimbra, Coimbra, Portugal; Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Danilo Andrade De Jesus
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Porto, Portugal; Ophthalmology Department, São João University Hospital Center, Porto, Portugal
- Jan Van Eijgen
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
- Pedro G Vaz
- Laboratory for Instrumentation, Biomedical Engineering and Radiation Physics (LIBPhys-UC), Department of Physics, University of Coimbra, Coimbra, Portugal
- Luisa Sánchez Brea
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, Netherlands
14
Denoising Swept Source Optical Coherence Tomography Volumetric Scans Using a Deep Learning Model. Retina 2022; 42:450-455. [PMID: 35175017 DOI: 10.1097/iae.0000000000003348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE To evaluate the use of a deep learning noise reduction model on swept source optical coherence tomography volumetric scans. METHODS Three groups of images including single-line highly averaged foveal scans (averaged images), foveal B-scans from volumetric scans using no averaging (unaveraged images), and deep learning denoised versions of the latter (denoised images) were obtained. We evaluated the potential increase in the signal-to-noise ratio by evaluating the contrast-to-noise ratio of the resultant images and measured the multiscale structural similarity index to determine whether the unaveraged and denoised images held true in structure to the averaged images. We evaluated the practical effects of denoising on a popular metric of choroidal vascularity known as the choroidal vascularity index. RESULTS Ten eyes of 10 subjects with a mean age of 31 years (range 24-64 years) were evaluated. The deep choroidal contrast-to-noise ratio mean values of the averaged and denoised image groups were similar (7.06 vs. 6.81, P = 0.75), and both groups had better maximum contrast-to-noise ratio mean values (27.65 and 46.34) than the unaveraged group (14.75; P = 0.001 and P < 0.001, respectively). The mean multiscale structural similarity index of the averaged-denoised images was significantly higher than that of the averaged-unaveraged images (0.85 vs. 0.61, P < 0.001). Choroidal vascularity index values from averaged and denoised images were similar (71.81 vs. 71.16, P = 0.554). CONCLUSION Using three different metrics, we demonstrated that the deep learning denoising model can produce high-quality images that emulate, and may exceed, the quality of highly averaged scans.
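The contrast-to-noise ratio used above to compare averaged, unaveraged, and denoised scans can be sketched in a few lines (one common CNR definition, with arbitrary synthetic regions standing in for the choroidal and background ROIs):

```python
import numpy as np

def cnr(img, roi_fg, roi_bg):
    """Contrast-to-noise ratio between a foreground and a background region."""
    fg, bg = img[roi_fg], img[roi_bg]
    return abs(fg.mean() - bg.mean()) / np.sqrt(fg.var() + bg.var())

rng = np.random.default_rng(1)
img = rng.normal(50.0, 5.0, (128, 128))   # noisy background
img[32:64, 32:64] += 30.0                 # brighter "tissue" patch
fg = (slice(32, 64), slice(32, 64))
bg = (slice(96, 128), slice(96, 128))
print(round(cnr(img, fg, bg), 2))
```

Denoising reduces the variance terms in the denominator, which is why the denoised group's CNR approaches that of the heavily averaged scans.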
15
Rico-Jimenez JJ, Hu D, Tang EM, Oguz I, Tao YK. Real-time OCT image denoising using a self-fusion neural network. BIOMEDICAL OPTICS EXPRESS 2022; 13:1398-1409. [PMID: 35415003 PMCID: PMC8973187 DOI: 10.1364/boe.451029] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 01/20/2022] [Accepted: 02/06/2022] [Indexed: 06/07/2023]
Abstract
Optical coherence tomography (OCT) has become the gold standard for ophthalmic diagnostic imaging. However, clinical OCT image quality is highly variable, and limited visualization can introduce errors in the quantitative analysis of anatomic and pathologic features of interest. Frame averaging is a standard method for improving image quality; however, frame averaging in the presence of bulk motion can degrade lateral resolution and prolong total acquisition time. We recently introduced a method called self-fusion, which reduces speckle noise and enhances OCT signal-to-noise ratio (SNR) by using similarity between adjacent frames and is more robust to motion artifacts than frame averaging. However, since self-fusion is based on deformable registration, it is computationally expensive. In this study, a convolutional neural network was implemented to offset the computational overhead of self-fusion and perform OCT denoising in real time. The self-fusion network was pretrained to fuse 3 frames to achieve near video-rate frame rates. Our results showed a clear gain in peak SNR in the self-fused images over both the raw and frame-averaged OCT B-scans. This approach delivers a fast and robust OCT denoising alternative to frame averaging without the need for repeated image acquisition. Real-time self-fusion image enhancement will enable improved localization of the OCT field of view relative to features of interest and improved sensitivity for anatomic features of disease.
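The peak-SNR gain reported above can be illustrated with a minimal sketch (assuming the usual 20·log10(peak/RMSE) definition; the synthetic "denoised" image here is a stand-in for the network's output, not the authors' model):

```python
import numpy as np

def psnr(reference, test):
    """Peak SNR in dB: 20*log10(peak value / root-mean-square error)."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return 20 * np.log10(reference.max() / rmse)

rng = np.random.default_rng(2)
clean = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)      # raw B-scan stand-in
denoised = clean + rng.normal(0.0, 0.02, clean.shape)  # lower residual noise
print(round(psnr(clean, noisy), 1), round(psnr(clean, denoised), 1))
```

Any denoiser that shrinks the residual error raises this number; the paper's contribution is achieving that gain at near video rate.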
Affiliation(s)
- Jose J. Rico-Jimenez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37232, USA
- Dewei Hu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Eric M. Tang
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37232, USA
- Ipek Oguz
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Yuankai K. Tao
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37232, USA
16
Maruyama K, Mei S, Sakaguchi H, Hara C, Miki A, Mao Z, Kawasaki R, Wang Z, Sakimoto S, Hashida N, Quantock AJ, Chan K, Nishida K. Diagnosis of Choroidal Disease With Deep Learning-Based Image Enhancement and Volumetric Quantification of Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:22. [PMID: 35029631 PMCID: PMC8762713 DOI: 10.1167/tvst.11.1.22] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 12/10/2021] [Indexed: 12/24/2022] Open
Abstract
Purpose The purpose of this study was to quantify choroidal vessels (CVs) in pathological eyes in three dimensions (3D) using optical coherence tomography (OCT) and a deep-learning analysis. Methods In a single-center retrospective study, 34 eyes of 34 patients (7 women and 27 men) with treatment-naïve central serous chorioretinopathy (CSC) and 33 eyes of 17 patients (7 women and 10 men) with Vogt-Koyanagi-Harada disease (VKH) or sympathetic ophthalmia (SO) were imaged consecutively between October 2012 and May 2019 with swept-source OCT. Seventy-seven eyes of 39 age-matched volunteers (26 women and 13 men) with no sign of ocular pathology were imaged for comparison. A deep-learning-based image enhancement pipeline enabled CV segmentation and visualization in 3D, after which quantitative vessel volume maps were acquired to compare normal and diseased eyes and to track the clinical course of eyes in the disease group. Region-based vessel volumes and vessel indices were utilized for disease diagnosis. Results OCT-based CV volume maps disclose regional CV changes in patients with CSC, VKH, or SO. Three metrics, (i) choroidal volume, (ii) CV volume, and (iii) CV index, exhibit high sensitivity and specificity in discriminating pathological choroids from healthy ones. Conclusions The deep-learning analysis of OCT images described here provides a 3D visualization of the choroid and allows quantification of features in the datasets to identify choroidal disease and distinguish between different diseases. Translational Relevance This novel analysis can be applied retrospectively to existing OCT datasets, and it represents a significant advance toward the automated diagnosis of choroidal pathologies based on observations and quantifications of the vasculature.
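Once the choroid and its vessels are segmented as binary masks, the three metrics named in this abstract reduce to simple voxel bookkeeping. A hedged sketch (function name, mask shapes, and the voxel size are invented for illustration):

```python
import numpy as np

def choroid_metrics(choroid_mask, vessel_mask, voxel_mm3=3.5e-7):
    """Choroidal volume, choroidal vessel volume, and vessel index.

    voxel_mm3 is an assumed per-voxel volume, not a scanner-specific value.
    """
    cv = choroid_mask.sum() * voxel_mm3                    # choroidal volume
    cvv = (vessel_mask & choroid_mask).sum() * voxel_mm3   # vessel volume
    vi = cvv / cv if cv else 0.0                           # vessel index
    return cv, cvv, vi

choroid = np.ones((64, 64, 32), dtype=bool)
vessels = np.zeros_like(choroid)
vessels[:, :, :16] = True        # vessels fill half the choroidal slab
cv, cvv, vi = choroid_metrics(choroid, vessels)
print(round(vi, 2))
```

Region-based variants simply restrict both masks to a sub-grid (e.g., an ETDRS-style sector) before counting.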
Affiliation(s)
- Kazuichi Maruyama
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, Japan
- Song Mei
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey, USA
- Hirokazu Sakaguchi
- Department of Advanced Device Medicine, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Chikako Hara
- Department of Advanced Device Medicine, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Atsuya Miki
- Department of Innovative Visual Science, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Zaixing Mao
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey, USA
- Ryo Kawasaki
- Department of Vision Informatics, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Suita, Osaka, Japan
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Zhenguo Wang
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey, USA
- Susumu Sakimoto
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Noriyasu Hashida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Andrew J. Quantock
- Structural Biophysics Group, School of Optometry and Vision Sciences, College of Biomedical and Life Sciences, Cardiff University, Cardiff, Wales, UK
- Kinpui Chan
- Topcon Advanced Biomedical Imaging Laboratory, Oakland, New Jersey, USA
- Kohji Nishida
- Department of Ophthalmology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, Japan
17
Ohno-Matsui K, Takahashi H, Mao Z, Nakao N. Determining posterior vitreous structure by analysis of images obtained by AI-based 3D segmentation and ultrawidefield optical coherence tomography. Br J Ophthalmol 2021; 107:732-737. [PMID: 34933898 DOI: 10.1136/bjophthalmol-2021-320131] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Accepted: 12/02/2021] [Indexed: 11/04/2022]
Abstract
AIMS To determine the three-dimensional (3D) structure of the vitreous fluid, including the posterior precortical vitreous pockets (PPVPs), Cloquet's canal, and cisterns, in healthy subjects by AI-based segmentation of the vitreous in swept-source optical coherence tomography (OCT) images, and to analyse the vitreous structures over a wide and deep area using ultrawidefield swept-source OCT (UWF-OCT). METHODS Ten eyes of six patients (mean age, 40.7±8.4 years; mean refractive error [spherical equivalent], -3.275±2.2 diopters) were examined. RESULTS In the UWF-OCT images, the structure of the vitreous was observed in detail over an area 23 mm wide and 5 mm deep. AI-guided analyses showed the complex 3D vitreous structures from any angle. Cisterns were observed to overlie the PPVP anteriorly. The morphology and locations of the cisterns varied among the subjects but tended to be similar in the two eyes of one individual. Cisterns joined the PPVPs superior to the macula to form a large trunk. This joined trunk was clearly seen in 3D images even in eyes whose trunk was not detected in the B-scan OCT images. In some eyes, the vitreous had a complex appearance resembling an ant nest without large fluid-filled spaces. CONCLUSIONS A combination of UWF-OCT and 3D imaging is very helpful in visualising the complex structure of the vitreous. These technologies are powerful tools that can be used to clarify the normal evolution of the vitreous, pathological changes of the vitreous, and the implications of vitreous changes in various vitreoretinal diseases.
Affiliation(s)
- Kyoko Ohno-Matsui
- Ophthalmology & Visual Science, Tokyo Medical and Dental University, Bunkyo-ku, Japan
- Hiroyuki Takahashi
- Ophthalmology & Visual Science, Tokyo Medical and Dental University, Bunkyo-ku, Japan
- Noriko Nakao
- Ophthalmology & Visual Science, Tokyo Medical and Dental University, Bunkyo-ku, Japan
18
Rahman MH, Jeong HW, Kim NR, Kim DY. Automatic Quantification of Anterior Lamina Cribrosa Structures in Optical Coherence Tomography Using a Two-Stage CNN Framework. SENSORS (BASEL, SWITZERLAND) 2021; 21:5383. [PMID: 34450823 PMCID: PMC8400634 DOI: 10.3390/s21165383] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 07/28/2021] [Accepted: 08/03/2021] [Indexed: 11/17/2022]
Abstract
In this study, we propose a new intelligent system to automatically quantify morphological parameters of the lamina cribrosa (LC), including depth, curve depth, and curve index, from optical coherence tomography (OCT) images. The proposed system consisted of a two-stage deep learning (DL) model, composed of a detection model and a segmentation model, together with a quantification process and a post-processing scheme. The models were used to solve the class imbalance problem and obtain Bruch's membrane opening (BMO) as well as anterior LC information. The detection model was implemented using YOLOv3 to acquire BMO and LC position information. The Attention U-Net segmentation model was used to compute accurate locations of the BMO and the LC curve. In addition, post-processing using polynomial regression was applied to obtain the anterior LC curve boundary. Finally, the numerical values of the morphological parameters were quantified from the BMO and LC curve information using an image processing algorithm. The average precision values for the detection of BMO and LC information were 99.92% and 99.18%, respectively, which is very accurate. A high correlation of R² = 0.96 between the predicted and ground-truth values was obtained, supporting the quantification results. The proposed system thus performs fully automatic and accurate quantification of BMO and LC morphological parameters using a DL model.
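The R² agreement statistic quoted above is the standard coefficient of determination; a minimal sketch with invented example values (not the study's data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between measured and predicted values."""
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical LC-depth values (arbitrary units) for illustration only.
y_true = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
y_pred = np.array([10.2, 11.8, 14.1, 15.9, 18.3])
print(round(r_squared(y_true, y_pred), 3))
```

Values near 1 indicate that the automated measurements track the ground truth almost perfectly, as the paper reports.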
Affiliation(s)
- Md Habibur Rahman
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Hyeon Woo Jeong
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Na Rae Kim
- Department of Ophthalmology, Inha University, Incheon 22212, Korea
- Dae Yu Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Inha Research Institute for Aerospace Medicine, Inha University, Incheon 22212, Korea
- Center for Sensor Systems, Inha University, Incheon 22212, Korea
19
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 04/02/2021] [Accepted: 04/15/2021] [Indexed: 01/02/2023]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
20
Huang Y, Zhang N, Hao Q. Real-time noise reduction based on ground truth free deep learning for optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2021; 12:2027-2040. [PMID: 33996214 PMCID: PMC8086449 DOI: 10.1364/boe.419584] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 02/27/2021] [Accepted: 03/08/2021] [Indexed: 06/07/2023]
Abstract
Optical coherence tomography (OCT) is a high-resolution non-invasive 3D imaging modality, which has been widely used for biomedical research and clinical studies. The presence of noise in OCT images is inevitable and causes problems for post-processing and diagnosis. The frame-averaging technique, which acquires multiple OCT images at the same or adjacent locations, can enhance image quality significantly. Both conventional frame-averaging methods and deep learning-based methods using averaged frames as ground truth have been reported. However, conventional averaging methods suffer from long image acquisition times, while deep learning-based methods require complicated and tedious ground-truth label preparation. In this work, we report a deep learning-based noise reduction method that does not require clean images as ground truth for model training. Three network structures, including Unet, super-resolution residual network (SRResNet), and our modified asymmetric convolution-SRResNet (AC-SRResNet), were trained and evaluated using signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), edge preservation index (EPI), and computation time (CT). The effectiveness of these three trained models on OCT images of different samples and different systems was also investigated and confirmed. The SNR improvements for different sample images for L2-loss-trained Unet, SRResNet, and AC-SRResNet are 20.83 dB, 24.88 dB, and 22.19 dB, respectively. The SNR improvements for public images from a different system for L1-loss-trained Unet, SRResNet, and AC-SRResNet are 19.36 dB, 20.11 dB, and 22.15 dB, respectively. AC-SRResNet and SRResNet demonstrate a better denoising effect than Unet, with longer computation times. AC-SRResNet demonstrates better edge preservation capability than SRResNet, while Unet is close to AC-SRResNet. Eventually, we incorporated Unet, SRResNet, and AC-SRResNet into our graphics-processing-unit-accelerated OCT imaging system for online noise reduction evaluation. Real-time noise reduction for 512×512-pixel OCT images was achieved at 64 fps, 19 fps, and 17 fps for Unet, SRResNet, and AC-SRResNet, respectively.
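The dB figures quoted in this abstract are differences of SNR values on a logarithmic scale; a minimal sketch of that bookkeeping (the power-ratio SNR definition here is an assumption, and the power values are invented):

```python
import numpy as np

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels from power values."""
    return 10 * np.log10(signal_power / noise_power)

before = snr_db(1.0, 1e-2)       # noisy input
after = snr_db(1.0, 1e-4)        # after denoising: 100x less noise power
print(round(after - before, 1))  # SNR improvement in dB
```

Each factor-of-10 reduction in noise power adds 10 dB, so a ~20 dB improvement corresponds to roughly a hundredfold drop in noise power.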
21
Zhang H, Yang J, Zhou K, Li F, Hu Y, Zhao Y, Zheng C, Zhang X, Liu J. Automatic Segmentation and Visualization of Choroid in OCT with Knowledge Infused Deep Learning. IEEE J Biomed Health Inform 2020; 24:3408-3420. [PMID: 32931435 DOI: 10.1109/jbhi.2020.3023144] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The choroid provides oxygen and nourishment to the outer retina and is thus related to the pathology of various ocular diseases. Optical coherence tomography (OCT) is advantageous in visualizing and quantifying the choroid in vivo. However, its application in the study of the choroid is still limited for two reasons. (1) The lower boundary of the choroid (the choroid-sclera interface) in OCT is fuzzy, which makes automatic segmentation difficult and inaccurate. (2) The visualization of the choroid is hindered by the vessel shadows from the superficial layers of the inner retina. In this paper, we propose to incorporate medical and imaging prior knowledge with deep learning to address these two problems. We propose a biomarker-infused global-to-local network (Bio-Net) for choroid segmentation, which not only regularizes the segmentation via predicted choroid thickness, but also leverages a global-to-local segmentation strategy to provide global structure information and suppress overfitting. For eliminating the retinal vessel shadows, we propose a deep-learning pipeline that first locates the shadows using their projection on the retinal pigment epithelium layer; the contents of the choroidal vasculature at the shadow locations are then predicted with an edge-to-texture generative adversarial inpainting network. The results show that our method outperforms the existing methods on both tasks. We further apply the proposed method in a clinical prospective study for understanding the pathology of glaucoma, which demonstrates its capacity to detect the structural and vascular changes of the choroid related to the elevation of intra-ocular pressure.
22
Girard MJA, Schmetterer L. Artificial intelligence and deep learning in glaucoma: Current state and future prospects. PROGRESS IN BRAIN RESEARCH 2020; 257:37-64. [PMID: 32988472 DOI: 10.1016/bs.pbr.2020.07.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Over the past few years, there has been unprecedented excitement for artificial intelligence (AI) research in the field of Ophthalmology; this has naturally been translated to glaucoma, a progressive optic neuropathy characterized by retinal ganglion cell axon loss and associated visual field defects. In this review, we aim to discuss how AI may have a unique opportunity to tackle the many challenges faced in the glaucoma clinic. This is because glaucoma remains poorly understood, and accurate, timely diagnosis and prognosis are difficult to provide. In the short term, AI could also become a game changer by paving the way for the first cost-effective glaucoma screening campaigns. While there are undeniable technical and clinical challenges ahead, more so than for other ophthalmic disorders where AI is already booming, we strongly believe that glaucoma specialists should embrace AI as a companion to their practice. Finally, this review will also remind us that glaucoma is a complex group of disorders with a multitude of physiological manifestations that cannot yet be observed clinically. AI in glaucoma is here to stay, but it will not be the only tool to solve glaucoma.
Affiliation(s)
- Michaël J A Girard
- Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Leopold Schmetterer
- Ocular Imaging, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Clinical and Experimental Ophthalmology, Basel, Switzerland
23
Sorrentino FS, Jurman G, De Nadai K, Campa C, Furlanello C, Parmeggiani F. Application of Artificial Intelligence in Targeting Retinal Diseases. Curr Drug Targets 2020; 21:1208-1215. [PMID: 32640954 DOI: 10.2174/1389450121666200708120646] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Revised: 04/20/2020] [Accepted: 04/20/2020] [Indexed: 01/17/2023]
Abstract
Retinal diseases affect an increasing number of patients worldwide because of the aging population. Demand for diagnostic imaging in ophthalmology is ramping up, while the number of specialists keeps shrinking. Cutting-edge technologies embedding artificial intelligence (AI) algorithms are thus advocated to help ophthalmologists perform their clinical tasks and to provide a source for the advancement of novel biomarkers. In particular, optical coherence tomography (OCT) evaluation of the retina can be augmented by machine learning and deep learning algorithms to detect early, localize qualitatively, and measure quantitatively epi-, intra-, or subretinal abnormalities and pathological features of macular or neural diseases. In this paper, we discuss the use of AI to improve the efficacy and accuracy of retinal imaging in diseases increasingly treated by intravitreal vascular endothelial growth factor (VEGF) inhibitors (i.e. anti-VEGF drugs), including integration and interpretation features in the process. We review recent AI advances in diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity that point to a potentially key role for highly automated systems in screening, early diagnosis, grading, and individualized therapy. We discuss benefits and critical aspects of automating the evaluation of disease activity, recurrences, the timing of retreatment, and therapeutically promising novel targets in ophthalmology. The impact of widespread employment of AI to optimize clinical assistance and encourage tailored therapies for distinct patterns of retinal diseases is also discussed.
Affiliation(s)
- Giuseppe Jurman
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
- Katia De Nadai
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
- Claudio Campa
- Department of Surgical Specialties, Sant'Anna Hospital, Azienda Ospedaliero Universitaria di Ferrara, Ferrara, Italy
- Cesare Furlanello
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
- Francesco Parmeggiani
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
24. Fan Y, Ma Q, Xin S, Peng R, Kang H. Quantitative and Qualitative Evaluation of Supercontinuum Laser-Induced Cutaneous Thermal Injuries and Their Repair With OCT Images. Lasers Surg Med 2020. [DOI: 10.1002/lsm.23287]
Affiliation(s)
- Yingwei Fan
- Beijing Institute of Radiation Medicine, Beijing 100850, China
- Qiong Ma
- Beijing Institute of Radiation Medicine, Beijing 100850, China
- Shenghai Xin
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Ruiyun Peng
- Beijing Institute of Radiation Medicine, Beijing 100850, China
- Hongxiang Kang
- Beijing Institute of Radiation Medicine, Beijing 100850, China