1
Wang XN, Li S, Cai X, Li T, Long D, Wu Q. Imaging Artifacts and Quality Evaluation with Ultrawide-Field Swept-Source OCTA in Diabetic Retinopathy. Curr Eye Res 2024;49:410-416. [PMID: 38116796] [DOI: 10.1080/02713683.2023.2296362]
Abstract
PURPOSE To evaluate the prevalence and types of artifacts in ultrawide-field swept-source optical coherence tomography angiography (SS-OCTA) scans of diabetic retinopathy (DR) patients. METHODS This was a prospective, observational study conducted from May 2022 to October 2022. Participants comprised individuals with proliferative diabetic retinopathy (PDR), nonproliferative diabetic retinopathy (NPDR), diabetes without retinopathy, and healthy controls. SS-OCTA imaging was performed, and a 5-scan composite with a larger field of view (23.5 mm × 17.5 mm) was captured using built-in software. Two experienced ophthalmologists analyzed the images independently, and image quality and artifact prevalence were recorded and analyzed. RESULTS The study included 70 eyes of 70 subjects (16 with PDR, 24 with NPDR, 12 eyes of diabetic patients without DR, and 18 healthy eyes). Imaging artifacts were common: 98.57% of eyes presented at least one type of artifact, and a substantial proportion (58.57%) exhibited a severe degree of artifacts. The most prevalent artifacts were loss of signal in 63 eyes (90%) and displacement and masking artifacts in 43 eyes (61.4%). Patients with more severe stages of DR had higher artifact scores (p < 0.05), and multivariate regression analysis indicated that DR severity was the most important factor influencing artifact scores (p < 0.05). CONCLUSIONS Various artifacts arise at different frequencies in OCTA images, and it is crucial to qualitatively evaluate the images to ensure their quality. The results demonstrate that DR severity is significantly correlated with artifact scores.
Affiliation(s)
- Xiang-Ning Wang, Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai, China
- Shuting Li, Department of Ophthalmology, The First People's Hospital of Changzhou, Changzhou, China
- Xuan Cai, Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai, China
- Tingting Li, Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai, China
- Da Long, Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai, China
- Qiang Wu, Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai, China
2
Neuhaus K, Khan S, Thaware O, Ni S, Aga M, Jia Y, Redd T, Chen S, Huang D, Jian Y. Real-time line-field optical coherence tomography for cellular resolution imaging of biological tissue. Biomed Opt Express 2024;15:1059-1073. [PMID: 38404311] [PMCID: PMC10890841] [DOI: 10.1364/boe.511187]
Abstract
A real-time line-field optical coherence tomography (LF-OCT) system is demonstrated with image acquisition rates of up to 5000 B-frames or 2.5 million A-lines per second at 500 A-lines per B-frame. The system uses a high-speed, low-cost camera to achieve the continuous data-transfer rates required for real-time imaging, allowing the evaluation of future applications in clinical or intraoperative environments. The light source is an 840 nm superluminescent diode. Leveraging GPU parallel computing and a high-speed CoaXPress data-transfer interface, we were able to acquire, process, and display OCT data with low latency. The system uses anamorphic beam shaping in the detector arm to optimize the field of view and sensitivity for imaging biological tissue at cellular resolution. The lateral and axial resolutions measured in air were 1.7 µm and 6.3 µm, respectively. Experimental results demonstrate real-time inspection of the trabecular meshwork and Schlemm's canal in ex vivo corneoscleral wedges and real-time imaging of endothelial cells of human subjects in vivo.
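The acquire-process-display chain described above rests on a standard spectral-domain OCT reconstruction. A minimal sketch of that reconstruction, assuming a simple pipeline of fixed-pattern background subtraction, windowing, and an inverse FFT along wavenumber (the paper's GPU implementation also performs steps such as k-linearization and dispersion compensation, omitted here; names are illustrative):

```python
import numpy as np

def process_bscan(spectra):
    """Reconstruct one OCT B-scan from raw spectral interferograms.

    spectra: (n_alines, n_pixels) array, one camera spectrum per A-line.
    Minimal spectral-domain pipeline: background removal, windowing to
    suppress sidelobes, inverse FFT along wavenumber, log-scale magnitude.
    """
    dc = spectra.mean(axis=0, keepdims=True)       # fixed-pattern background
    fringe = (spectra - dc) * np.hanning(spectra.shape[1])
    depth = np.fft.ifft(fringe, axis=1)            # depth profile per A-line
    half = depth[:, : spectra.shape[1] // 2]       # keep one side of the FFT
    return 20 * np.log10(np.abs(half) + 1e-12)     # intensity in dB
```

A fringe oscillating at k cycles across the spectrum reconstructs to a reflector at depth bin k, which is how the depth axis of each B-frame arises.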
Affiliation(s)
- Kai Neuhaus, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Shanjida Khan, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Omkar Thaware, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Shuibin Ni, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Mini Aga, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Travis Redd, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Siyu Chen, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- David Huang, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Yifan Jian, Casey Eye Institute and Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
3
Ni S, Nguyen TTP, Ng R, Woodward M, Ostmo S, Jia Y, Chiang MF, Huang D, Skalet AH, Campbell JP, Jian Y. Panretinal Optical Coherence Tomography. IEEE Trans Med Imaging 2023;42:3219-3228. [PMID: 37216244] [PMCID: PMC10615839] [DOI: 10.1109/tmi.2023.3278269]
Abstract
We introduce a new concept for a panoramic retinal (panretinal) optical coherence tomography (OCT) imaging system with a 140° field of view (FOV). To achieve this unprecedented FOV, a contact imaging approach was used, which enabled faster, more efficient, and quantitative retinal imaging with measurement of axial eye length. The handheld panretinal OCT imaging system could allow earlier recognition of peripheral retinal disease and prevent permanent vision loss. In addition, adequate visualization of the peripheral retina has great potential to improve understanding of disease mechanisms involving the periphery. To the best of our knowledge, the panretinal OCT imaging system presented in this manuscript has the widest FOV among retinal OCT imaging systems and offers significant value in both clinical ophthalmology and basic vision science.
4
Trout RM, Viehland C, Li JD, Raynor W, Dhalla AH, Vajzovic L, Kuo AN, Toth CA, Izatt JA. Methods for real-time feature-guided image fusion of intrasurgical volumetric optical coherence tomography with digital microscopy. Biomed Opt Express 2023;14:3308-3326. [PMID: 37497493] [PMCID: PMC10368056] [DOI: 10.1364/boe.488975]
Abstract
4D microscope-integrated optical coherence tomography (4D-MIOCT) is an emergent multimodal imaging technology in which live volumetric OCT (4D-OCT) is implemented in tandem with standard stereo color microscopy. 4D-OCT provides ophthalmic surgeons with many useful visual cues not available in standard microscopy; however, it is challenging for the surgeon to effectively integrate cues from simultaneous but separate imaging in real time. In this work, we demonstrate progress toward solving this challenge via the fusion of data from each modality, guided by segmented 3D features. In this way, a more readily interpretable visualization that combines and registers important cues from both modalities is presented to the surgeon.
Affiliation(s)
- Robert M. Trout, Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Christian Viehland, Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Jianwei D. Li, Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- William Raynor, Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Al-Hafeez Dhalla, Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
- Lejla Vajzovic, Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Road, Durham, NC 27705, USA
- Anthony N. Kuo, Department of Biomedical Engineering, Duke University, and Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
- Cynthia A. Toth, Department of Biomedical Engineering, Duke University, and Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
- Joseph A. Izatt, Department of Biomedical Engineering, Duke University, 101 Science Drive, Durham, NC 27708, USA
5
Zhang H, Yang J, Zheng C, Zhao S, Zhang A. Annotation-efficient learning for OCT segmentation. Biomed Opt Express 2023;14:3294-3307. [PMID: 37497504] [PMCID: PMC10368022] [DOI: 10.1364/boe.486276]
Abstract
Deep learning has been successfully applied to OCT segmentation. However, for data from different manufacturers and imaging protocols, and for different regions of interest (ROIs), it requires laborious and time-consuming data annotation and training, which is undesirable in many scenarios, such as surgical navigation and multi-center clinical trials. Here we propose an annotation-efficient learning method for OCT segmentation that significantly reduces annotation costs. Leveraging self-supervised generative learning, we pre-train a Transformer-based model on OCT imagery. We then connect the trained Transformer-based encoder to a CNN-based decoder to learn dense pixel-wise prediction for OCT segmentation. These training phases use open-access data and thus incur no annotation costs, and the pre-trained model can be adapted to different data and ROIs without re-training. Based on the greedy approximation to the k-center problem, we also introduce an algorithm for selective annotation of the target data. We verified our method on publicly available and private OCT datasets. Compared to the widely used U-Net model trained with 100% of the training data, our method requires only ∼10% of the data to achieve the same segmentation accuracy, and it speeds up training by a factor of ∼3.5. Furthermore, the proposed method outperforms other potential strategies for improving annotation efficiency. We believe this emphasis on learning efficiency may help improve the intelligence and application penetration of OCT-based technologies.
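The selective-annotation step above is built on the greedy approximation to the k-center problem. A minimal sketch of that greedy rule, assuming samples are compared by Euclidean distance between feature vectors (the paper's actual feature space and distance metric are not specified here):

```python
import numpy as np

def greedy_k_center(features, k, seed=0):
    """Pick k samples whose feature vectors cover the dataset: the classic
    2-approximation for k-center, which repeatedly selects the sample
    farthest from the current selection. features: (n, d) array."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]            # arbitrary starting sample
    # distance of every sample to its nearest selected center so far
    d = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(d))                  # farthest-first choice
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```

Applied to unlabeled target scans, the selected indices are the ones a human would be asked to annotate, so the labeled subset spreads over the data distribution rather than clustering.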
Affiliation(s)
- Haoran Zhang, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianlong Yang, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ce Zheng, Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shiqing Zhao, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Aili Zhang, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
6
Bayhaqi YA, Hamidi A, Navarini AA, Cattin PC, Canbaz F, Zam A. Real-time closed-loop tissue-specific laser osteotomy using deep-learning-assisted optical coherence tomography. Biomed Opt Express 2023;14:2986-3002. [PMID: 37342720] [PMCID: PMC10278623] [DOI: 10.1364/boe.486660]
Abstract
This article presents a real-time noninvasive method for detecting bone and bone marrow in laser osteotomy, the first implementation of optical coherence tomography (OCT) as an online feedback system for laser osteotomy. A deep-learning model was trained to identify tissue types during laser ablation, with a test accuracy of 96.28%. For the hole-ablation experiments, the average maximum depth of perforation and volume loss were 0.216 mm and 0.077 mm³, respectively. The contactless nature of OCT, together with the reported performance, shows that it is becoming increasingly feasible to use OCT as a real-time feedback system for laser osteotomy.
Affiliation(s)
- Yakub A. Bayhaqi, Biomedical Laser and Optics Group (BLOG), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Arsham Hamidi, Biomedical Laser and Optics Group (BLOG), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Alexander A. Navarini, Digital Dermatology Group, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Philippe C. Cattin, Center for Medical Image Analysis and Navigation (CIAN), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Ferda Canbaz, Biomedical Laser and Optics Group (BLOG), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Azhar Zam, Biomedical Laser and Optics Group (BLOG), University of Basel; Division of Engineering, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates; Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
7
Huang Y, Asaria R, Stoyanov D, Sarunic M, Bano S. PseudoSegRT: efficient pseudo-labelling for intraoperative OCT segmentation. Int J Comput Assist Radiol Surg 2023. [PMID: 37233893] [PMCID: PMC10329588] [DOI: 10.1007/s11548-023-02928-9]
Abstract
PURPOSE Robotic ophthalmic microsurgery has significant potential to improve the success of challenging procedures and overcome the physical limitations of the surgeon. Intraoperative optical coherence tomography (iOCT) has been reported for the visualisation of ophthalmic surgical manoeuvres, where deep learning methods can be used for real-time tissue segmentation and surgical tool tracking. However, many of these methods rely heavily on labelled datasets, and producing annotated segmentation datasets is a time-consuming and tedious task. METHODS To address this challenge, we propose a robust and efficient semi-supervised method for boundary segmentation in retinal OCT to guide a robotic surgical system. The proposed method uses U-Net as the base model and implements a pseudo-labelling strategy that combines labelled data with unlabelled OCT scans during training. After training, the model is optimised and accelerated with TensorRT. RESULTS Compared with fully supervised learning, the pseudo-labelling method improves the generalisability of the model and shows better performance on unseen data from a different distribution, using only 2% of labelled training samples. The accelerated GPU inference takes less than 1 millisecond per frame at FP16 precision. CONCLUSION Our approach demonstrates the potential of pseudo-labelling strategies in real-time OCT segmentation tasks to guide robotic systems. Furthermore, the accelerated GPU inference of our network is highly promising for segmenting OCT images and guiding the position of a surgical tool (e.g. a needle) for sub-retinal injections.
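The pseudo-labelling idea, training on labelled data and folding the model's confident predictions on unlabelled scans back into the training set, can be sketched generically. This is a hedged illustration of the general self-training scheme, not PseudoSegRT itself: the model interface (`fit`/`predict_proba`), the confidence threshold, and the round count are assumptions, not the paper's settings.

```python
import numpy as np

def pseudo_label_rounds(model, x_lab, y_lab, x_unlab, thresh=0.9, rounds=2):
    """Self-training sketch: fit on labelled data, predict on unlabelled
    samples, and keep predictions above `thresh` as pseudo-labels for the
    next round. `model` must expose fit(X, y) and predict_proba(X)."""
    x_train, y_train = x_lab, y_lab
    for _ in range(rounds):
        model.fit(x_train, y_train)
        proba = model.predict_proba(x_unlab)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= thresh                    # confident pseudo-labels only
        x_train = np.concatenate([x_lab, x_unlab[keep]])
        y_train = np.concatenate([y_lab, pred[keep]])
    model.fit(x_train, y_train)
    return model
```

For segmentation, the same loop applies per pixel rather than per sample; the confidence threshold is what keeps noisy pseudo-labels from polluting training.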
Affiliation(s)
- Yu Huang, Department of Computer Science, University College London, London, UK
- Riaz Asaria, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, and Ophthalmology, Royal Free Hospital, London, UK
- Danail Stoyanov, Department of Computer Science and Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Marinko Sarunic, Institute of Ophthalmology and Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Sophia Bano, Department of Computer Science and Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
8
Dehghani S, Sommersperger M, Zhang P, Martin-Gomez A, Busam B, Gehlbach P, Navab N, Nasseri MA, Iordachita I. Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing. Proc IEEE Int Conf Robot Autom (ICRA) 2023;2023:4724-4731. [PMID: 38125032] [PMCID: PMC10732544] [DOI: 10.1109/icra48891.2023.10160372]
Abstract
In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection, a procedure that requires precise needle insertion for best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges, including, but not limited to, high demands on data-processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume-slicing approach for rapid instrument pose estimation enabled by Convolutional Neural Networks (CNNs). Our experiments on ex vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss the challenges identified in this work and suggest potential solutions to further the development of such systems.
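A "virtual B-scan" is a 2D cross-section resampled from the iOCT volume along an arbitrary plane. A minimal sketch of such volume slicing, using nearest-neighbour sampling (the paper's slicing is more sophisticated and feeds CNN-based pose estimation; the function signature and sampling scheme here are illustrative assumptions):

```python
import numpy as np

def virtual_bscan(volume, p0, u, v, n_u, n_v):
    """Sample an arbitrary plane from a 3D volume by nearest-neighbour
    lookup. p0 is the plane origin and u/v its in-plane axes, all in voxel
    index units; samples outside the volume return 0."""
    iu, iv = np.meshgrid(np.arange(n_u), np.arange(n_v), indexing="ij")
    pts = p0 + iu[..., None] * u + iv[..., None] * v   # (n_u, n_v, 3) points
    idx = np.rint(pts).astype(int)                      # nearest voxel index
    out = np.zeros((n_u, n_v), dtype=volume.dtype)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    ii = idx[inside]
    out[inside] = volume[ii[:, 0], ii[:, 1], ii[:, 2]]
    return out
```

Choosing u and v to cut through the instrument shaft yields a slice in which the needle tip is visible, which is what makes per-volume pose estimation fast: only a few 2D slices need to be processed instead of the full volume.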
Affiliation(s)
- Shervin Dehghani, Department of Computer Science, Technische Universität München, 85748 München, Germany, and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Michael Sommersperger, Department of Computer Science, Technische Universität München, 85748 München, Germany
- Peiyao Zhang, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Benjamin Busam, Department of Computer Science, Technische Universität München, 85748 München, Germany
- Peter Gehlbach, Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, USA
- Nassir Navab, Computer Aided Medical Procedures & Augmented Reality, Technical University of Munich, 85748 Munich, Germany, and adjunct professor, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- M. Ali Nasseri, Department of Computer Science, Technische Universität München, and Augenklinik und Poliklinik, Klinikum rechts der Isar der Technischen Universität München, 81675 München, Germany
- Iulian Iordachita, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
9
Wu Y, Olvera-Barrios A, Yanagihara R, Kung TPH, Lu R, Leung I, Mishra AV, Nussinovitch H, Grimaldi G, Blazes M, Lee CS, Egan C, Tufail A, Lee AY. Training Deep Learning Models to Work on Multiple Devices by Cross-Domain Learning with No Additional Annotations. Ophthalmology 2023;130:213-222. [PMID: 36154868] [PMCID: PMC9868052] [DOI: 10.1016/j.ophtha.2022.09.014]
Abstract
PURPOSE To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. DESIGN We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach, without labeled data from the latter manufacturer. PARTICIPANTS A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). METHODS We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) with access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 labeled Heidelberg OCTs and 556 raw, unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders independently segmented 60 OCTs from an external Topcon 1000 test dataset. To test the limits of GANSeg, the graders also manually segmented 3 OCTs from the Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net trained on the same labeled Heidelberg images served as the baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. MAIN OUTCOME MEASURES Dice scores comparing segmentation results from GANSeg and the U-Net model with the manually segmented images. RESULTS Although both GANSeg and the U-Net achieved Dice scores comparable to those of human experts on the labeled Heidelberg test dataset, only GANSeg remained comparable to human graders on the external Topcon 1000 test dataset, with its best performance for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and its worst for intraretinal fluid (58%; 95% CI, 18%-89%), the latter statistically similar to human graders (79%; 95% CI, 43%-94%). GANSeg significantly outperformed the U-Net model and, moreover, generalized to both the Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. CONCLUSIONS GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, greatly expanding the applicability of deep learning algorithms.
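The main outcome measure above, the Dice score, compares a predicted segmentation mask with a manual one: 2|A∩B| / (|A| + |B|) for each class. A minimal sketch for a single class label (the convention of returning 1.0 when the label is absent from both masks is an assumption, not stated in the abstract):

```python
import numpy as np

def dice(pred, truth, label):
    """Dice similarity for one class between two label masks of equal
    shape: 2|A∩B| / (|A| + |B|). Returns 1.0 if the label is absent
    from both masks (nothing to disagree about)."""
    a, b = pred == label, truth == label
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

Computed per retinal layer and per B-scan, these values are what get summarized with confidence intervals when comparing GANSeg, the U-Net, and the human graders.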
Affiliation(s)
- Yue Wu, Department of Ophthalmology, University of Washington, Seattle, Washington
- Abraham Olvera-Barrios, Moorfields Eye Hospital NHS Foundation Trust, and Institute of Ophthalmology, University College London, London, United Kingdom
- Ryan Yanagihara, Department of Ophthalmology, University of Washington, Seattle, Washington
- Randy Lu, Department of Ophthalmology, University of Washington, Seattle, Washington
- Irene Leung, Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Amit V Mishra, Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Gabriela Grimaldi, Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Marian Blazes, Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee, Department of Ophthalmology, University of Washington, and Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
- Catherine Egan, Moorfields Eye Hospital NHS Foundation Trust, and Institute of Ophthalmology, University College London, London, United Kingdom
- Adnan Tufail, Moorfields Eye Hospital NHS Foundation Trust, and Institute of Ophthalmology, University College London, London, United Kingdom
- Aaron Y Lee, Department of Ophthalmology, University of Washington, and Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
10
Ma D, Pasquale LR, Girard MJA, Leung CKS, Jia Y, Sarunic MV, Sappington RM, Chan KC. Reverse translation of artificial intelligence in glaucoma: Connecting basic science with clinical applications. Front Ophthalmol 2023;2:1057896. [PMID: 36866233] [PMCID: PMC9976697] [DOI: 10.3389/fopht.2022.1057896]
Abstract
Artificial intelligence (AI) has been applied to biomedical research in diverse areas, from bedside clinical studies to benchtop basic science. In ophthalmic research, and glaucoma in particular, AI applications with potential for clinical translation are growing rapidly given the vast data available and the introduction of federated learning. Conversely, AI for basic science remains limited despite its power to provide mechanistic insight. In this perspective, we discuss recent progress, opportunities, and challenges in applying AI to scientific discovery in glaucoma. Specifically, we focus on the research paradigm of reverse translation, in which clinical data are first used for patient-centered hypothesis generation, followed by basic science studies for hypothesis validation. We elaborate on several distinctive areas of research opportunity for reverse translation of AI in glaucoma, including disease risk and progression prediction, pathology characterization, and sub-phenotype identification. We conclude with current challenges and future opportunities for AI research in basic science for glaucoma, such as inter-species diversity, AI model generalizability and explainability, and AI applications using advanced ocular imaging and genomic data.
Affiliation(s)
- Da Ma, School of Medicine, Wake Forest University, and Atrium Health Wake Forest Baptist Medical Center, Winston-Salem, NC, United States; School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Louis R. Pasquale, Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Michaël J. A. Girard, Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre; Duke-NUS Medical School, Singapore; Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland
- Yali Jia, Casey Eye Institute, Oregon Health & Science University, Portland, OR, United States
- Marinko V. Sarunic, School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada; Institute of Ophthalmology, University College London, London, United Kingdom
- Rebecca M. Sappington, School of Medicine, Wake Forest University, and Atrium Health Wake Forest Baptist Medical Center, Winston-Salem, NC, United States
- Kevin C. Chan, Departments of Ophthalmology and Radiology, Neuroscience Institute, NYU Grossman School of Medicine, and Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY, United States
11
Nguyen TTP, Ni S, Ostmo S, Rajagopalan A, Coyner AS, Woodward M, Chiang MF, Jia Y, Huang D, Campbell JP, Jian Y. Association of Optical Coherence Tomography-Measured Fibrovascular Ridge Thickness and Clinical Disease Stage in Retinopathy of Prematurity. JAMA Ophthalmol 2022;140:2797385. [PMID: 36227622] [PMCID: PMC9562098] [DOI: 10.1001/jamaophthalmol.2022.4173]
Abstract
Importance Accurate diagnosis of retinopathy of prematurity (ROP) is essential to provide timely treatment and reduce the risk of blindness. However, the components of an ROP examination are subjective and qualitative. Objective To evaluate whether optical coherence tomography (OCT)-derived retinal thickness measurements at the vascular-avascular junction are associated with clinical diagnosis of ROP stage. Design, Setting, and Participants This cross-sectional longitudinal study compared OCT-based ridge thickness calculated from OCT B-scans by a masked examiner to the clinical diagnosis of 2 masked examiners using both traditional stage classifications and a more granular continuous scale at the neonatal intensive care unit (NICU) of Oregon Health & Science University (OHSU) Hospital. Infants who met ROP screening criteria in the OHSU NICU between June 2021 and April 2022 and had guardian consent were included. One OCT volume and en face image per patient per eye showing at least 1 to 2 clock hours of ridge were included in the final analysis. Main Outcomes and Measures Comparison of OCT-derived ridge thickness to the clinical diagnosis of ROP stage using an ordinal and continuous scale. Repeatability was assessed using 20 repeated examinations from the same visit and compared using intraclass correlation coefficient (ICC) and coefficient of variation (CV). Comparison of ridge thickness with ordinal categories was performed using generalized estimating equations and with continuous stage using Spearman correlation. Results A total of 128 separate OCT eye examinations from 50 eyes of 25 patients were analyzed. The ICC was 0.87 with a CV of 7.0%. Higher ordinal disease classification was associated with higher axial ridge thickness on OCT, with mean (SD) thickness measurements of 264.2 (11.2) μm (P < .001), 334.2 (11.4) μm (P < .001), and 495.0 (32.2) μm (P < .001) for stages 1, 2, and 3, respectively and with continuous stage labels (ρ = 0.739, P < .001). 
Conclusions and Relevance These results suggest that OCT-based quantification of peripheral stage in ROP may be an objective and quantitative biomarker that may be useful for clinical diagnosis and longitudinal monitoring and may have implications for disease classification in the future.
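The repeatability analysis above (ICC of 0.87 with a CV of 7.0% from 20 repeated examinations) rests on two standard agreement statistics. The following is a minimal sketch of how such metrics can be computed from a subjects × repeats matrix; the one-way random-effects ICC(1,1) form and the function name are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def repeatability_metrics(measurements):
    """One-way random-effects ICC(1,1) and within-subject coefficient of
    variation for a matrix of repeated measurements.

    measurements: array of shape (n_subjects, n_repeats).
    """
    m = np.asarray(measurements, dtype=float)
    n, k = m.shape
    grand_mean = m.mean()
    # Between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * np.sum((m.mean(axis=1) - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((m - m.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    # CV: within-subject SD relative to the grand mean
    cv = np.sqrt(ms_within) / grand_mean
    return icc, cv
```

With measurement noise that is small relative to between-subject differences (as in the thickness data above), the ICC approaches 1 and the CV stays in the single-digit-percent range.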
Affiliation(s)
- Shuibin Ni
- Casey Eye Institute, Oregon Health & Science University, Portland
- Department of Biomedical Engineering, Oregon Health & Science University, Portland
- Susan Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
- Mani Woodward
- Casey Eye Institute, Oregon Health & Science University, Portland
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland
- Department of Biomedical Engineering, Oregon Health & Science University, Portland
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland
- Department of Biomedical Engineering, Oregon Health & Science University, Portland
- Yifan Jian
- Casey Eye Institute, Oregon Health & Science University, Portland
- Department of Biomedical Engineering, Oregon Health & Science University, Portland

12

13
A comparison of deep learning U-Net architectures for posterior segment OCT retinal layer segmentation. Sci Rep 2022; 12:14888. [PMID: 36050364 PMCID: PMC9437058 DOI: 10.1038/s41598-022-18646-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 08/17/2022] [Indexed: 11/08/2022] Open
Abstract
Deep learning methods have enabled a fast, accurate and automated approach for retinal layer segmentation in posterior segment OCT images. Due to the success of semantic segmentation methods adopting the U-Net, a wide range of variants and improvements have been developed and applied to OCT segmentation. Unfortunately, the relative performance of these methods is difficult to ascertain for OCT retinal layer segmentation due to a lack of comprehensive comparative studies, and a lack of proper matching between networks in previous comparisons, as well as the use of different OCT datasets between studies. In this paper, a detailed and unbiased comparison is performed between eight U-Net architecture variants across four different OCT datasets from a range of different populations, ocular pathologies, acquisition parameters, instruments and segmentation tasks. The U-Net architecture variants evaluated include some which have not been previously explored for OCT segmentation. Using the Dice coefficient to evaluate segmentation performance, minimal differences were noted between most of the tested architectures across the four datasets. Using an extra convolutional layer per pooling block gave a small improvement in segmentation performance for all architectures across all four datasets. This finding highlights the importance of careful architecture comparison (e.g. ensuring networks are matched using an equivalent number of layers) to obtain a true and unbiased performance assessment of fully semantic models. Overall, this study demonstrates that the vanilla U-Net is sufficient for OCT retinal layer segmentation and that state-of-the-art methods and other architectural changes are potentially unnecessary for this particular task, especially given the associated increased complexity and slower speed for the marginal performance gains observed. 
Given that the U-Net model and its variants represent some of the most commonly applied image segmentation methods, the consistent findings across several datasets here are likely to translate to many other OCT datasets and studies. This will provide significant value by saving time and cost in experimentation and model development, as well as reducing inference time in practice through the selection of simpler models.
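The comparison above scores segmentation quality with the Dice coefficient. As a minimal, hypothetical sketch (not the authors' code), the per-class Dice between a predicted and a reference label map can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target, n_classes):
    """Per-class Dice scores between two integer label maps
    (e.g. retinal layer masks), Dice = 2|P ∩ T| / (|P| + |T|)."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    scores = []
    for c in range(n_classes):
        p = pred == c
        t = target == c
        denom = p.sum() + t.sum()
        # Convention: a class absent from both maps counts as perfect agreement
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom)
    return scores
```

Averaging the per-class scores over layers and B-scans gives a single number per architecture, which is how small differences between matched networks become visible.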
14
Zhou H, Liu J, Laiginhas R, Zhang Q, Cheng Y, Zhang Y, Shi Y, Shen M, Gregori G, Rosenfeld PJ, Wang RK. Depth-resolved visualization and automated quantification of hyperreflective foci on OCT scans using optical attenuation coefficients. BIOMEDICAL OPTICS EXPRESS 2022; 13:4175-4189. [PMID: 36032584 PMCID: PMC9408241 DOI: 10.1364/boe.467623] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 06/25/2022] [Accepted: 06/25/2022] [Indexed: 05/11/2023]
Abstract
An automated depth-resolved algorithm using optical attenuation coefficients (OACs) was developed to visualize, localize, and quantify hyperreflective foci (HRF) seen on OCT imaging that are associated with macular hyperpigmentation and represent an increased risk of disease progression in age-related macular degeneration. To achieve this, we first transformed the OCT scans to linear representation, which were then contrasted by OACs. HRF were visualized and localized within the entire scan by differentiating HRF within the retina from HRF along the retinal pigment epithelium (RPE). The total pigment burden was quantified using the en face sum projection of an OAC slab from the inner limiting membrane (ILM) to Bruch's membrane (BM). The manual total pigment burden measurements were also obtained by combining manual outlines of HRF in the B-scans with the total area of hypotransmission defects outlined on sub-RPE slabs, which was used as the reference to compare with those obtained from the automated algorithm. Swept-source OCT scans (6 × 6 mm) were collected from a total of 49 eyes from 42 patients with macular HRF. We demonstrate that the algorithm was able to automatically distinguish between HRF within the retina and HRF along the RPE. In 24 test eyes, the total pigment burden measurements by the automated algorithm were compared with measurements obtained from manual segmentations. A significant correlation was found between the total pigment area measurements from the automated and manual segmentations (P < 0.001). The proposed automated algorithm based on OACs should be useful in studying eye diseases involving HRF.
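The pipeline described above converts linear-scale OCT intensity to optical attenuation coefficients and then sum-projects an OAC slab between the ILM and BM. A toy sketch under the common single-scattering OAC estimate is given below; the formula is the standard depth-resolved estimate, while the function names, surface inputs, and the simple per-A-scan loop are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def attenuation_coefficients(linear_intensity, dz):
    """Depth-resolved OAC estimate for one A-scan under single scattering:
    mu[i] ≈ I[i] / (2 * dz * sum of I below pixel i).

    linear_intensity: 1D array of linear-scale OCT intensity.
    dz: axial pixel size (e.g. in mm).
    """
    I = np.asarray(linear_intensity, dtype=float)
    # Suffix sums excluding the current pixel: sum of intensities below it
    tail = np.cumsum(I[::-1])[::-1] - I
    return I / (2.0 * dz * np.maximum(tail, 1e-12))

def enface_oac_projection(volume, ilm, bm, dz):
    """En face sum projection of the OAC slab between two surfaces.

    volume: (n_bscans, n_ascans, n_depth) linear intensity.
    ilm, bm: (n_bscans, n_ascans) integer surface indices with ilm <= bm.
    """
    n_b, n_a, _ = volume.shape
    proj = np.zeros((n_b, n_a))
    for b in range(n_b):
        for a in range(n_a):
            oac = attenuation_coefficients(volume[b, a], dz)
            proj[b, a] = oac[ilm[b, a]:bm[b, a]].sum()
    return proj
```

Thresholding such a projection (and splitting the slab at the RPE) is one way to separate intraretinal HRF from HRF along the RPE, in the spirit of the algorithm described.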
Affiliation(s)
- Hao Zhou
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Jeremy Liu
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Rita Laiginhas
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Qinqin Zhang
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Yuxuan Cheng
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Yi Zhang
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Yingying Shi
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Mengxi Shen
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Giovanni Gregori
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Philip J. Rosenfeld
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Ruikang K. Wang
- Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Karalis Johnson Retina Center, Department of Ophthalmology, University of Washington, Seattle, WA 98105, USA

15
Nguyen TTP, Ni S, Liang G, Khan S, Wei X, Skalet A, Ostmo S, Chiang MF, Jia Y, Huang D, Jian Y, Campbell JP. Widefield Optical Coherence Tomography in Pediatric Retina: A Case Series of Intraoperative Applications Using a Prototype Handheld Device. Front Med (Lausanne) 2022; 9:860371. [PMID: 35860728 PMCID: PMC9289179 DOI: 10.3389/fmed.2022.860371] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 06/15/2022] [Indexed: 11/18/2022] Open
Abstract
Optical coherence tomography (OCT) has changed the standard of care for diagnosis and management of macular diseases in adults. Current commercially available OCT systems, including handheld OCT for pediatric use, have a relatively narrow field of view (FOV), which has limited the potential application of OCT to retinal diseases with primarily peripheral pathology, including many of the most common pediatric retinal conditions. More broadly, diagnosis of all types of retinal detachment (exudative, tractional, and rhegmatogenous) may be improved with OCT-based assessment of retinal breaks, identification of proliferative vitreoretinopathy (PVR) membranes, and the pattern of subretinal fluid. Intraocular tumors both benign and malignant often occur outside of the central macula and may be associated with exudation, subretinal and intraretinal fluid, and vitreoretinal traction. The development of wider field OCT systems thus has the potential to improve the diagnosis and management of myriad diseases in both adult and pediatric retina. In this paper, we present a case series of pediatric patients with complex vitreoretinal pathology undergoing examinations under anesthesia (EUA) using a portable widefield (WF) swept-source (SS)-OCT device.
Affiliation(s)
- Thanh-Tin P. Nguyen
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Shuibin Ni
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- Guangru Liang
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- Shanjida Khan
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- Xiang Wei
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- Alison Skalet
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Radiation Medicine, Oregon Health and Science University, Portland, OR, United States
- Department of Dermatology, Oregon Health and Science University, Portland, OR, United States
- Susan Ostmo
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD, United States
- Yali Jia
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- David Huang
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- Yifan Jian
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, United States
- J. Peter Campbell
- Casey Eye Institute, Oregon Health and Science University, Portland, OR, United States
- Correspondence: J. Peter Campbell

16
Deng X, Liu K, Zhu T, Guo D, Yin X, Yao L, Ding Z, Ye J, Li P. Dynamic inverse SNR-decorrelation OCT angiography with GPU acceleration. BIOMEDICAL OPTICS EXPRESS 2022; 13:3615-3628. [PMID: 35781971 PMCID: PMC9208597 DOI: 10.1364/boe.459632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 05/12/2022] [Accepted: 05/22/2022] [Indexed: 05/02/2023]
Abstract
Dynamic OCT angiography (OCTA) is an attractive approach for monitoring stimulus-evoked hemodynamics; however, a 4D (3D space and time) dataset requires a long acquisition time and has a large data size, thereby posing a great challenge to data processing. This study proposed a GPU-based real-time data processing pipeline for dynamic inverse SNR-decorrelation OCTA (ID-OCTA), offering a measured line-process rate of 133 kHz for displaying OCT and OCTA cross-sections in real time. Real-time processing enabled automatic optimization of angiogram quality, which improved the vessel SNR, contrast-to-noise ratio, and connectivity by 14.37, 14.08, and 9.76%, respectively. Furthermore, motion-contrast 4D angiographic imaging of stimulus-evoked hemodynamics was achieved within a single trial in the mouse retina. A flicker light stimulus evoked an apparent dilation of the retinal arterioles and venules and an elevation of the decorrelation value in the retinal plexuses. Therefore, GPU ID-OCTA enables real-time and high-quality angiographic imaging and is particularly suitable for hemodynamic studies.
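ID-OCTA combines a decorrelation contrast with inverse SNR (iSNR) to separate flowing blood from static tissue. The sketch below is a heavily simplified, illustrative rendition of that idea: static voxels are modeled as lying near a line in (iSNR, decorrelation) space, and voxels well above it are flagged as flow. The linear static model, the margin, and the noise handling are assumptions for illustration; the published classifier and its GPU pipeline are considerably more elaborate.

```python
import numpy as np

def decorrelation(a, b):
    """First-order amplitude decorrelation between two repeated B-scans."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - 2.0 * a * b / np.maximum(a**2 + b**2, 1e-12)

def id_octa_mask(a, b, noise_floor, margin=0.06):
    """Toy inverse-SNR/decorrelation (ID) flow classification.

    Static voxels cluster near a line D ~ iSNR; voxels whose decorrelation
    exceeds that line by `margin` are treated as flow. Both the linear
    static model and the margin value are illustrative assumptions.
    """
    mean_power = (a.astype(float) ** 2 + b.astype(float) ** 2) / 2.0
    isnr = noise_floor / np.maximum(mean_power, 1e-12)
    d = decorrelation(a, b)
    return d > np.clip(isnr, 0.0, 1.0) + margin
```

The appeal of the iSNR axis is that low-signal voxels (high iSNR) are allowed a larger decorrelation before being called flow, which suppresses noise-driven false positives.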
Affiliation(s)
- Xiaofeng Deng
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- These authors contributed equally to this work
- Kaiyuan Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- These authors contributed equally to this work
- Tiepei Zhu
- Eye Center of the Second Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, Zhejiang 310003, China
- Dayou Guo
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Xiaoting Yin
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Lin Yao
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Zhihua Ding
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Juan Ye
- Eye Center of the Second Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, Zhejiang 310003, China
- Peng Li
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing 314000, China
- Intelligent Optics & Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Jiaxing 314000, China

17
Yadav SK, Kafieh R, Zimmermann HG, Kauer-Bonin J, Nouri-Mahdavi K, Mohammadzadeh V, Shi L, Kadas EM, Paul F, Motamedi S, Brandt AU. Intraretinal Layer Segmentation Using Cascaded Compressed U-Nets. J Imaging 2022; 8:139. [PMID: 35621903 PMCID: PMC9146486 DOI: 10.3390/jimaging8050139] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 04/23/2022] [Accepted: 05/03/2022] [Indexed: 12/24/2022] Open
Abstract
Reliable biomarkers quantifying neurodegeneration and neuroinflammation in central nervous system disorders such as Multiple Sclerosis, Alzheimer's dementia or Parkinson's disease are an unmet clinical need. Intraretinal layer thicknesses on macular optical coherence tomography (OCT) images are promising noninvasive biomarkers querying neuroretinal structures with near cellular resolution. However, changes are typically subtle, while tissue gradients can be weak, making intraretinal segmentation a challenging task. A robust and efficient method that requires no or minimal manual correction is an unmet need to foster reliable and reproducible research as well as clinical application. Here, we propose and validate a cascaded two-stage network for intraretinal layer segmentation, with both networks being compressed versions of U-Net (CCU-INSEG). The first network is responsible for retinal tissue segmentation from OCT B-scans. The second network segments eight intraretinal layers with high fidelity. At the post-processing stage, we introduce Laplacian-based outlier detection with layer surface hole filling by adaptive non-linear interpolation. Additionally, we propose a weighted version of focal loss to minimize the foreground-background pixel imbalance in the training data. We train our method using 17,458 B-scans from patients with autoimmune optic neuropathies, i.e., multiple sclerosis, and healthy controls. Voxel-wise comparison against manual segmentation produces a mean absolute error of 2.3 μm, outperforming current state-of-the-art methods on the same data set. Voxel-wise comparison against external glaucoma data leads to a mean absolute error of 2.6 μm when using the same gold standard segmentation approach, and 3.7 μm mean absolute error in an externally segmented data set. 
In scans from patients with severe optic atrophy, 3.5% of B-scan segmentation results were rejected by an experienced grader, whereas this was the case in 41.4% of B-scans segmented with a graph-based reference method. The validation results suggest that the proposed method can robustly segment macular scans from eyes with even severe neuroretinal changes.
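The training objective described above is a weighted focal loss designed to counter the foreground-background pixel imbalance. As an illustrative stand-in for the paper's variant, a generic class-weighted focal loss over softmax outputs can be written as:

```python
import numpy as np

def weighted_focal_loss(probs, targets, class_weights, gamma=2.0):
    """Pixel-wise class-weighted focal loss for multi-class segmentation.

    probs: (n_pixels, n_classes) softmax outputs.
    targets: (n_pixels,) integer ground-truth labels.
    class_weights: per-class weights countering pixel imbalance.
    gamma: focusing parameter; gamma=0 reduces to weighted cross-entropy.
    This is the generic alpha-weighted focal loss, used here as an
    illustrative stand-in for the paper's specific weighting scheme.
    """
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets)
    p_true = np.clip(probs[np.arange(len(targets)), targets], 1e-12, 1.0)
    w = np.asarray(class_weights, dtype=float)[targets]
    # (1 - p)^gamma down-weights easy, well-classified pixels
    return float(np.mean(-w * (1.0 - p_true) ** gamma * np.log(p_true)))
```

Because the focal factor shrinks the contribution of abundant, easily classified background pixels, thin layers with few pixels retain influence on the gradient.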
Affiliation(s)
- Sunil Kumar Yadav
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Nocturne GmbH, 10119 Berlin, Germany
- Rahele Kafieh
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Hanna Gwendolyn Zimmermann
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Josef Kauer-Bonin
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Nocturne GmbH, 10119 Berlin, Germany
- Kouros Nouri-Mahdavi
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Vahid Mohammadzadeh
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Lynn Shi
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Friedemann Paul
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Department of Neurology, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 10098 Berlin, Germany
- Seyedamirhosein Motamedi
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Alexander Ulrich Brandt
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 13125 Berlin, Germany
- Department of Neurology, University of California Irvine, Irvine, CA 92697, USA

18
Sampson DM, Dubis AM, Chen FK, Zawadzki RJ, Sampson DD. Towards standardizing retinal optical coherence tomography angiography: a review. LIGHT, SCIENCE & APPLICATIONS 2022; 11:63. [PMID: 35304441 PMCID: PMC8933532 DOI: 10.1038/s41377-022-00740-9] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 02/01/2022] [Accepted: 02/14/2022] [Indexed: 05/11/2023]
Abstract
The visualization and assessment of retinal microvasculature are important in the study, diagnosis, monitoring, and guidance of treatment of ocular and systemic diseases. With the introduction of optical coherence tomography angiography (OCTA), it has become possible to visualize the retinal microvasculature volumetrically and without a contrast agent. Many lab-based and commercial clinical instruments, imaging protocols, and data analysis methods and metrics have been applied, often inconsistently, resulting in a confusing picture that represents a major barrier to progress in applying OCTA to reduce the burden of disease. Open data and software sharing, and cross-comparison and pooling of data from different studies, are rare. These limitations have impeded the building of the large databases of annotated OCTA images of healthy and diseased retinas that are necessary to study and define the characteristics of specific conditions. This paper addresses the steps needed to standardize OCTA imaging of the human retina to address these limitations. Through review of the OCTA literature, we identify issues and inconsistencies and propose minimum standards for imaging protocols, data analysis methods, metrics, reporting of findings, and clinical practice and, where this is not possible, we identify areas that require further investigation. We hope that this paper will encourage the unification of imaging protocols in OCTA, promote transparency in the process of data collection, analysis, and reporting, and facilitate increasing the impact of OCTA on retinal healthcare delivery and life science investigations.
Affiliation(s)
- Danuta M Sampson
- Surrey Biophotonics, Centre for Vision, Speech and Signal Processing and School of Biosciences and Medicine, The University of Surrey, Guildford, GU2 7XH, UK
- Adam M Dubis
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Trust and UCL Institute of Ophthalmology, London, EC1V 2PD, UK
- Fred K Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Nedlands, Western Australia, 6009, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, 6000, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, 3002, Australia
- Robert J Zawadzki
- Department of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, 95817, USA
- David D Sampson
- Surrey Biophotonics, Advanced Technology Institute, School of Physics and School of Biosciences and Medicine, University of Surrey, Guildford, Surrey, GU2 7XH, UK

19
Miao Y, Song J, Hsu D, Ng R, Jian Y, Sarunic MV, Ju MJ. Numerical calibration method for a multiple spectrometer-based OCT system. BIOMEDICAL OPTICS EXPRESS 2022; 13:1685-1701. [PMID: 35414988 PMCID: PMC8973183 DOI: 10.1364/boe.450942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Revised: 02/14/2022] [Accepted: 02/17/2022] [Indexed: 06/14/2023]
Abstract
The present paper introduces a numerical calibration method for the easy and practical implementation of multiple spectrometer-based spectral-domain optical coherence tomography (SD-OCT) systems. To address the limitations of the traditional hardware-based spectrometer alignment across more than one spectrometer, we applied a numerical spectral calibration algorithm where the pixels corresponding to the same wavelength in each unit are identified through spatial- and frequency-domain interferometric signatures of a mirror sample. The utility of dual spectrometer-based SD-OCT imaging is demonstrated through in vivo retinal imaging at two different operation modes with high-speed and dual balanced acquisitions, respectively, in which the spectral alignment is critical to achieve improved retinal image data without any artifacts caused by misalignment of the spectrometers.
Affiliation(s)
- Yusi Miao
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Jun Song
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Destiny Hsu
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Ringo Ng
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Yifan Jian
- Casey Eye Institute, Oregon Health and Science University, Portland, Oregon 97239, USA
- Marinko V. Sarunic
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Institute of Ophthalmology, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Myeong Jin Ju
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada

20
Ni S, Khan S, Nguyen TTP, Ng R, Lujan BJ, Tan O, Huang D, Jian Y. Volumetric directional optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2022; 13:950-961. [PMID: 35284155 PMCID: PMC8884206 DOI: 10.1364/boe.447882] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 01/14/2022] [Accepted: 01/14/2022] [Indexed: 06/14/2023]
Abstract
Photoreceptor loss and resultant thinning of the outer nuclear layer (ONL) is an important pathological feature of retinal degenerations and may serve as a useful imaging biomarker for age-related macular degeneration. However, the demarcation between the ONL and the adjacent Henle's fiber layer (HFL) is difficult to visualize with standard optical coherence tomography (OCT). A dedicated OCT system that can precisely control and continuously and synchronously update the imaging beam entry points during scanning has not been realized yet. In this paper, we introduce a novel imaging technology, Volumetric Directional OCT (VD-OCT), which can dynamically adjust the incident beam on the pupil without manual adjustment during a volumetric OCT scan. We also implement a customized spoke-circular scanning pattern to observe the appearance of HFL with sufficient optical contrast in continuous cross-sectional scans through the entire volume. The application of VD-OCT for retinal imaging to exploit directional reflectivity properties of tissue layers has the potential to allow for early identification of retinal diseases.
Affiliation(s)
- Shuibin Ni
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
- Shanjida Khan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
- Thanh-Tin P. Nguyen
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Ringo Ng
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
- Brandon J. Lujan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Ou Tan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
- Yifan Jian
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA

21
Camacho P, Dutra-Medeiros M, Salgueiro L, Sadio S, Rosa PC. Manual Segmentation of 12 Layers of the Retina and Choroid through SD-OCT in Intermediate AMD: Repeatability and Reproducibility. J Ophthalmic Vis Res 2021; 16:384-392. [PMID: 34394867 PMCID: PMC8358755 DOI: 10.18502/jovr.v16i3.9435] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2019] [Accepted: 12/11/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the repeatability and reproducibility of the segmentation of 12 layers of the retina and the choroid, performed manually by SD-OCT, along the horizontal meridian at three different temporal moments, and to evaluate its concordance with the same measurements performed by two other operators in intermediate AMD. Methods A cross-sectional study of 40 eyes from 40 subjects with intermediate AMD was conducted. The segmentation was performed manually, using SD-OCT. The 169 measurements per eye were repeated at three time points to study the intra-operator variability. The same process was repeated a single time by two different trained operators for the inter-operator variability. Results Forty participants (28 women and 12 men) were enrolled in this study, with an average age of 76.4 ± 8.2 years (range, 55–92 years). Overall, the maximum values of the various structures were found in the central 3 mm of the macula. Intra-operator variability: the highest ICC values were found at thicker locations. Inter-operator variability: except for correlation values of 0.826 (0.727; 0.898) obtained in the OPL (T2.5) and 0.634 (0.469; 0.771) obtained in the IPL (N2), all other correlation values were >0.92, in most cases approaching higher values such as 0.98. Conclusion The measurements of several layers of the retina and the choroid obtained at 13 locations showed good repeatability and reproducibility. Manual quantification remains an alternative that addresses the weaknesses of automatic segmentation. Locations of greatest concordance should be preferred for clinical control and monitoring.
Affiliation(s)
- Pedro Camacho
- H&TRC - Health & Technology Research Center, ESTeSL - Escola Superior de Tecnologia da Saúde, Instituto Politécnico de Lisboa, Lisbon, Portugal; Ophtalmology Institute Dr. Gama Pinto, Lisbon, Portugal; NOVA Medical School, Lisbon, Portugal
- Marco Dutra-Medeiros
- Retina Institute of Lisbon, Lisbon, Portugal; NOVA Medical School, Lisbon, Portugal
- Sílvia Sadio
- Ophtalmology Institute Dr. Gama Pinto, Lisbon, Portugal; Retina Institute of Lisbon, Lisbon, Portugal
- Paulo C Rosa
- Ophtalmology Institute Dr. Gama Pinto, Lisbon, Portugal; Retina Institute of Lisbon, Lisbon, Portugal
22
Lee S, Kang JU. CNN-based CP-OCT sensor integrated with a subretinal injector for retinal boundary tracking and injection guidance. J Biomed Opt 2021; 26:068001. [PMID: 34196137 PMCID: PMC8242537 DOI: 10.1117/1.jbo.26.6.068001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
SIGNIFICANCE Subretinal injection is an effective way of delivering gene and cell transplants to treat many degenerative retinal diseases. However, the technique demands high dexterity and microscale precision from experienced surgeons, who must overcome physiological hand tremor and limited visualization of the subretinal space. AIM To automatically guide the axial motion of microsurgical tools (i.e., a subretinal injector) with microscale precision in real time using a fiber-optic common-path swept-source optical coherence tomography distal sensor. APPROACH We propose, implement, and study real-time retinal boundary tracking of A-scan optical coherence tomography (OCT) images using a convolutional neural network (CNN) for automatic depth targeting of a selected retinal boundary for accurate subretinal injection guidance. A simplified 1D U-Net performs retinal layer segmentation on A-scan OCT images. A Kalman filter, which combines the retinal boundary position measured by the CNN with the velocity measured by cross-correlation between consecutive A-scans, optimally estimates the retinal boundary position. Unwanted axial motions of the surgical tools are compensated by a piezoelectric linear motor based on the retinal boundary tracking. RESULTS CNN-based segmentation of A-scan OCT images achieves a mean unsigned error (MUE) of ∼3 pixels (8.1 μm) on an ex vivo bovine retina model. GPU parallel computing allows real-time inference (∼2 ms) and thus real-time retinal boundary tracking. Involuntary motions, including low-frequency drift of hundreds of micrometers and physiological tremor of tens of micrometers, are compensated effectively. The standard deviations of the photoreceptor (PR) and choroid (CH) boundary positions drop to as low as 10.8 μm when depth targeting is activated.
CONCLUSIONS A CNN-based common-path OCT distal sensor successfully tracks retinal boundaries, especially the PR/CH boundary for subretinal injection, and automatically guides the tooltip's axial position in real time. The microscale depth-targeting accuracy of our system shows its promise for clinical application.
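The fusion step described in this abstract, a Kalman filter combining a position measurement (CNN segmentation) with a velocity measurement (cross-correlation of consecutive A-scans), can be sketched as a 1D constant-velocity filter. This is a minimal sketch, not the authors' implementation; the noise parameters, time step, and function name are illustrative assumptions:

```python
import numpy as np

def kalman_track(pos_meas, vel_meas, dt, sigma_pos=3.0, sigma_vel=5.0, q=1.0):
    """Fuse noisy boundary positions (pixels) and velocities (pixels/s)
    into a smoothed boundary-position track."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.eye(2)                           # both position and velocity are measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    R = np.diag([sigma_pos**2, sigma_vel**2])                    # measurement noise
    x = np.array([pos_meas[0], vel_meas[0]])
    P = np.eye(2) * 100.0
    track = []
    for zp, zv in zip(pos_meas, vel_meas):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the fused measurement vector
        z = np.array([zp, zv])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```

In a tracking loop like the one described, the smoothed position (rather than the raw CNN output) would drive the motor compensation, suppressing frame-to-frame segmentation jitter.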
Affiliation(s)
- Soohyun Lee
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Address all correspondence to Soohyun Lee
- Jin U. Kang
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
23
Ni S, Wei X, Ng R, Ostmo S, Chiang MF, Huang D, Jia Y, Campbell JP, Jian Y. High-speed and widefield handheld swept-source OCT angiography with a VCSEL light source. Biomed Opt Express 2021; 12:3553-3570. [PMID: 34221678 PMCID: PMC8221946 DOI: 10.1364/boe.425411] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0]
Abstract
Optical coherence tomography (OCT) and OCT angiography (OCTA) enable noninvasive structural and angiographic imaging of the eye. Portable handheld OCT/OCTA systems are required for imaging patients in the supine position, such as infants in the neonatal intensive care unit (NICU) and the operating room (OR). The speed of image acquisition plays a pivotal role in acquiring high-quality OCT/OCTA images, particularly with a handheld system, since both operator hand tremor and subject motion can cause significant motion artifacts. In addition, a large field of view and real-time data visualization are critical for rapid disease screening, reducing imaging time, and detecting peripheral retinal pathologies. The arrangement of optical components is less flexible in a handheld system because of size and weight constraints. In this paper, we introduce a 400-kHz, 55-degree field of view handheld OCT/OCTA system that overcomes many of the technical challenges of building a system that is both portable and high speed. We demonstrate imaging of premature infants with retinopathy of prematurity (ROP) in the NICU, a patient with incontinentia pigmenti (IP), and a patient with X-linked retinoschisis (XLRS) in the OR using our handheld OCT system. Our design may improve the diagnosis of retinal diseases and provide a practical guideline for designing flexible and portable OCT systems.
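Why the 400-kHz A-scan rate matters for handheld OCTA follows from simple arithmetic: acquisition time scales with A-scans per B-scan, B-scan count, and the number of repeated B-scans needed for angiography. The scan dimensions and function name below are illustrative assumptions, not this paper's protocol:

```python
def octa_volume_time(ascan_rate_hz, ascans_per_bscan, bscans, repeats, duty_cycle=1.0):
    """Lower-bound OCTA volume acquisition time in seconds.

    duty_cycle < 1 accounts for galvo flyback and other dead time.
    """
    total_ascans = ascans_per_bscan * bscans * repeats
    return total_ascans / (ascan_rate_hz * duty_cycle)

# At 400 kHz, an assumed 400 x 400 volume with 2 repeated B-scans
# needs at least 400 * 400 * 2 / 400_000 = 0.8 s of pure acquisition;
# at a 100 kHz rate the same protocol would take 4x longer,
# proportionally increasing exposure to hand tremor and subject motion.
t_fast = octa_volume_time(400_000, 400, 400, 2)
t_slow = octa_volume_time(100_000, 400, 400, 2)
```

Shorter acquisition windows are what make handheld (tripod-free) OCTA of non-sedated infants practical.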
Affiliation(s)
- Shuibin Ni
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Xiang Wei
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Ringo Ng
- Department of Engineering Science, Simon Fraser University, Burnaby, Canada
- Susan Ostmo
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- David Huang
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- J. Peter Campbell
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Yifan Jian
- Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
24
Danilov VV, Klyshnikov KY, Gerget OM, Kutikhin AG, Ganyukov VI, Frangi AF, Ovcharenko EA. Real-time coronary artery stenosis detection based on modern neural networks. Sci Rep 2021; 11:7582. [PMID: 33828165 PMCID: PMC8027436 DOI: 10.1038/s41598-021-87174-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
Invasive coronary angiography remains the gold standard for diagnosing coronary artery disease, but its interpretation may be complicated by both patient-specific anatomy and image quality. Deep learning techniques aimed at detecting coronary artery stenoses may facilitate the diagnosis. However, previous studies have not achieved accuracy and speed sufficient for real-time labeling. Our study aims to confirm the feasibility of real-time coronary artery stenosis detection using deep learning methods. To reach this goal, we trained and tested eight promising detectors based on different neural network architectures (MobileNet, ResNet-50, ResNet-101, Inception ResNet, NASNet) using clinical angiography data from 100 patients. Three networks demonstrated superior results. The network based on Faster-RCNN Inception ResNet V2 was the most accurate, achieving a mean average precision (mAP) of 0.95 and an F1-score of 0.96, but also the slowest prediction rate, 3 fps, on the validation subset. The relatively lightweight SSD MobileNet V2 network proved the fastest, at a mean prediction rate of 38 fps, but with a lower mAP of 0.83 and an F1-score of 0.80. The model based on RFCN ResNet-101 V2 demonstrated an optimal accuracy-to-speed ratio: an mAP of 0.94 and an F1-score of 0.96 at a prediction speed of 10 fps. The resulting performance-accuracy balance of these modern neural networks confirms the feasibility of real-time coronary artery stenosis detection to support the decision-making of the Heart Team interpreting coronary angiography findings.
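The accuracy-to-speed trade-off described above can be expressed as a simple selection rule over the reported (mAP, fps) figures. The thresholds and function name below are illustrative assumptions, not criteria from the study:

```python
# (model, mAP, fps) as reported in the abstract's validation results
models = [
    ("Faster-RCNN Inception ResNet V2", 0.95, 3),
    ("SSD MobileNet V2", 0.83, 38),
    ("RFCN ResNet-101 V2", 0.94, 10),
]

def best_tradeoff(candidates, min_map=0.90, min_fps=5):
    """Pick the fastest detector that still clears an accuracy floor.

    Returns None when no candidate satisfies both thresholds.
    """
    ok = [m for m in candidates if m[1] >= min_map and m[2] >= min_fps]
    return max(ok, key=lambda m: m[2]) if ok else None
```

With an assumed floor of mAP >= 0.90 and >= 5 fps, this rule singles out RFCN ResNet-101 V2, matching the abstract's characterization of it as the optimal accuracy-to-speed compromise.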
Affiliation(s)
- Kirill Yu Klyshnikov
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Anton G Kutikhin
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Vladimir I Ganyukov
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Evgeny A Ovcharenko
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
25
Li J, Jin P, Zhu J, Zou H, Xu X, Tang M, Zhou M, Gan Y, He J, Ling Y, Su Y. Multi-scale GCN-assisted two-stage network for joint segmentation of retinal layers and discs in peripapillary OCT images. Biomed Opt Express 2021; 12:2204-2220. [PMID: 33996224 PMCID: PMC8086482 DOI: 10.1364/boe.417212] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3]
Abstract
An accurate and automated tissue segmentation algorithm for retinal optical coherence tomography (OCT) images is crucial for the diagnosis of glaucoma. However, because of the optic disc, the anatomical structure of the peripapillary region of the retina is complicated and challenging to segment. To address this issue, we develop a novel graph convolutional network (GCN)-assisted two-stage framework to simultaneously label the nine retinal layers and the optic disc. Specifically, a multi-scale global reasoning module is inserted between the encoder and decoder of a U-shaped neural network to exploit anatomical prior knowledge and perform spatial reasoning. We conduct experiments on human peripapillary retinal OCT images and provide public access to the collected dataset, which may benefit research in biomedical image processing. The Dice score of the proposed segmentation network is 0.820 ± 0.001 and the pixel accuracy is 0.830 ± 0.002, both of which outperform other state-of-the-art techniques.
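The Dice score and pixel accuracy reported here are standard segmentation metrics. A minimal sketch of how they are computed for integer label maps; the function names and the choice to average Dice over classes present in either map are assumptions, not this paper's evaluation code:

```python
import numpy as np

def dice_score(pred, target, num_classes):
    """Mean Dice over classes for integer label maps of equal shape."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent from both maps: skip rather than divide by zero
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the target."""
    return float((pred == target).mean())
```

Dice penalizes per-class overlap errors that pixel accuracy can hide when thin retinal layers occupy few pixels, which is why both numbers are usually reported together.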
Affiliation(s)
- Jiaxuan Li
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Peiyao Jin
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Jianfeng Zhu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Haidong Zou
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Xun Xu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Min Tang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Minwen Zhou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai 200080, China
- Yu Gan
- Department of Electrical and Computer Engineering, The University of Alabama, AL 35487, USA
- Jiangnan He
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai 200040, China
- Yuye Ling
- John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai 200240, China
- Yikai Su
- State Key Lab of Advanced Optical Communication Systems and Networks, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
26
Sommersperger M, Weiss J, Ali Nasseri M, Gehlbach P, Iordachita I, Navab N. Real-time tool to layer distance estimation for robotic subretinal injection using intraoperative 4D OCT. Biomed Opt Express 2021; 12:1085-1104. [PMID: 33680560 PMCID: PMC7901333 DOI: 10.1364/boe.415477] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
The emergence of robotics could enable ophthalmic microsurgical procedures that were previously not feasible due to the precision limits of manual delivery, for example, targeted subretinal injection. Determining the distance between the needle tip, the internal limiting membrane (ILM), and the retinal pigment epithelium (RPE) both precisely and reproducibly is required for safe and successful robotic retinal interventions. Recent advances in intraoperative optical coherence tomography (iOCT) have opened the path for 4D image-guided surgery by providing near video-rate imaging with micron-level resolution to visualize retinal structures, surgical instruments, and tool-tissue interactions. In this work, we present a novel pipeline to precisely estimate the distance between the injection needle and the surface boundaries of two retinal layers, the ILM and the RPE, from iOCT volumes. To achieve high computational efficiency, we restrict the analysis to the relevant area around the needle tip. We employ a convolutional neural network (CNN) to segment the tool surface, as well as the retinal layer boundaries, from selected iOCT B-scans within this tip area. From the B-scan segmentation maps we generate and process 3D surface point clouds for the tool, the ILM, and the RPE, which in turn allows estimation of the minimum distance between the resulting tool and layer point clouds. The proposed method is evaluated on iOCT volumes from ex vivo porcine eyes and achieves average errors of 9.24 µm and 8.61 µm when measuring the distance from the needle tip to the ILM and the RPE, respectively. The results demonstrate that this approach is robust to the high levels of noise present in iOCT B-scans and is suitable for the interventional use case, providing distance feedback at an average update rate of 15.66 Hz.
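The final step of the pipeline described here, estimating the minimum distance between the tool point cloud and a layer point cloud, reduces to a nearest-pair search. A brute-force sketch over assumed (N, 3) NumPy arrays; the paper's implementation details (data layout, acceleration structure) are not specified by the abstract, and a KD-tree would typically be preferred for large iOCT clouds:

```python
import numpy as np

def min_cloud_distance(tool_pts, layer_pts):
    """Minimum Euclidean distance between two 3D point clouds.

    tool_pts: (N, 3) array, e.g. segmented needle-surface points.
    layer_pts: (M, 3) array, e.g. ILM or RPE boundary points.
    Returns (distance, closest tool point, closest layer point).
    """
    # Pairwise distances via broadcasting: (N, 1, 3) - (1, M, 3) -> (N, M)
    d = np.linalg.norm(tool_pts[:, None, :] - layer_pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return d[i, j], tool_pts[i], layer_pts[j]
```

Brute force is O(N·M) in memory and time; at interventional update rates, restricting both clouds to the region around the needle tip, as the abstract describes, is what keeps this computation tractable.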
Affiliation(s)
- Michael Sommersperger
- Johns Hopkins University, Baltimore, MD 21218, USA
- Technical University of Munich, Germany
- M. Ali Nasseri
- Technical University of Munich, Germany
- Klinikum Rechts der Isar, Augenklinik, Munich, Germany
- Nassir Navab
- Johns Hopkins University, Baltimore, MD 21218, USA
- Technical University of Munich, Germany