1.
Yang H, Rees JP, Sanchez FG, Gardiner SK, Mansberger SL. OCT Segmentation Errors with Bruch's Membrane Opening-Minimum Rim Width as Compared with Retinal Nerve Fiber Layer Thickness. Ophthalmol Glaucoma 2024; 7:308-315. [PMID: 38104770] [DOI: 10.1016/j.ogla.2023.12.002]
Abstract
OBJECTIVE To compare the magnitude and location of automated segmentation errors of the Bruch's membrane opening-minimum rim width (BMO-MRW) and retinal nerve fiber layer thickness (RNFLT). DESIGN Cross-sectional study. PARTICIPANTS We included 162 glaucoma suspect or open-angle glaucoma eyes from 162 participants. METHODS We used spectral-domain optical coherence tomography (Spectralis 870 nm, Heidelberg Engineering) to image the optic nerve with 24 radial optic nerve head B-scans and a 12-degree peripapillary circle scan, and exported the native "automated segmentation only" results for BMO-MRW and RNFLT. We also exported the results after "manual refinement" of the measurements. MAIN OUTCOME MEASURES We calculated the absolute and proportional error globally and within each of the twelve 30-degree sectors of the optic disc. We determined whether the glaucoma classifications were different between BMO-MRW and RNFLT as a result of manual and automatic segmentation. RESULTS The mean absolute error was larger for BMO-MRW than for RNFLT (10.8 μm vs. 3.58 μm, P < 0.001). However, the proportional errors were similar (4.3% vs. 4.4%, P = 0.47). In a multivariable regression model, errors in BMO-MRW were not significantly associated with age, location, magnitude, or severity of glaucoma loss (all P ≥ 0.05). However, larger RNFLT errors were associated with the superior and inferior sector location, thicker nerve fiber layer, and worse visual field (all P < 0.05). Errors in BMO-MRW and RNFLT were not likely to occur in the same sector location (R2 = 0.001; P = 0.15). With manual refinement, the glaucoma classification changed in 7.8% and 6.2% of eyes with BMO-MRW and RNFLT, respectively. CONCLUSIONS Both BMO-MRW and RNFLT measurements included segmentation errors, which did not seem to have a common location, and may result in differences in glaucoma classification.
Affiliation(s)
- Hongli Yang
- Devers Eye Institute Discoveries in Sight Research Laboratories, Legacy Research Institute, Portland, Oregon
- Jack P Rees
- Devers Eye Institute Discoveries in Sight Research Laboratories, Legacy Research Institute, Portland, Oregon
- Facundo G Sanchez
- Devers Eye Institute Discoveries in Sight Research Laboratories, Legacy Research Institute, Portland, Oregon
- Stuart K Gardiner
- Devers Eye Institute Discoveries in Sight Research Laboratories, Legacy Research Institute, Portland, Oregon
- Steven L Mansberger
- Devers Eye Institute Discoveries in Sight Research Laboratories, Legacy Research Institute, Portland, Oregon
2.
Wei F, Li CY, Hagan K, Stinnett SS, Kuo AN, Izatt JA, Dhalla AH. Spiral scanning improves subject fixation in widefield retinal imaging. Opt Lett 2024; 49:2489-2492. [PMID: 38691751] [PMCID: PMC11068122] [DOI: 10.1364/ol.517088]
Abstract
Point scanning retinal imaging modalities, including confocal scanning light ophthalmoscopy (cSLO) and optical coherence tomography, suffer from fixational motion artifacts. Fixation targets, though effective at reducing eye motion, are infeasible in some applications (e.g., handheld devices) due to their bulk and complexity. Here, we report on a cSLO device that scans the retina in a spiral pattern under pseudo-visible illumination, thus collecting image data while simultaneously projecting, into the subject's vision, the image of a bullseye, which acts as a virtual fixation target. An imaging study of 14 young adult volunteers was conducted to compare the fixational performance of this technique to that of raster scanning, with and without a discrete inline fixation target. Image registration was used to quantify subject eye motion; a strip-wise registration method was used for raster scans, and a novel, to the best of our knowledge, ring-based method was used for spiral scans. Results indicate a statistically significant reduction in eye motion by the use of spiral scanning as compared to raster scanning without a fixation target.
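The virtual-fixation idea above rests on driving the scanners along a spiral rather than a raster, so the subject sees a bullseye-like pattern while data are collected. A minimal sketch of generating such a trajectory (an Archimedean spiral; the sample count, turn count, and normalized amplitude are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def spiral_scan(n_samples=100_000, n_turns=100, r_max=1.0):
    """Archimedean spiral scan: the radius grows linearly with angle,
    so the beam sweeps the field from center to edge while projecting
    a bullseye-like pattern into the subject's vision."""
    theta = np.linspace(0.0, 2.0 * np.pi * n_turns, n_samples)
    r = r_max * theta / theta[-1]   # linear radial growth
    x = r * np.cos(theta)           # horizontal scanner drive
    y = r * np.sin(theta)           # vertical scanner drive
    return x, y

x, y = spiral_scan()
```

In practice the drive waveforms would be filtered to respect scanner bandwidth; this sketch only shows the geometry.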
Affiliation(s)
- Franklin Wei
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Claire Y. Li
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Kristen Hagan
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Sandra S. Stinnett
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27708, USA
- Anthony N. Kuo
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27708, USA
- Joseph A. Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27708, USA
- Al-Hafeez Dhalla
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27708, USA
3.
Liu H, Wei D, Lu D, Tang X, Wang L, Zheng Y. Simultaneous alignment and surface regression using hybrid 2D-3D networks for 3D coherent layer segmentation of retinal OCT images with full and sparse annotations. Med Image Anal 2024; 91:103019. [PMID: 37944431] [DOI: 10.1016/j.media.2023.103019]
Abstract
Layer segmentation is important to quantitative analysis of retinal optical coherence tomography (OCT). Recently, deep learning based methods have been developed to automate this task and yield remarkable performance. However, due to the large spatial gap and potential mismatch between the B-scans of an OCT volume, all of them were based on 2D segmentation of individual B-scans, which may lose the continuity and diagnostic information of the retinal layers in 3D space. Moreover, most of these methods require dense annotation of the OCT volumes, which is labor-intensive and expertise-demanding. This work presents a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) to obtain continuous 3D retinal layer surfaces from OCT volumes, which works well with both full and sparse annotations. The 2D features of individual B-scans are extracted by an encoder consisting of 2D convolutions. These 2D features are then used to produce the alignment displacement vectors and layer segmentation by two 3D decoders coupled via a spatial transformer module. Two losses are proposed to exploit the natural smoothness of the retinal layers for B-scan alignment and layer segmentation, respectively, and are the key to semi-supervised learning with sparse annotation. The entire framework is trained end-to-end. To the best of our knowledge, this is the first work that attempts 3D retinal layer segmentation in volumetric OCT images based on CNNs. Experiments on a synthetic dataset and three public clinical datasets show that the framework can effectively align the B-scans for potential motion correction, and achieves superior performance to state-of-the-art 2D deep learning methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity in both fully and semi-supervised settings, thus offering more clinical value than previous works.
Affiliation(s)
- Hong Liu
- School of Informatics, Xiamen University, Xiamen 361005, China
- National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen 361005, China
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Dong Wei
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Donghuan Lu
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
- Xiaoying Tang
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Liansheng Wang
- School of Informatics, Xiamen University, Xiamen 361005, China
- Yefeng Zheng
- Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518075, China
4.
Luo M, Xu Z, Ye Z, Liang Z, Xiao H, Li Y, Li Z, Zhu Y, He Y, Zhuo Y. Deep learning for anterior segment OCT angiography automated denoising and vascular quantitative measurement. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104660]
5.
Ferreño D, Revuelta JM, Sainz-Aja JA, Wert-Carvajal C, Casado JA, Diego S, Carrascal IA, Silva J, Gutiérrez-Solana F. Shannon entropy as a reliable score to diagnose human fibroelastic degenerative mitral chords: A micro-CT ex-vivo study. Med Eng Phys 2022; 110:103919. [PMID: 36564142] [DOI: 10.1016/j.medengphy.2022.103919]
Abstract
This paper aims to identify, by means of micro-CT, the microstructural differences between normal and degenerative mitral marginal chordae tendineae. The control group is composed of 21 normal chords excised from 14 normal mitral valves from heart transplant recipients. The experimental group comprises 22 degenerative fibroelastic chords obtained at surgery from 11 pathological valves after mitral repair or replacement. In the control group, the superficial endothelial cells and spongiosa layer remained intact, covering the wavy core collagen. In contrast, in the experimental group the collagen fibers were arranged as straightened thick bundles in a parallel configuration. From each chord, 100 cross-sections were examined by micro-CT. Each image was clustered with the K-means machine learning algorithm and then the global and local Shannon entropies were obtained. The optimum number of clusters, K, was chosen to maximize the differences between normal and degenerative chords in global and local Shannon entropy; the p-value after a nested ANOVA test was the parameter to be minimized. Optimum results were obtained with global Shannon entropy and 2 ≤ K ≤ 7, providing p < 0.01; for K = 3, p = 2.86·10^-3. These findings open the door to novel perioperative diagnostic methods to avoid or reduce recurrences of postoperative mitral valve regurgitation.
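The score described above can be illustrated with a small sketch: scalar intensities are grouped into K clusters and the global Shannon entropy of the cluster-label histogram is computed. This is a hedged stand-in, not the authors' pipeline; the one-dimensional Lloyd iteration, quantile initialization, and synthetic intensities are assumptions for illustration:

```python
import numpy as np

def kmeans_1d(values, k=3, n_iter=50):
    """Plain Lloyd iteration on scalar intensities (a stand-in for the
    K-means step in the abstract); quantile initialization keeps the
    result deterministic."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def shannon_entropy(labels, k):
    """Global Shannon entropy (in bits) of the cluster-label histogram."""
    p = np.bincount(labels, minlength=k) / labels.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic "cross-section": three equally sized intensity populations.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 0.05, 500) for m in (0.1, 0.5, 0.9)])
H = shannon_entropy(kmeans_1d(img, k=3), k=3)
```

For three well-separated, balanced populations the entropy approaches log2(3) ≈ 1.585 bits; degenerate structure that collapses into fewer effective clusters lowers it.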
Affiliation(s)
- Diego Ferreño
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- José M Revuelta
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- Cardiovascular Surgery, Hospital Universitario Marqués de Valdecilla, Av/Valdecilla, s/n, 39008 Santander, Spain
- José A Sainz-Aja
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- Carlos Wert-Carvajal
- Universidad Carlos III de Madrid, Avda. de la Universidad, 30, 28911 Madrid, Spain
- University of California, San Diego, 9500 Gilman Drive, MC 0412, La Jolla, California
- José A Casado
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- Soraya Diego
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- Isidro A Carrascal
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
- Jacobo Silva
- Hospital Universitario Central de Asturias, Av. Roma, s/n, 33011 Oviedo, Asturias, Spain
- Federico Gutiérrez-Solana
- LADICIM (Laboratory of Materials Science and Engineering), University of Cantabria, E.T.S. de Ingenieros de Caminos, Canales y Puertos, Av/Los Castros 44, 39005 Santander, Spain
6.
Makita S, Azuma S, Mino T, Yamaguchi T, Miura M, Yasuno Y. Extending field-of-view of retinal imaging by optical coherence tomography using convolutional Lissajous and slow scan patterns. Biomed Opt Express 2022; 13:5212-5230. [PMID: 36425618] [PMCID: PMC9664899] [DOI: 10.1364/boe.467563]
Abstract
Optical coherence tomography (OCT) is a high-speed non-invasive cross-sectional imaging technique. Although its imaging speed is high, three-dimensional high-spatial-sampling-density imaging of in vivo tissues with a wide field-of-view (FOV) is challenging. We employed convolved Lissajous and slow circular scanning patterns to extend the FOV of retinal OCT imaging with a 1-µm, 100-kHz-sweep-rate swept-source OCT prototype system. Displacements of sampling points due to eye movements are corrected by post-processing based on a Lissajous scan. Wide FOV three-dimensional retinal imaging with high sampling density and motion correction is achieved. Three-dimensional structures obtained using repeated imaging sessions of a healthy volunteer and two patients showed good agreement. The demonstrated technique will extend the FOV of simple point-scanning OCT, such as commercial ophthalmic OCT devices, without sacrificing sampling density.
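A Lissajous trajectory of the kind used above can be sketched in a few lines; the two scan frequencies, sampling rate, and amplitude below are illustrative assumptions, not the prototype's values:

```python
import numpy as np

def lissajous_scan(fx=283.0, fy=293.0, fs=100_000.0, duration=1.0, amp=1.0):
    """Lissajous trajectory x = A*sin(2*pi*fx*t), y = A*sin(2*pi*fy*t).
    Near-equal, incommensurate frequencies make the pattern densely
    cover the field and revisit regions over time, which is what
    permits cross-time registration for motion correction."""
    t = np.arange(0.0, duration, 1.0 / fs)
    x = amp * np.sin(2.0 * np.pi * fx * t)
    y = amp * np.sin(2.0 * np.pi * fy * t)
    return x, y

x, y = lissajous_scan()
```

Because the same retinal region is sampled at many different times, displacements between revisits can be estimated and undone in post-processing, as the paper describes.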
Affiliation(s)
- Shuichi Makita
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Shinnosuke Azuma
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Toshihiro Mino
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Tatsuo Yamaguchi
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Masahiro Miura
- Department of Ophthalmology, Tokyo Medical University Ibaraki Medical Center, 3-20-1 Chuo, Ami, Ibaraki 300-0395, Japan
- Yoshiaki Yasuno
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
7.
Zhang Y, Gao W, Xie C. Fourier spatial transform-based method of suppressing motion noises in OCTA. Opt Lett 2022; 47:4544-4547. [PMID: 36048700] [DOI: 10.1364/ol.464501]
Abstract
Blood flow imaging with optical coherence tomography angiography (OCTA) suffers from a large amount of lateral noise generated by muscle tremor, heartbeat, and respiration, which degrades the images. In this paper, for the first time to the best of our knowledge, the spatial-frequency content of motion noise in the blood flow signal region is used to remove that noise and the false vessel connections it creates. The effectiveness of the proposed adaptive denoising algorithm is verified on finger blood flow imaging. OCTA images formed with different projection methods show improved signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) after applying the algorithm. The original blood flow image based on standard-deviation projection has the better visual appearance, but mean projection benefits most from the algorithm, with average SNR and CNR improvements of 5.7 dB and 8.9 dB, respectively.
8.
Cai Y, Grieve K, Mecê P. Characterization and Analysis of Retinal Axial Motion at High Spatiotemporal Resolution and Its Implication for Real-Time Correction in Human Retinal Imaging. Front Med (Lausanne) 2022; 9:868217. [PMID: 35903318] [PMCID: PMC9320321] [DOI: 10.3389/fmed.2022.868217]
Abstract
High-resolution ophthalmic imaging devices including spectral-domain and full-field optical coherence tomography (SDOCT and FFOCT) are adversely affected by the presence of continuous involuntary retinal axial motion. Here, we thoroughly quantify and characterize retinal axial motion with both high temporal resolution (200,000 A-scans/s) and high axial resolution (4.5 μm), recorded over a typical data acquisition duration of 3 s with an SDOCT device over 14 subjects. We demonstrate that although breath-holding can help decrease large-and-slow drifts, it increases small-and-fast fluctuations, which is not ideal when motion compensation is desired. Finally, by simulating the action of an axial motion stabilization control loop, we show that a loop rate of 1.2 kHz is ideal to achieve 100% robust clinical in-vivo retinal imaging.
Affiliation(s)
- Yao Cai
- Institut Langevin, ESPCI Paris, CNRS, PSL University, Paris, France
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Kate Grieve
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France
- Pedro Mecê
- Institut Langevin, ESPCI Paris, CNRS, PSL University, Paris, France
- DOTA, ONERA, Université Paris Saclay, Palaiseau, France
- Correspondence: Pedro Mecê
9.
Wang Y, Warter A, Cavichini-Cordeiro M, Freeman WR, Bartsch DUG, Nguyen TQ, An C. Learning to correct axial motion in OCT for 3D retinal imaging. Proc Int Conf Image Process (ICIP) 2021; 2021:126-130. [PMID: 35950046] [PMCID: PMC9359411] [DOI: 10.1109/icip42928.2021.9506620]
Abstract
Optical Coherence Tomography (OCT) is a powerful technique for non-invasive 3D imaging of biological tissues at high resolution that has revolutionized retinal imaging. A major challenge in OCT imaging is the motion artifacts introduced by involuntary eye movements. In this paper, we propose a convolutional neural network that learns to correct axial motion in OCT based on a single volumetric scan. The proposed method is able to correct large motion, while preserving the overall curvature of the retina. The experimental results show significant improvements in visual quality as well as overall error compared to the conventional methods in both normal and disease cases.
Affiliation(s)
- Yiqian Wang
- Department of Electrical and Computer Engineering, University of California, San Diego
- Alexandra Warter
- Jacobs Retina Center, Shiley Eye Institute, La Jolla, California, USA
- William R Freeman
- Jacobs Retina Center, Shiley Eye Institute, La Jolla, California, USA
- Truong Q Nguyen
- Department of Electrical and Computer Engineering, University of California, San Diego
- Cheolhong An
- Department of Electrical and Computer Engineering, University of California, San Diego
10.
Wu N, Yi M, Guan C, Wang M, Zhang Z, Yang X, Li H, Han D, Zeng Y, Tang Z. Retinal cross-section motion correction in three-dimensional retinal optical coherence tomography. J Biophotonics 2021; 14:e202000443. [PMID: 33576160] [DOI: 10.1002/jbio.202000443]
Abstract
Motion correction is an important issue in ophthalmic optical coherence tomography (OCT), and can improve the ability of data sets to reflect the physiological structures of tissues and make visualization and subsequent analysis easier. In this study, we present a novel method to correct the cross-sectional motion artifacts in retinal OCT volumes. Motion along the x-direction (fast-scan direction) is corrected through the normalized cross-correlation algorithm, while axial motion compensation is performed using the polynomial fitting method on the inner segment/outer segment (IS/OS) layer segmented by the shortest path faster algorithm (SPFA). The results of volunteers with central serous chorioretinopathy demonstrate that the proposed method effectively corrects motion artifacts in OCT volumes and may have potential application value in the evaluation of ophthalmic diseases such as diabetic retinopathy, glaucoma and age-related macular degeneration.
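The two correction steps described above (normalized cross-correlation along the fast-scan direction, then polynomial fitting of the segmented IS/OS depth trace for axial compensation) can be sketched as follows. This is a simplified illustration on assumed synthetic data, not the authors' implementation, and it omits the SPFA layer segmentation entirely:

```python
import numpy as np

def ncc_shift(ref, mov):
    """Integer lateral shift that best aligns `mov` to `ref`, found at
    the peak of their circular normalized cross-correlation (via FFT)."""
    a = (ref - ref.mean()) / (ref.std() + 1e-12)
    b = (mov - mov.mean()) / (mov.std() + 1e-12)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    return shift - a.size if shift > a.size // 2 else shift

def axial_motion(isos_depth, order=2):
    """Per-A-scan axial motion estimate: the residual of the segmented
    IS/OS depth trace after removing a smooth low-order polynomial
    that models the true retinal curvature."""
    x = np.arange(isos_depth.size)
    curvature = np.polyval(np.polyfit(x, isos_depth, order), x)
    return isos_depth - curvature
```

Subtracting `axial_motion` from each A-scan column flattens the motion while preserving the fitted anatomical curvature, which is the stated goal of the polynomial step.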
Affiliation(s)
- Nanshou Wu
- School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, China
- Min Yi
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Caizhong Guan
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Mingyi Wang
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Zhang Zhang
- School of Mechanical and Electrical Engineering and Automation, Foshan University, Foshan, China
- Xulun Yang
- School of Mechanical and Electrical Engineering and Automation, Foshan University, Foshan, China
- Hongyi Li
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Dingan Han
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Yaguang Zeng
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Zhilie Tang
- School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, China
11.
Litts KM, Woertz EN, Georgiou M, Patterson EJ, Lam BL, Fishman GA, Pennesi ME, Kay CN, Hauswirth WW, Michaelides M, Carroll J. Optical Coherence Tomography Artifacts Are Associated With Adaptive Optics Scanning Light Ophthalmoscopy Success in Achromatopsia. Transl Vis Sci Technol 2021; 10:11. [PMID: 33510950] [PMCID: PMC7804582] [DOI: 10.1167/tvst.10.1.11]
Abstract
Purpose To determine whether artifacts in optical coherence tomography (OCT) images are associated with the success or failure of adaptive optics scanning light ophthalmoscopy (AOSLO) imaging in subjects with achromatopsia (ACHM). Methods Previously acquired OCT and non-confocal, split-detector AOSLO images from one eye of 66 subjects with genetically confirmed achromatopsia (15 CNGA3 and 51 CNGB3) were reviewed along with best-corrected visual acuity (BCVA) and axial length. OCT artifacts in interpolated vertical volumes from CIRRUS macular cubes were divided into four categories: (1) none or minimal, (2) clear and low frequency, (3) low amplitude and high frequency, and (4) high amplitude and high frequency. Each vertical volume was assessed once by two observers. AOSLO success was defined as sufficient image quality in split-detector images at the fovea to assess cone quantity. Results There was excellent agreement between the two observers for assessing OCT artifact severity category (weighted kappa = 0.88). Overall, AOSLO success was 47%. For subjects with OCT artifact severity category 1, AOSLO success was 65%; for category 2, 47%; for category 3, 11%; and for category 4, 0%. There was a significant association between OCT artifact severity category and AOSLO success (P = 0.0002). Neither BCVA nor axial length was associated with AOSLO success (P = 0.07 and P = 0.75, respectively). Conclusions Artifacts in OCT volumes are associated with AOSLO success in ACHM. Subjects with less severe OCT artifacts are more likely to be good candidates for AOSLO imaging, whereas AOSLO was successful in only 7% of subjects with category 3 or 4 OCT artifacts. These results may be useful in guiding patient selection for AOSLO imaging. Translational Relevance Using OCT to prescreen patients could be a valuable tool for clinical trials that utilize AOSLO to reduce costs and decrease patient testing burden.
Affiliation(s)
- Katie M. Litts
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Erica N. Woertz
- Department of Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
- Michalis Georgiou
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Emily J. Patterson
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Byron L. Lam
- Bascom Palmer Eye Institute, University of Miami, Miami, FL, USA
- Gerald A. Fishman
- Pangere Center for Inherited Retinal Diseases, The Chicago Lighthouse, Chicago, IL, USA
- Mark E. Pennesi
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Michel Michaelides
- UCL Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
12.
Makita S, Miura M, Azuma S, Mino T, Yamaguchi T, Yasuno Y. Accurately motion-corrected Lissajous OCT with multi-type image registration. Biomed Opt Express 2021; 12:637-653. [PMID: 33659092] [PMCID: PMC7899516] [DOI: 10.1364/boe.409004]
Abstract
Passive motion correction methods for optical coherence tomography (OCT) use image registration to estimate eye movements. To improve motion correction, a multi-image cross-correlation that employs spatial features in different image types is introduced. Lateral motion correction using en face OCT and OCT-A projections on Lissajous-scanned OCT data is applied. Motion correction using OCT-A projection of whole depth and OCT amplitude, OCT logarithmic intensity, and OCT maximum intensity projections were evaluated in retinal imaging with 76 patients. The proposed method was compared with motion correction using OCT-A projection of whole depth. The comparison shows improvements in the image quality of motion-corrected superficial OCT-A images and image registration.
Affiliation(s)
- Shuichi Makita
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Computational Optics and Ophthalmology Group, Ibaraki, Japan
- Masahiro Miura
- Computational Optics and Ophthalmology Group, Ibaraki, Japan
- Department of Ophthalmology, Tokyo Medical University Ibaraki Medical Center, 3-20-1 Chuo, Ami, Ibaraki 300-0395, Japan
- Shinnosuke Azuma
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Toshihiro Mino
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Tatsuo Yamaguchi
- Topcon Corporation, 75-1 Hasunumacho, Itabashi, Tokyo 174-8580, Japan
- Yoshiaki Yasuno
- Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Computational Optics and Ophthalmology Group, Ibaraki, Japan
13.
Ksenofontov SY, Shilyagin PA, Terpelov DA, Gelikonov VM, Gelikonov GV. Numerical method for axial motion artifact correction in retinal spectral-domain optical coherence tomography. Front Optoelectron 2020; 13:393-401. [PMID: 36641561] [PMCID: PMC9743928] [DOI: 10.1007/s12200-019-0951-0]
Abstract
A numerical method that compensates image distortions caused by random fluctuations of the distance to an object in spectral-domain optical coherence tomography (SD OCT) has been proposed and verified experimentally. The proposed method is based on the analysis of the phase shifts between adjacent scans that are caused by micrometer-scale displacements and the subsequent compensation for the displacements through phase-frequency correction in the spectral space. The efficiency of the method is demonstrated in model experiments with harmonic and random movements of a scattering object as well as during in vivo imaging of the retina of the human eye.
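The core relation behind such phase-based correction is that a bulk axial displacement dz between adjacent scans adds a phase shift dphi = 4*pi*n*dz/lambda0 to the complex-valued signal. A minimal sketch of estimating dz from that phase shift follows; the center wavelength and refractive index are assumed values, and this illustrates only the displacement estimate, not the paper's full phase-frequency correction in spectral space:

```python
import numpy as np

LAMBDA0 = 840e-9   # center wavelength in meters (assumed)
N_TISSUE = 1.38    # refractive index (assumed)

def axial_displacement(ascan_prev, ascan_curr):
    """Bulk axial displacement between two adjacent complex A-scans,
    from the magnitude-weighted mean phase difference:
        dz = dphi * lambda0 / (4 * pi * n)
    Valid while |dphi| < pi, i.e. for sub-wavelength-scale motion."""
    dphi = np.angle(np.sum(ascan_curr * np.conj(ascan_prev)))
    return dphi * LAMBDA0 / (4.0 * np.pi * N_TISSUE)

# Synthetic check: a pure 50 nm axial shift only rotates the phase.
rng = np.random.default_rng(0)
a0 = rng.normal(size=256) + 1j * rng.normal(size=256)
dz_true = 50e-9
a1 = a0 * np.exp(1j * 4.0 * np.pi * N_TISSUE * dz_true / LAMBDA0)
dz_est = axial_displacement(a0, a1)
```

Larger displacements wrap the phase; unwrapping across successive scans (or correcting in the spectral domain, as the paper does) handles that case.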
Affiliation(s)
- Sergey Yu Ksenofontov
- BioMedTech Llc, Nizhny Novgorod, 603155, Russia
- Institute of Applied Physics of the Russian Academy of Science, Nizhny Novgorod, 603950, Russia
- Pavel A Shilyagin
- Institute of Applied Physics of the Russian Academy of Science, Nizhny Novgorod, 603950, Russia
- Dmitry A Terpelov
- Institute of Applied Physics of the Russian Academy of Science, Nizhny Novgorod, 603950, Russia
- Valentin M Gelikonov
- Institute of Applied Physics of the Russian Academy of Science, Nizhny Novgorod, 603950, Russia
- Grigory V Gelikonov
- Institute of Applied Physics of the Russian Academy of Science, Nizhny Novgorod, 603950, Russia
14.
Kelly JP, Baran FM, Phillips JO, Weiss AH. Matching Misaligned Spectralis OCTs to a Reference Scan in Pediatric Glaucoma with Poor Fixation and Nystagmus. Transl Vis Sci Technol 2020; 9:21. [PMID: 33005479] [PMCID: PMC7509772] [DOI: 10.1167/tvst.9.10.21]
Abstract
Purpose Poor fixation or nystagmus in children causes misalignment errors when measuring circumpapillary retinal nerve fiber layer (cpRNFL) thickness by simultaneous scanning laser ophthalmoscope imaging/optical coherence tomography (SLO/OCT). We investigated a method to assess cpRNFL from misaligned SLO/OCT scans. Methods Heidelberg Spectralis SLO/OCT scans from a single clinical examination were retrospectively analyzed when automated eye tracking was unreliable. Retinal layer thickness was measured at overlapping match locations between a reference and misaligned scans based on the position data from simultaneously acquired SLO images. Three layers were segmented: cpRNFL, internal limiting membrane to outer nuclear layer (ILM-ONL), and total retinal thickness (TR). Accuracy was defined as the difference in thickness between the reference and misaligned scans at their match locations after correction for scan angle. Results Thirty-five subjects, evaluated for glaucomatous nerve loss, met inclusion criteria. Group-averaged accuracy was −2.7, 1.4, and 0.3 µm for cpRNFL, ILM-ONL, and TR thickness, respectively. Across all layers, interobserver intraclass correlation coefficients ranged from 0.97 to 0.63 and the maximum Bland-Altman 95% limits of agreement were −21.6 to 20.7 µm. Variability was greatest for cpRNFL thickness and least for TR thickness. Increased variability was associated with lower signal-to-noise ratio but not with image-motion indices of shear, rotation, and scale. Conclusions Retinal layer thickness can be compared to a reference cpRNFL OCT scan when poor fixation and nystagmus causes misalignment errors. The analysis can be performed post hoc using multiple misaligned scans from standard SLO/OCT protocols. Translational Relevance Our method allows for assessment of cpRNFL in children who fail eye tracking.
Affiliation(s)
- John P Kelly
- Roger H. Johnson Vision Clinic, Seattle Children's Hospital, Division of Ophthalmology, Seattle, WA, USA
- University of Washington, Department of Ophthalmology, Seattle, WA, USA
- Francine M Baran
- Roger H. Johnson Vision Clinic, Seattle Children's Hospital, Division of Ophthalmology, Seattle, WA, USA
- University of Washington, Department of Ophthalmology, Seattle, WA, USA
- James O Phillips
- Roger H. Johnson Vision Clinic, Seattle Children's Hospital, Division of Ophthalmology, Seattle, WA, USA
- University of Washington School of Medicine, Department of Otolaryngology, Seattle, WA, USA
- Avery H Weiss
- University of Washington, Department of Ophthalmology, Seattle, WA, USA
15
Jin Z, Chen S, Dai Y, Bao C, Ye S, Zhou Y, Wang Y, Huang S, Wang Y, Shen M, Zhu D, Lu F. In vivo noninvasive measurement of spatially resolved corneal elasticity in human eyes using Lamb wave optical coherence elastography. J Biophotonics 2020; 13:e202000104. [PMID: 32368840 DOI: 10.1002/jbio.202000104] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 03/23/2020] [Revised: 04/24/2020] [Accepted: 04/29/2020] [Indexed: 05/23/2023]
Abstract
Current elastography techniques are limited in their ability to accurately assess spatially resolved corneal elasticity in vivo in human eyes. An air-puff optical coherence elastography (OCE) system with an eye-motion artifact correction algorithm was developed to distinguish in vivo corneal vibration from eye motion and to visualize Lamb wave propagation clearly in healthy subjects. Based on the Lamb wave model, the phase velocity dispersion curve in the high-frequency range is calculated to obtain spatially resolved corneal elasticity accurately and with high repeatability. Corneal elasticity was found to vary regionally and to correlate with intraocular pressure, suggesting that the method has the potential to provide noninvasive measurement of spatially resolved corneal elasticity in clinical practice.
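The dispersion analysis this abstract relies on, transforming the space-time displacement map into the wavenumber-frequency domain and reading off phase velocity, can be sketched for a single-frequency wave. Real data yield a full dispersion curve rather than one spectral peak, and the sampling intervals and wave parameters below are synthetic:

```python
import numpy as np

def phase_velocity(disp, dx, dt):
    """Dominant phase velocity of a propagating wave from a space-time
    displacement map via the 2D FFT: locate the spectral peak in the
    wavenumber-frequency plane and return |f / k|."""
    nx, nt = disp.shape
    spec = np.abs(np.fft.fft2(disp))
    spec[0, :] = 0.0                       # suppress the DC row/column
    spec[:, 0] = 0.0
    i, j = np.unravel_index(np.argmax(spec), spec.shape)
    k = np.fft.fftfreq(nx, d=dx)[i]        # spatial frequency, 1/m
    f = np.fft.fftfreq(nt, d=dt)[j]        # temporal frequency, Hz
    return abs(f / k)

# synthetic wave: 6250 Hz, wavenumber 1250 1/m -> phase velocity 5 m/s
dx, dt = 5e-5, 2e-5                        # 50 µm spatial, 20 µs temporal sampling
x = np.arange(128) * dx
t = np.arange(128) * dt
disp = np.cos(2 * np.pi * (6250.0 * t[None, :] - 1250.0 * x[:, None]))
c = phase_velocity(disp, dx, dt)
```

Repeating the peak search frequency by frequency (one column of the 2D spectrum at a time) gives the phase velocity dispersion curve the study fits to the Lamb wave model.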
Affiliation(s)
- Zi Jin, Sisi Chen, Yingying Dai, Chenhong Bao, Shuling Ye, Yuheng Zhou, Yiyi Wang, Shenghai Huang, Yuanyuan Wang, Meixiao Shen, Dexi Zhu, Fan Lu
- School of Ophthalmology and Optometry, Wenzhou Medical University, Zhejiang, China
16
Puyo L, Paques M, Atlan M. Spatio-temporal filtering in laser Doppler holography for retinal blood flow imaging. Biomed Opt Express 2020; 11:3274-3287. [PMID: 32637254 PMCID: PMC7316027 DOI: 10.1364/boe.392699] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 03/11/2020] [Revised: 05/08/2020] [Accepted: 05/09/2020] [Indexed: 05/20/2023]
Abstract
Laser Doppler holography (LDH) is a full-field interferometric imaging technique recently applied in ophthalmology to measure blood flow, a parameter of high clinical interest. From the temporal fluctuations of digital holograms acquired at ultrafast frame rates, LDH reveals retinal and choroidal blood flow with a few milliseconds of temporal resolution. However, LDH has difficulty detecting slower blood flow, as it requires working with low Doppler frequency shifts, which are corrupted by eye motion. We here demonstrate the use of a spatio-temporal decomposition adapted from Doppler ultrasound that provides a basis appropriate for discriminating blood flow from eye motion. A singular value decomposition (SVD) can be used as a simple, robust, and efficient way to separate the Doppler fluctuations of blood flow from those with strong spatial coherence, such as eye motion. We show that the SVD outperforms the conventional Fourier-based filter in revealing slower blood flow, and dramatically improves the ability of LDH to reveal vessels of smaller size or with pathologically reduced blood flow.
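The SVD clutter rejection described here can be illustrated on synthetic data: stack the frame series into a Casorati (space × time) matrix, discard the most spatially coherent singular components, and reconstruct. This is a minimal sketch with made-up data, not the authors' implementation:

```python
import numpy as np

def svd_clutter_filter(frames, k):
    """Remove the k most spatially coherent components (e.g. global eye
    motion) from a frame time series via singular value decomposition
    of its Casorati matrix, as done in Doppler ultrasound."""
    nt, ny, nx = frames.shape
    casorati = frames.reshape(nt, ny * nx).T          # (space, time)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s[:k] = 0.0                                       # zero the clutter subspace
    filtered = (u * s) @ vh                           # reconstruct without it
    return filtered.T.reshape(nt, ny, nx)

rng = np.random.default_rng(0)
t = np.arange(64)
motion = np.outer(np.ones(16 * 16), np.cos(0.2 * t))  # rank-1 "eye motion"
flow = 0.1 * rng.standard_normal((16 * 16, 64))       # incoherent flow signal
frames = (motion + flow).T.reshape(64, 16, 16)
clean = svd_clutter_filter(frames, k=1)
```

Zeroing more singular values (larger `k`) rejects more of the spatially coherent signal, at the risk of also removing slow, spatially extended flow.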
Affiliation(s)
- Léo Puyo
- Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 28 rue de Charenton, 75012 Paris, France
- Institut de la Vision-Sorbonne Universités, 17 rue Moreau, 75012 Paris, France
- Michel Paques
- Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 28 rue de Charenton, 75012 Paris, France
- Institut de la Vision-Sorbonne Universités, 17 rue Moreau, 75012 Paris, France
- Michael Atlan
- Institut Langevin, Centre National de la Recherche Scientifique (CNRS), Paris Sciences & Lettres (PSL University), École Supérieure de Physique et de Chimie Industrielles (ESPCI Paris), 1 rue Jussieu, 75005 Paris, France
17
Daneshmand PG, Rabbani H, Mehridehnavi A. Super-Resolution of Optical Coherence Tomography Images by Scale Mixture Models. IEEE Trans Image Process 2020; 29:5662-5676. [PMID: 32275595 DOI: 10.1109/tip.2020.2984896] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Indexed: 06/11/2023]
Abstract
In this paper, a new statistical model is proposed for single-image super-resolution of retinal Optical Coherence Tomography (OCT) images. OCT imaging relies on interferometry, which explains why OCT images suffer from a high level of noise. Moreover, data subsampling is carried out during the acquisition of OCT A-scans and B-scans, so effective super-resolution algorithms are needed to reconstruct high-resolution clean OCT images. Here, a nonlocal sparse model-based Bayesian framework is proposed for OCT restoration. By characterizing nonlocal patches with similar structures, known as a group, the sparse coefficients of each group of OCT images are modeled by scale mixture models, in which the coefficient vector is decomposed into the point-wise product of a random vector and a positive scaling variable. Estimation of the sparse coefficients depends on the proposed distribution for the random vector and scaling variable, where a Laplacian random vector with a Generalized Extreme-Value (GEV) scale parameter (the Laplacian+GEV model) shows the best goodness of fit for each group of OCT images. Finally, a new OCT super-resolution method based on this scale mixture model is introduced, where the maximum a posteriori estimates of both the sparse coefficients and the scaling variables are calculated efficiently by an alternating minimization method. Our experimental results show that the proposed OCT super-resolution method based on the Laplacian+GEV model outperforms competing methods in terms of both subjective and objective visual quality.
18
Jin Z, Zhou Y, Shen M, Wang Y, Lu F, Zhu D. Assessment of corneal viscoelasticity using elastic wave optical coherence elastography. J Biophotonics 2020; 13:e201960074. [PMID: 31626371 DOI: 10.1002/jbio.201960074] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 08/08/2019] [Revised: 09/21/2019] [Accepted: 10/16/2019] [Indexed: 06/10/2023]
Abstract
Corneal viscoelasticity has great clinical significance, for example in the early diagnosis of keratoconus. In this work, an analysis method that utilizes elastic wave velocity, frequency, and energy attenuation to assess corneal viscoelasticity is presented. Using phase-resolved optical coherence tomography, the spatial-temporal displacement map is derived. The phase velocity dispersion curve and center frequency are obtained by transforming the displacement map into the wavenumber-frequency domain through the 2D fast Fourier transform (FFT). The shear modulus is calculated through the Rayleigh wave equation using the phase velocity at high frequency. The normalized energy distribution is plotted by transforming the displacement map into the spatial-frequency domain through the 1D FFT. The energy attenuation coefficient is derived by exponential fitting and used to calculate the viscous modulus. Tissue-mimicking phantoms of different concentrations and porcine corneas were imaged to validate this method, demonstrating its capability to assess corneal viscoelasticity.
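The two extraction steps described, a shear modulus from the high-frequency phase velocity via a Rayleigh surface-wave relation, and an attenuation coefficient from an exponential fit to the wave amplitude, can be sketched as below. The density, Poisson's ratio, and the synthetic velocity and decay values are assumptions for illustration, not values from the paper:

```python
import numpy as np

RHO = 1000.0   # tissue density, kg/m^3 (assumed)
NU = 0.499     # Poisson's ratio of nearly incompressible tissue (assumed)

def shear_modulus_rayleigh(c_r, rho=RHO, nu=NU):
    """Shear modulus G from Rayleigh-wave phase velocity using the
    approximation c_R = sqrt(G/rho) * (0.87 + 1.12*nu) / (1 + nu)."""
    factor = (0.87 + 1.12 * nu) / (1.0 + nu)
    return rho * (c_r / factor) ** 2

def attenuation_coefficient(x, amplitude):
    """Exponential-decay fit A(x) = A0 * exp(-alpha * x): a linear fit
    of log-amplitude against propagation distance yields alpha."""
    slope, _ = np.polyfit(x, np.log(amplitude), 1)
    return -slope

x = np.linspace(0, 5e-3, 50)        # 0-5 mm propagation distance
amp = np.exp(-300.0 * x)            # synthetic decay, alpha = 300 1/m
g = shear_modulus_rayleigh(3.0)     # ~10 kPa for a 3 m/s wave
alpha = attenuation_coefficient(x, amp)
```

For nearly incompressible tissue the Rayleigh factor is about 0.95, so the shear modulus is close to `rho * c_r**2`; the attenuation coefficient then feeds a viscosity estimate in the study's viscoelastic model.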
Affiliation(s)
- Zi Jin, Yuheng Zhou, Meixiao Shen, Yuanyuan Wang, Fan Lu, Dexi Zhu
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
19
Jin Z, Khazaeinezhad R, Zhu J, Yu J, Qu Y, He Y, Li Y, Gomez Alvarez-Arenas TE, Lu F, Chen Z. In-vivo 3D corneal elasticity using air-coupled ultrasound optical coherence elastography. Biomed Opt Express 2019; 10:6272-6285. [PMID: 31853399 PMCID: PMC6913398 DOI: 10.1364/boe.10.006272] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Received: 07/26/2019] [Revised: 10/03/2019] [Accepted: 10/10/2019] [Indexed: 05/03/2023]
Abstract
Corneal elasticity resists elastic deformation under intraocular pressure to maintain normal corneal shape, and therefore has a great influence on corneal refractive function. Elastography can measure tissue elasticity and provides a powerful tool for clinical diagnosis. Air-coupled ultrasound optical coherence elastography (OCE) has been used to quantify ex-vivo corneal elasticity; however, in-vivo imaging of the cornea remains a challenge. A 3D air-coupled ultrasound OCE system with an axial motion artifact correction algorithm was developed to distinguish in-vivo corneal vibration from axial eye motion in anesthetized rabbits and to visualize the elastic wave propagation clearly. The elastic wave group velocity of the in-vivo rabbit cornea was measured to be 5.96 ± 0.55 m/s, in agreement with other studies. The results show the potential of 3D air-coupled ultrasound OCE with an axial motion artifact correction algorithm for quantitative in-vivo assessment of corneal elasticity.
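A group velocity like the 5.96 m/s reported here is commonly obtained as the slope of a linear fit of lateral propagation distance against the arrival time of the displacement peak; a minimal sketch with synthetic arrival times (all values assumed):

```python
import numpy as np

def group_velocity(distances, times):
    """Elastic-wave group velocity as the slope of a linear fit of
    lateral propagation distance against peak arrival time."""
    slope, _ = np.polyfit(times, distances, 1)
    return slope

# synthetic arrival times for a 6 m/s wave over a 4 mm lateral window
d = np.linspace(0, 4e-3, 20)                                        # metres
t = d / 6.0 + 1e-6 * np.random.default_rng(1).standard_normal(20)   # seconds, jittered
v = group_velocity(d, t)
```

Fitting distance against time (rather than averaging pairwise velocities) keeps the estimate robust to the timing jitter that eye or specimen motion introduces.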
Affiliation(s)
- Zi Jin
- Beckman Laser Institute, Department of Biomedical Engineering, University of California, Irvine, Irvine, California 92612, USA
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325003, Zhejiang, China
- These authors contributed equally to this work
- Reza Khazaeinezhad
- Beckman Laser Institute, Department of Biomedical Engineering, University of California, Irvine, Irvine, California 92612, USA
- These authors contributed equally to this work
- Jiang Zhu, Junxiao Yu, Yueqiao Qu, Youmin He, Yan Li
- Beckman Laser Institute, Department of Biomedical Engineering, University of California, Irvine, Irvine, California 92612, USA
- Tomas E Gomez Alvarez-Arenas
- Institute of Physical and Information Technologies, Spanish National Research Council (CSIC), 28006 Madrid, Spain
- Fan Lu
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325003, Zhejiang, China
- Zhongping Chen
- Beckman Laser Institute, Department of Biomedical Engineering, University of California, Irvine, Irvine, California 92612, USA
20
Review on Retrospective Procedures to Correct Retinal Motion Artefacts in OCT Imaging. Appl Sci (Basel) 2019. [DOI: 10.3390/app9132700] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Indexed: 01/06/2023]
Abstract
Motion artefacts from involuntary changes in eye fixation remain a major imaging issue in optical coherence tomography (OCT). This paper reviews the state of the art in retrospective procedures to correct retinal motion and axial eye motion artefacts in OCT imaging. Following an overview of motion-induced artefacts and correction strategies, a chronological survey of retrospective approaches from the introduction of OCT to the present day is presented. Pre-processing, registration, and validation techniques are described. The review finishes by discussing the limitations of current techniques and the challenges to be tackled in future developments.
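One of the simplest retrospective corrections surveyed in this literature aligns each A-scan to its neighbour by the integer axial shift that maximises their cross-correlation, then accumulates the pairwise shifts. The sketch below runs on a synthetic B-scan; the layer profile and drift are made up, and real pipelines add subpixel registration and handle the wrap-around at the image edges:

```python
import numpy as np

def axial_motion_correct(bscan):
    """Estimate per-column axial drift by cross-correlating each A-scan
    with its left neighbour, accumulate the pairwise shifts, and roll
    every column back to the frame of the first A-scan."""
    depth, width = bscan.shape
    shifts = np.zeros(width, dtype=int)
    for j in range(1, width):
        xcorr = np.correlate(bscan[:, j], bscan[:, j - 1], mode="full")
        # zero lag sits at index depth - 1 of the full correlation
        shifts[j] = shifts[j - 1] + (int(np.argmax(xcorr)) - (depth - 1))
    corrected = np.stack(
        [np.roll(bscan[:, j], -shifts[j]) for j in range(width)], axis=1
    )
    return corrected, shifts

base = np.exp(-0.5 * ((np.arange(128) - 60) / 4.0) ** 2)         # one bright layer
drift = np.round(6 * np.sin(np.linspace(0, 3, 32))).astype(int)  # slow axial eye motion
bscan = np.stack([np.roll(base, s) for s in drift], axis=1)      # motion-corrupted scan
corrected, est = axial_motion_correct(bscan)
```

Because shifts are chained column to column, a single bad match propagates to every later A-scan, which is one reason the review's surveyed methods prefer registration against a reference scan or global optimisation.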
21
Abbasi A, Monadjemi A, Fang L, Rabbani H, Zhang Y. Three-dimensional optical coherence tomography image denoising through multi-input fully-convolutional networks. Comput Biol Med 2019; 108:1-8. [PMID: 30901625 DOI: 10.1016/j.compbiomed.2019.01.010] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Received: 08/08/2018] [Revised: 01/14/2019] [Accepted: 01/14/2019] [Indexed: 11/30/2022]
Abstract
In recent years, there has been growing interest in applying convolutional neural networks (CNNs) to low-level vision tasks such as denoising and super-resolution. Due to the coherent nature of the image formation process, optical coherence tomography (OCT) images are inevitably affected by noise. This paper proposes a new method named multi-input fully-convolutional networks (MIFCN) for denoising OCT images. In contrast to recently proposed natural-image denoising CNNs, the proposed architecture exploits the high degree of correlation and complementary information among neighboring OCT images through pixel-by-pixel fusion of multiple FCNs. The parameters of the proposed multi-input architecture are learned by considering the consistency between the overall output and the contribution of each input image. The proposed MIFCN method is compared quantitatively and qualitatively with state-of-the-art denoising methods applied to OCT images of normal and age-related macular degeneration eyes.
Affiliation(s)
- Ashkan Abbasi
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Amirhassan Monadjemi
- Artificial Intelligence Department, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran
- Leyuan Fang
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Hossein Rabbani
- Department of Biomedical Engineering, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, China
22
Teikari P, Najjar RP, Schmetterer L, Milea D. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter. Ther Adv Ophthalmol 2019; 11:2515841419827172. [PMID: 30911733 PMCID: PMC6425531 DOI: 10.1177/2515841419827172] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Received: 10/17/2018] [Accepted: 12/20/2018] [Indexed: 01/22/2023] Open
Abstract
Deep learning has recently gained high interest in ophthalmology due to its ability to detect clinically significant features for diagnosis and prognosis. Despite these significant advances, little is known about the ability of various deep learning systems to be embedded within ophthalmic imaging devices, allowing automated image acquisition. In this work, we review existing and future directions for 'active acquisition'-embedded deep learning, aiming at high-quality images with little intervention by the human operator. In clinical practice, the improved image quality should translate into more robust deep learning-based clinical diagnostics. Embedded deep learning will be enabled by constantly improving hardware performance at low cost. We briefly review possible computation methods in larger clinical systems. Briefly, they can be organized in a three-layer framework composed of edge, fog, and cloud layers, the first operating at the device level. Improved edge-layer performance via 'active acquisition' serves as an automatic data curation operator, translating to better-quality data in electronic health records as well as on the cloud layer, for improved deep learning-based clinical data mining.
Affiliation(s)
- Petteri Teikari
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Raymond P. Najjar
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore
- Leopold Schmetterer
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Vienna, Austria
- Dan Milea
- Visual Neurosciences Group, Singapore Eye Research Institute, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore