1
Li K, Yang J, Liang W, Li X, Zhang C, Chen L, Wu C, Zhang X, Xu Z, Wang Y, Meng L, Zhang Y, Chen Y, Zhou SK. O-PRESS: Boosting OCT axial resolution with Prior guidance, Recurrence, and Equivariant Self-Supervision. Med Image Anal 2024; 99:103319. PMID: 39270466. DOI: 10.1016/j.media.2024.103319.
Abstract
Optical coherence tomography (OCT) is a noninvasive technology that enables real-time imaging of tissue microanatomies. The axial resolution of OCT is intrinsically constrained by the spectral bandwidth of the employed light source while the center wavelength remains fixed for a given application. Physically extending this bandwidth faces strong limitations and incurs substantial cost. We present a novel computational approach, called O-PRESS, for boosting the axial resolution of OCT with Prior guidance, a Recurrent mechanism, and Equivariant Self-Supervision. Diverging from conventional deconvolution methods that rely on physical models or data-driven techniques, our method seamlessly integrates OCT modeling and deep learning, enabling real-time axial-resolution enhancement exclusively from measurements, without the need for paired images. Our approach addresses the two primary tasks of resolution enhancement and noise reduction in a single treatment. Both tasks are executed in a self-supervised manner, with equivariant imaging and free-space priors guiding the respective processes. Experimental evaluations, encompassing both quantitative metrics and visual assessments, consistently verify the efficacy and superiority of our approach, which performs on par with fully supervised methods. Importantly, the robustness of our model is affirmed, showcasing its dual capability to enhance axial resolution while concurrently improving the signal-to-noise ratio.
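For readers who want a concrete picture of the equivariant self-supervision idea described in this abstract, the sketch below trains a toy restoration network from measurements alone by combining a measurement-consistency term (re-blurring the estimate with a forward model) with an equivariance term (the restored image should be consistent under a transformation of the scene). It is a minimal illustration assuming PyTorch, a Gaussian axial PSF, and a placeholder network and transformation group; none of these come from the O-PRESS paper.

```python
# Minimal sketch of equivariant self-supervised deconvolution for OCT A-lines.
# Assumptions (not from the paper): a 1-D Gaussian axial PSF, axial flips as a
# toy transformation group, and a small CNN `f`; all names are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def axial_blur(x, psf):
    # x: (batch, 1, depth) A-lines; psf: (1, 1, k) normalized blur kernel
    return F.conv1d(x, psf, padding=psf.shape[-1] // 2)

f = nn.Sequential(  # toy restoration network
    nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 32, 7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 1, 7, padding=3),
)

psf = torch.exp(-torch.linspace(-3, 3, 15) ** 2 / 2).reshape(1, 1, -1)
psf = psf / psf.sum()
opt = torch.optim.Adam(f.parameters(), lr=1e-4)

y = torch.rand(8, 1, 512)                         # stand-in low-resolution A-lines
opt.zero_grad()
x1 = f(y)                                         # restored estimate
loss_mc = F.mse_loss(axial_blur(x1, psf), y)      # measurement consistency
x2 = torch.flip(x1, dims=[-1])                    # apply a group transformation
x3 = f(axial_blur(x2, psf))                       # re-measure and restore
loss_eq = F.mse_loss(x3, x2)                      # equivariance consistency
(loss_mc + loss_eq).backward()
opt.step()
```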
Affiliation(s)
- Kaiyan Li
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Wenxuan Liang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; School of Physical Sciences, University of Science and Technology of China, Hefei Anhui, 230026, China
- Xingde Li
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21287, USA
- Chenxi Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lulu Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Zhiyan Xu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yueling Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Lihui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yue Zhang
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China.
- S Kevin Zhou
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China (USTC), Hefei Anhui, 230026, China; Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, USTC, Suzhou Jiangsu, 215123, China; Key Laboratory of Precision and Intelligent Chemistry, USTC, Hefei Anhui, 230026, China; Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China.
2
Tse T, Chen Y, Siadati M, Miao Y, Song J, Ma D, Mammo Z, Ju MJ. Generalized 3D registration algorithm for enhancing retinal optical coherence tomography images. J Biomed Opt 2024; 29:066002. PMID: 38745984. PMCID: PMC11091473. DOI: 10.1117/1.jbo.29.6.066002.
Abstract
Significance: Optical coherence tomography (OCT) has emerged as the standard of care for diagnosing and monitoring the treatment of various ocular disorders, owing to its noninvasive nature and in vivo volumetric acquisition capability. Despite its widespread applications in ophthalmology, motion artifacts remain a challenge in OCT imaging, adversely impacting image quality. While several multivolume registration algorithms have been developed to address this issue, they are often designed to cater to one specific OCT system or acquisition protocol. Aim: We aim to generate an OCT volume free of motion artifacts using a system-agnostic registration algorithm that is independent of system specifications or protocol. Approach: We developed a B-scan registration algorithm that removes motion artifacts by correcting for both translational eye movements and rotational angle differences between volumes. Tests were carried out on datasets obtained from two types of custom-built OCT systems and one commercially available system to determine the reliability of the proposed algorithm. Additionally, different system specifications were used, with variations in axial resolution, lateral resolution, signal-to-noise ratio, and real-time motion tracking. The accuracy of the method was further evaluated through the mean squared error (MSE) and the multiscale structural similarity index measure (MS-SSIM). Results: The results demonstrate improvements in the overall contrast of the images, facilitating detailed visualization of retinal vasculature in both the superficial and deep vascular plexuses. Finer features of the inner and outer retina, such as photoreceptors and other pathology-specific features, are discernible after multivolume registration and averaging. Quantitative analyses confirm that increasing the number of averaged registered volumes decreases the MSE and increases the MS-SSIM relative to the reference volume. Conclusions: The multivolume registered data obtained from this algorithm offer significantly improved visualization of the retinal microvascular network as well as retinal morphological features. Furthermore, we have validated that the versatility of our methodology extends beyond specific OCT modalities, thereby enhancing the clinical utility of OCT for the diagnosis and monitoring of ocular pathologies.
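The sketch below illustrates the translational portion of such a registration-and-averaging workflow: each B-scan is aligned to its reference counterpart by phase correlation, the aligned volume is averaged with the reference, and MSE and SSIM are computed. It is a hypothetical stand-in, not the published algorithm: rotational correction is omitted and single-scale SSIM replaces the MS-SSIM used in the study.

```python
# Minimal sketch of translational B-scan alignment via phase correlation, followed
# by MSE/SSIM evaluation against a reference volume. Synthetic stand-in data only.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation
from skimage.metrics import mean_squared_error, structural_similarity

def register_bscans(reference, moving):
    """Align each moving B-scan (z, x) to the matching reference B-scan."""
    aligned = np.empty_like(moving)
    for i, (ref_b, mov_b) in enumerate(zip(reference, moving)):
        offset, _, _ = phase_cross_correlation(ref_b, mov_b, upsample_factor=10)
        aligned[i] = nd_shift(mov_b, offset, order=1, mode="nearest")
    return aligned

rng = np.random.default_rng(0)
ref = rng.random((64, 256, 256))        # stand-in reference volume (B-scans, z, x)
mov = np.roll(ref, shift=3, axis=1)     # simulated axial bulk motion
avg = 0.5 * (ref + register_bscans(ref, mov))

print("MSE :", mean_squared_error(ref, avg))
print("SSIM:", structural_similarity(ref, avg, data_range=1.0))
```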
Affiliation(s)
- Tiffany Tse
- The University of British Columbia, School of Biomedical Engineering, Faculty of Medicine and Applied Science, Vancouver, British Columbia, Canada
- Yudan Chen
- The University of British Columbia, School of Biomedical Engineering, Faculty of Medicine and Applied Science, Vancouver, British Columbia, Canada
- Mahsa Siadati
- The University of British Columbia, School of Biomedical Engineering, Faculty of Medicine and Applied Science, Vancouver, British Columbia, Canada
- Yusi Miao
- The University of British Columbia, Department of Ophthalmology and Visual Sciences, Faculty of Medicine, Vancouver, British Columbia, Canada
- Jun Song
- The University of British Columbia, School of Biomedical Engineering, Faculty of Medicine and Applied Science, Vancouver, British Columbia, Canada
- Da Ma
- Wake Forest University, School of Medicine, Winston-Salem, North Carolina, United States
- Zaid Mammo
- The University of British Columbia, Department of Ophthalmology and Visual Sciences, Faculty of Medicine, Vancouver, British Columbia, Canada
- Myeong Jin Ju
- The University of British Columbia, School of Biomedical Engineering, Faculty of Medicine and Applied Science, Vancouver, British Columbia, Canada
- The University of British Columbia, Department of Ophthalmology and Visual Sciences, Faculty of Medicine, Vancouver, British Columbia, Canada
3
Salimi M, Tabatabaei N, Villiger M. Artificial neural network for enhancing signal-to-noise ratio and contrast in photothermal optical coherence tomography. Sci Rep 2024; 14:10264. PMID: 38704427. PMCID: PMC11069506. DOI: 10.1038/s41598-024-60682-7.
Abstract
Optical coherence tomography (OCT) is a medical imaging method that generates micron-resolution 3D volumetric images of tissues in vivo. Photothermal (PT)-OCT is a functional extension of OCT with the potential to provide depth-resolved molecular information complementary to the OCT structural images. PT-OCT typically requires long acquisition times to measure small fluctuations in the OCT phase signal. Here, we use a neural network to infer the amplitude of the photothermal phase modulation from a short signal trace, trained in a supervised fashion with the ground-truth signal obtained by conventional reconstruction of the PT-OCT signal from a longer acquisition trace. Results from phantom and tissue studies show that the developed network improves the signal-to-noise ratio (SNR) and contrast, enabling PT-OCT imaging with short acquisition times and without any hardware modification to the PT-OCT system. The developed network removes one of the key barriers to clinical translation of PT-OCT, namely its long acquisition time.
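As a rough illustration of the supervised setup described here, the sketch below regresses a photothermal amplitude from a short phase trace with a small 1-D convolutional network, using labels that would come from conventional reconstruction of longer traces. The architecture, trace length, and data are placeholder assumptions, not the authors' network.

```python
# Minimal sketch: infer the photothermal phase-modulation amplitude from a short
# OCT phase trace, trained against labels from long-trace reconstruction.
import torch
import torch.nn as nn

SHORT_LEN = 64   # samples in the short acquisition (assumed)

model = nn.Sequential(
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1),                            # predicted PT phase amplitude
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

short_trace = torch.randn(32, 1, SHORT_LEN)      # stand-in OCT phase traces
amp_label = torch.rand(32, 1)                    # amplitude from long-trace reconstruction

opt.zero_grad()
loss = loss_fn(model(short_trace), amp_label)
loss.backward()
opt.step()
```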
Affiliation(s)
- Mohammadhossein Salimi
- Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON, M3J 1P3, Canada
- Nima Tabatabaei
- Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON, M3J 1P3, Canada.
- Center for Vision Research, York University, Toronto, ON, M3J 1P3, Canada.
- Martin Villiger
- Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON, M3J 1P3, Canada.
- Harvard Medical School, Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA, 02114, USA.
4
Salimi M, Roshanfar M, Tabatabaei N, Mosadegh B. Machine Learning-Assisted Short-Wave InfraRed (SWIR) Techniques for Biomedical Applications: Towards Personalized Medicine. J Pers Med 2023; 14:33. PMID: 38248734. PMCID: PMC10817559. DOI: 10.3390/jpm14010033.
Abstract
Personalized medicine transforms healthcare by adapting interventions to individuals' unique genetic, molecular, and clinical profiles. To maximize diagnostic and/or therapeutic efficacy, personalized medicine requires advanced imaging devices and sensors for accurate assessment and monitoring of individual patient conditions or responses to therapeutics. In the field of biomedical optics, short-wave infrared (SWIR) techniques offer an array of capabilities that hold promise to significantly enhance diagnostics, imaging, and therapeutic interventions. SWIR techniques provide in vivo information that was previously inaccessible: their capacity to penetrate biological tissues with reduced attenuation enables researchers and clinicians to delve deeper into anatomical structures, physiological processes, and molecular interactions. Combining SWIR techniques with machine learning (ML), a powerful tool for analyzing information, holds the potential to provide unprecedented accuracy in disease detection, precision in treatment guidance, and correlation of complex biological features, opening the way for data-driven personalized medicine. Despite numerous biomedical demonstrations that utilize cutting-edge SWIR techniques, the clinical potential of this approach remains significantly underexplored. This paper demonstrates how the synergy between SWIR imaging and ML is reshaping biomedical research and clinical applications. As the paper showcases the growing significance of SWIR imaging techniques empowered by ML, it calls for continued collaboration between researchers, engineers, and clinicians to accelerate the translation of this technology into the clinic, ultimately bridging the gap between cutting-edge technology and its potential for personalized medicine.
Affiliation(s)
- Majid Roshanfar
- Department of Mechanical Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
- Nima Tabatabaei
- Department of Mechanical Engineering, York University, Toronto, ON M3J 1P3, Canada
- Bobak Mosadegh
- Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, NY 10021, USA
5
Maltais-Tariant R, Itzamna Becerra-Deana R, Brais-Brunet S, Dehaes M, Boudoux C. Speckle contrast reduction through the use of a modally-specific photonic lantern for optical coherence tomography. Biomed Opt Express 2023; 14:6250-6259. PMID: 38420311. PMCID: PMC10898554. DOI: 10.1364/boe.504861.
Abstract
A few-mode optical coherence tomography (FM-OCT) system was developed around a 2 × 1 modally-specific photonic lantern (MSPL) centered at 1310 nm. The MSPL allowed FM-OCT to acquire two coregistered images with uncorrelated speckle patterns, generated by their distinct coherent spread functions. Here, we showed that averaging such images in vitro and in vivo reduced the speckle contrast by up to 28% and increased the signal-to-noise ratio (SNR) by up to 48%, with negligible impact on image spatial resolution. This method is compatible with other speckle reduction techniques to further improve OCT image quality.
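A minimal numerical illustration of the reported effect: averaging two images with uncorrelated, fully developed speckle lowers the speckle contrast (std/mean over a homogeneous region) by roughly a factor of √2 and raises the SNR correspondingly. The synthetic exponential-intensity speckle below is an assumption for illustration, not FM-OCT data.

```python
# Minimal sketch: speckle contrast and SNR before and after averaging two
# coregistered images with uncorrelated speckle patterns (synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(1)
signal = np.full((256, 256), 10.0)
img_a = signal * rng.exponential(1.0, signal.shape)   # fully developed speckle, pattern A
img_b = signal * rng.exponential(1.0, signal.shape)   # uncorrelated speckle, pattern B
avg = 0.5 * (img_a + img_b)

ROI = np.s_[64:192, 64:192]                            # homogeneous region of interest

def speckle_contrast(img):
    patch = img[ROI]
    return patch.std() / patch.mean()

def snr_db(img):
    patch = img[ROI]
    return 20 * np.log10(patch.mean() / patch.std())

print("contrast single/avg:", speckle_contrast(img_a), speckle_contrast(avg))
print("SNR (dB) single/avg:", snr_db(img_a), snr_db(avg))
```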
Affiliation(s)
- Simon Brais-Brunet
- Research Centre, CHU Sainte-Justine, Montréal, Canada
- Université de Montréal, Institute of Biomedical Engineering, Montréal, Canada
- Mathieu Dehaes
- Research Centre, CHU Sainte-Justine, Montréal, Canada
- Université de Montréal, Institute of Biomedical Engineering, Montréal, Canada
- Université de Montréal, Department of Radiology, Radio-oncology and Nuclear Medicine, Montréal, Canada
- Caroline Boudoux
- Polytechnique Montréal, Department of Engineering Physics, Montréal, Canada
- Castor Optics, Saint-Laurent, Canada
- Research Centre, CHU Sainte-Justine, Montréal, Canada
6
Wu X, Gao W, Bian H. Self-denoising method for OCT images with single spectrogram-based deep learning. Opt Lett 2023; 48:4945-4948. PMID: 37773356. DOI: 10.1364/ol.499966.
Abstract
The presence of noise in images reconstructed with optical coherence tomography (OCT) is a key issue that limits further improvement of image quality. In this Letter, for the first time, to the best of our knowledge, a self-denoising method for OCT images is presented based on single-spectrogram deep learning. The noise specific to each image can be estimated at extremely low computational cost. The deep-learning model consists of two fully connected layers, two convolution layers, and one deconvolution layer, with the input being the raw interference spectrogram and the label being the image reconstructed from it by the Fourier transform. The denoised image is obtained by subtracting the noise predicted by the model from the label image. OCT images of a TiO2 phantom, an orange, and a zebrafish obtained with our spectral-domain OCT system are used as examples to demonstrate the capability of the method. The results demonstrate its effectiveness in reducing noise such as speckle patterns and horizontal and vertical stripes. Compared with the label image, the signal-to-noise ratio is improved by 35.0 dB, and the image contrast is improved by a factor of two. Compared with the results denoised by the averaging method, the mean peak signal-to-noise ratio is 26.2 dB.
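The sketch below mirrors the described pipeline in outline: a small network with two fully connected, two convolutional, and one deconvolutional layer maps the raw spectrogram to a noise estimate, which is subtracted from the Fourier-reconstructed label image. All layer widths, image sizes, and the toy data are assumptions; the authors' exact parameters are not given in the abstract.

```python
# Minimal sketch of single-spectrogram self-denoising: predict a noise map from the
# raw interference spectrogram and subtract it from the FFT-reconstructed label image.
import torch
import torch.nn as nn

H = W = 128   # assumed number of A-lines / spectral samples (placeholder)

class NoiseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(H * W, 1024), nn.ReLU(),
                                nn.Linear(1024, 16 * 32 * 32), nn.ReLU())
        self.conv = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.deconv = nn.ConvTranspose2d(16, 1, 4, stride=4)   # 32x32 -> 128x128

    def forward(self, spectrogram):
        z = self.fc(spectrogram.flatten(1)).reshape(-1, 16, 32, 32)
        return self.deconv(self.conv(z))

spectrogram = torch.rand(1, 1, H, W)                  # raw interference fringes (stand-in)
label = torch.fft.fft(spectrogram, dim=-1).abs()      # FFT-reconstructed label image
noise = NoiseNet()(spectrogram)                       # predicted noise map
denoised = label - noise                              # subtract predicted noise
```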
7
Li X, Dong Z, Liu H, Kang-Mieler JJ, Ling Y, Gan Y. Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network. Biomed Opt Express 2023; 14:5148-5161. PMID: 37854579. PMCID: PMC10581809. DOI: 10.1364/boe.494557.
Abstract
Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability of resolving morphological structures. However, existing deep learning-based methods focus only on the spatial distribution and disregard frequency fidelity in image reconstruction, leading to frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
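A frequency-based loss of the kind referred to above can be illustrated as an image-domain L1 term plus an L1 penalty between the FFT magnitudes of the super-resolved and ground-truth images, as sketched below. The weighting and the stand-in tensors are assumptions; the paper's cGAN generator, frequency modules, and exact loss are not reproduced here.

```python
# Minimal sketch of a frequency-aware reconstruction loss: spatial L1 plus L1 on
# FFT magnitudes. The weight w_freq and the random tensors are placeholders.
import torch
import torch.nn.functional as F

def frequency_aware_loss(sr, hr, w_freq=0.1):
    spatial = F.l1_loss(sr, hr)                                   # image-domain fidelity
    freq = F.l1_loss(torch.fft.fft2(sr).abs(),
                     torch.fft.fft2(hr).abs())                    # frequency fidelity
    return spatial + w_freq * freq

sr = torch.rand(4, 1, 128, 128, requires_grad=True)   # generator output (stand-in)
hr = torch.rand(4, 1, 128, 128)                       # ground-truth high-resolution OCT
loss = frequency_aware_loss(sr, hr)
loss.backward()
```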
Affiliation(s)
- Xueshen Li
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Zhenxing Dong
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, Minhang District, 200240, China
- Hongshan Liu
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Jennifer J. Kang-Mieler
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Yuye Ling
- Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, Minhang District, 200240, China
- Yu Gan
- Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA