1. Zhong W, Li T, Hou S, Zhang H, Li Z, Wang G, Liu Q, Song X. Unsupervised disentanglement strategy for mitigating artifact in photoacoustic tomography under extremely sparse view. Photoacoustics 2024;38:100613. PMID: 38764521; PMCID: PMC11101706; DOI: 10.1016/j.pacs.2024.100613.
Abstract
Traditional reconstruction methods for photoacoustic tomography (PAT) under sparse view often produce significant artifacts. Here, a novel image-to-image translation method based on an unsupervised artifact disentanglement network (ADN), named PAT-ADN, is proposed to address this issue. The network is equipped with specialized encoders and decoders that encode and decode the artifact and content components of unpaired images, respectively. The performance of the proposed PAT-ADN was evaluated using circular phantom data and in vivo animal experimental data. The results demonstrate that PAT-ADN effectively removes artifacts. In particular, under extremely sparse view (e.g., 16 projections), the structural similarity index and peak signal-to-noise ratio on the in vivo experimental data are improved by ∼188% and ∼85%, respectively, compared to traditional reconstruction methods. PAT-ADN improves the imaging performance of PAT, opening up possibilities for its application in multiple domains.
Affiliation(s)
- Wenhua Zhong
  - Nanchang University, School of Information Engineering, Nanchang, China
- Tianle Li
  - Nanchang University, Jiluan Academy, Nanchang, China
- Shangkun Hou
  - Nanchang University, School of Information Engineering, Nanchang, China
- Hongyu Zhang
  - Nanchang University, School of Information Engineering, Nanchang, China
- Zilong Li
  - Nanchang University, School of Information Engineering, Nanchang, China
- Guijun Wang
  - Nanchang University, School of Information Engineering, Nanchang, China
- Qiegen Liu
  - Nanchang University, School of Information Engineering, Nanchang, China
- Xianlin Song
  - Nanchang University, School of Information Engineering, Nanchang, China
2. Liang Z, Zhang S, Liang Z, Mo Z, Zhang X, Zhong Y, Chen W, Qi L. Deep learning acceleration of iterative model-based light fluence correction for photoacoustic tomography. Photoacoustics 2024;37:100601. PMID: 38516295; PMCID: PMC10955667; DOI: 10.1016/j.pacs.2024.100601.
Abstract
Photoacoustic tomography (PAT) is a promising imaging technique that can visualize the distribution of chromophores within biological tissue. However, the accuracy of PAT imaging is compromised by light fluence (LF), which hinders the quantification of light absorbers. Currently, model-based iterative methods are used for LF correction, but they require extensive computational resources due to repeated LF estimation based on differential light transport models. To improve LF correction efficiency, we propose to use a Fourier neural operator (FNO), a neural network specifically designed for learning the solution operators of partial differential equations, to learn the forward projection of light transport in PAT. Trained on paired finite-element-based LF simulation data, our FNO model replaces the traditional, computationally heavy LF estimator during iterative correction, so that the correction procedure is considerably accelerated. Simulation and experimental results demonstrate that our method achieves LF correction quality comparable to traditional iterative methods while reducing correction time more than 30-fold.
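The iterative correction loop described in this abstract can be sketched as a fixed-point update in which a fluence estimator is invoked once per iteration; in the paper that estimator is the expensive transport model (or its trained FNO surrogate), while the sketch below substitutes a toy Beer-Lambert decay. All function names and parameter values here are illustrative, not taken from the authors' code.

```python
import numpy as np

def toy_fluence(mu_a, dz):
    """Stand-in fluence model: Beer-Lambert decay from cumulative attenuation
    along a 1D depth profile. In the paper, a learned FNO surrogate replaces
    this repeatedly-called estimator."""
    return np.exp(-np.cumsum(mu_a) * dz)

def iterative_lf_correction(p0, dz, gamma=1.0, n_iter=20, eps=1e-6):
    """Recover absorption mu_a from initial pressure p0 = gamma * mu_a * phi(mu_a)
    via the fixed-point update mu_a <- p0 / (gamma * phi(mu_a))."""
    mu_a = p0 / gamma                    # first guess: ignore the fluence
    for _ in range(n_iter):
        phi = toy_fluence(mu_a, dz)      # LF estimation, once per iteration
        mu_a = p0 / (gamma * phi + eps)  # eps guards against division by ~0
    return mu_a
```

Because the estimator sits inside the loop, any speed-up of the fluence step (e.g., a neural surrogate) accelerates the whole correction by roughly the same factor, which is the point the abstract makes.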
Affiliation(s)
- Zhaoyong Liang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Shuangyang Zhang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Zhichao Liang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Zongxin Mo
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Xiaoming Zhang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Yutian Zhong
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Wufan Chen
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
- Li Qi
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
3. Kong S, Zuo H, Wu C, Liu MY, Ma C. Oxygenation heterogeneity facilitates spatiotemporal flow pattern visualization inside human blood vessels using photoacoustic computed tomography. Biomedical Optics Express 2024;15:2741-2752. PMID: 38855671; PMCID: PMC11161372; DOI: 10.1364/boe.518895.
Abstract
Hemodynamics can be explored through various biomedical imaging techniques. However, observing transient spatiotemporal variations in oxygen saturation (sO2) within human blood vessels proves challenging with conventional methods. In this study, we employed photoacoustic computed tomography (PACT) to reconstruct the evolving spatiotemporal sO2 patterns in a human vein. Through analysis of the multi-wavelength photoacoustic (PA) spectrum, we illustrated the dynamic sO2 distribution within blood vessels. Additionally, we computationally rendered the dynamic process of venous blood flowing into the major vein and entering a branching vessel. Notably, we successfully recovered, in real time, the parabolic wavefront profile of laminar flow inside a deep vein in vivo, a first-time achievement. While the study is preliminary, the demonstrated capability of dynamic sO2 imaging holds promise for new applications in biology and medicine.
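The multi-wavelength analysis behind such sO2 maps is commonly a linear spectral unmixing of PA amplitudes into oxy- and deoxyhemoglobin contributions. The sketch below illustrates that idea only; the extinction matrix is deliberately made up (real values would come from tabulated hemoglobin spectra), and this is not necessarily the authors' exact pipeline.

```python
import numpy as np

# Illustrative (NOT tabulated) molar extinction coefficients:
# rows = wavelengths, columns = [HbO2, Hb].
E = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def unmix_so2(pa_spectrum):
    """Least-squares unmixing of a multi-wavelength PA amplitude vector into
    relative HbO2/Hb concentrations, then sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c, *_ = np.linalg.lstsq(E, np.asarray(pa_spectrum, float), rcond=None)
    c = np.clip(c, 0.0, None)            # concentrations are non-negative
    return c[0] / (c[0] + c[1] + 1e-12)  # tiny term avoids division by zero
```

With more wavelengths than chromophores, the same least-squares call averages out per-wavelength noise, which is why multispectral acquisition helps.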
Affiliation(s)
- Siying Kong
  - Tsinghua University, Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Beijing 100084, China
- Hongzhi Zuo
  - Tsinghua University, Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Beijing 100084, China
- Chuhua Wu
  - Tsinghua University, Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Beijing 100084, China
- Ming-Yuan Liu
  - Department of Vascular Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Cheng Ma
  - Tsinghua University, Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Beijing 100084, China
  - Institute for Precision Healthcare, Tsinghua University, Beijing 100084, China
  - Institute for Intelligent Healthcare, Tsinghua University, Beijing 100084, China
4. Sweeney A, Arora A, Edwards S, Mallidi S. Ultrasound-guided Photoacoustic image Annotation Toolkit in MATLAB (PHANTOM) for preclinical applications. bioRxiv [Preprint] 2023:2023.11.07.565885. PMID: 37986998; PMCID: PMC10659350; DOI: 10.1101/2023.11.07.565885.
Abstract
Depth-dependent fluence compensation in photoacoustic (PA) imaging is paramount for accurate quantification of chromophores in deep tissues. Here we present a user-friendly toolkit named PHANTOM (PHotoacoustic ANnotation TOolkit for MATLAB) that includes a graphical interface and assists in the segmentation of ultrasound-guided PA images. We modelled the light source configuration with Monte Carlo eXtreme and used 3D tissues segmented from ultrasound to generate fluence maps for depth-compensating PA images. The methodology was used to analyze PA images of phantoms with varying blood oxygenation, and the results were validated with oxygen electrode measurements. Two preclinical models, a subcutaneous tumor and a calcified placenta, were imaged and fluence-compensated using the PHANTOM toolkit, and the results were verified with immunohistochemistry. The PHANTOM toolkit provides scripts and auxiliary functions that enable biomedical researchers not specialized in optical imaging to apply fluence correction to PA images, enhancing the accessibility of quantitative PA imaging for researchers in various fields.
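The core of depth compensation like this is a guarded pixel-wise division of the PA image by the simulated fluence map. The sketch below assumes a precomputed map (PHANTOM derives its maps from Monte Carlo eXtreme and segmented ultrasound; here a toy exponential stands in) and uses a hypothetical clipping floor to keep deep, low-fluence pixels from blowing up.

```python
import numpy as np

def fluence_compensate(pa_img, fluence, floor=1e-3):
    """Depth compensation: divide PA amplitude by the normalized fluence map.
    'floor' clips tiny fluence values so deep, dark pixels do not explode."""
    phi = fluence / fluence.max()        # normalize so the surface value is 1
    return pa_img / np.maximum(phi, floor)
```

The choice of floor trades off quantification depth against noise amplification: a smaller floor recovers deeper structures but magnifies whatever noise survives at that depth.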
Affiliation(s)
- Allison Sweeney
  - Department of Biomedical Engineering, Tufts University, Medford, MA, United States
- Aayush Arora
  - Department of Biomedical Engineering, Tufts University, Medford, MA, United States
- Skye Edwards
  - Department of Biomedical Engineering, Tufts University, Medford, MA, United States
- Srivalleesha Mallidi
  - Department of Biomedical Engineering, Tufts University, Medford, MA, United States
  - Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA, United States
5. Rix T, Dreher KK, Nölke JH, Schellenberg M, Tizabi MD, Seitel A, Maier-Hein L. Efficient Photoacoustic Image Synthesis with Deep Learning. Sensors (Basel) 2023;23:7085. PMID: 37631628; PMCID: PMC10457787; DOI: 10.3390/s23167085.
Abstract
Photoacoustic imaging potentially allows for the real-time visualization of functional human tissue parameters such as oxygenation but is subject to a challenging underlying quantification problem. While in silico studies have revealed the great potential of deep learning (DL) methodology for solving this problem, the inherent lack of an efficient gold standard method for model training and validation remains a grand challenge. This work investigates whether DL can be leveraged to accurately and efficiently simulate photon propagation in biological tissue, enabling photoacoustic image synthesis. Our approach is based on estimating the initial pressure distribution of the photoacoustic waves from the underlying optical properties using a back-propagatable neural network trained on synthetic data. In proof-of-concept studies, we validated the performance of two complementary neural network architectures, namely a conventional U-Net-like model and a Fourier Neural Operator (FNO) network. Our in silico validation on multispectral human forearm images shows that DL methods can speed up image generation by a factor of 100 compared to Monte Carlo simulations with 5×10⁸ photons. While the FNO is slightly more accurate than the U-Net, both neural network architectures achieve accuracy equivalent to Monte Carlo simulations performed with a reduced number of photons (5×10⁶). In contrast to Monte Carlo simulations, the proposed DL models can be used as inherently differentiable surrogate models in the photoacoustic image synthesis pipeline, allowing for back-propagation of the synthesis error and gradient-based optimization over the entire pipeline. Due to their efficiency, they have the potential to enable large-scale training data generation that can expedite the clinical application of photoacoustic imaging.
Affiliation(s)
- Tom Rix
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- Kris K. Dreher
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - Faculty of Physics and Astronomy, Heidelberg University, 69120 Heidelberg, Germany
- Jan-Hinrich Nölke
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- Melanie Schellenberg
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
  - HIDSS4Health—Helmholtz Information and Data Science School for Health, 69120 Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Minu D. Tizabi
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Alexander Seitel
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Lena Maier-Hein
  - Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
  - Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
  - HIDSS4Health—Helmholtz Information and Data Science School for Health, 69120 Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
  - Medical Faculty, Heidelberg University, 69120 Heidelberg, Germany
6. Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomedical Optics Express 2023;14:1777-1799. PMID: 37078052; PMCID: PMC10110324; DOI: 10.1364/boe.483081.
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
  - Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
  - Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
  - Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
  - Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
  - Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
7. Lamilla E, Sacarelo C, Alvarez-Alvarado MS, Pazmino A, Iza P. Optical Encoding Model Based on Orbital Angular Momentum Powered by Machine Learning. Sensors (Basel) 2023;23:2755. PMID: 36904967; PMCID: PMC10007020; DOI: 10.3390/s23052755.
Abstract
Based on the orbital angular momentum (OAM) properties of Laguerre-Gaussian beams LG(p,ℓ), a robust optical encoding model for efficient data transmission applications is designed. This paper presents an optical encoding model based on an intensity profile generated by a coherent superposition of two OAM-carrying Laguerre-Gaussian modes, together with a machine learning detection method. In the encoding process, the intensity profile for data encoding is generated through the selection of the p and ℓ indices, while the decoding process is performed using a support vector machine (SVM) algorithm. Two different decoding models based on the SVM algorithm are tested to verify the robustness of the optical encoding model, yielding a bit error rate (BER) of 10⁻⁹ at a signal-to-noise ratio of 10.2 dB in one of the SVM models.
Affiliation(s)
- Erick Lamilla
  - Escuela Superior Politécnica del Litoral, ESPOL, Departamento de Física, Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil 090150, Ecuador
  - Facultad de Ciencias Matemáticas y Físicas, Universidad de Guayaquil, Guayaquil 090514, Ecuador
- Christian Sacarelo
  - Escuela Superior Politécnica del Litoral, ESPOL, Departamento de Física, Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil 090150, Ecuador
- Manuel S. Alvarez-Alvarado
  - Escuela Superior Politécnica del Litoral, ESPOL, Facultad de Ingeniería en Electricidad y Computación (FIEC), Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil 090150, Ecuador
- Arturo Pazmino
  - Escuela Superior Politécnica del Litoral, ESPOL, Departamento de Física, Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil 090150, Ecuador
- Peter Iza
  - Escuela Superior Politécnica del Litoral, ESPOL, Departamento de Física, Campus Gustavo Galindo, Km 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil 090150, Ecuador
  - Center of Research and Development in Nanotechnology, CIDNA, Escuela Superior Politécnica del Litoral, ESPOL, Campus G. Galindo, Km 30.5 Vía Perimetral, Guayaquil 090150, Ecuador
8. Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE Transactions on Medical Imaging 2022;41:3636-3648. PMID: 35849667; DOI: 10.1109/tmi.2022.3192072.
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve greater imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality toward that of OR-PAM images, which specifically includes enhancement of imaging resolution, restoration of micro-vasculatures, and reduction of artifacts. To address this issue, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs, which are synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data. Nevertheless, the perceptual quality is unsatisfactory due to domain shift. Therefore, a domain transfer learning technique under the generative adversarial network (GAN) framework is proposed to drive the enhanced image's manifold toward that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement result is obtained, which is also supported by quantitative analysis. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values are significantly increased from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework enables the enhancement target with a limited amount of matched in vivo AR-OR imaging data.
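The PSNR figures quoted in abstracts like this one follow the standard definition 10·log10(peak²/MSE). A minimal sketch (not tied to the paper's evaluation code):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float(10.0 * np.log10((data_range ** 2) / mse))
```

Note that PSNR depends on the assumed data range; comparisons such as 14.74 dB vs. 19.01 dB are only meaningful when both images are normalized the same way.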
9. Zuo H, Cui M, Wang X, Ma C. Spectral crosstalk in photoacoustic computed tomography. Photoacoustics 2022;26:100356. PMID: 35574185; PMCID: PMC9095891; DOI: 10.1016/j.pacs.2022.100356.
Abstract
Multispectral photoacoustic (PA) imaging faces two major challenges: the spectral coloring effect, which has been studied extensively as an optical inversion problem, and the spectral crosstalk, which is basically a result of non-ideal acoustic inversion. So far, there is no systematic work to analyze the spectral crosstalk because acoustic inversion and spectroscopic measurement are always treated as decoupled. In this work, we theorize and demonstrate through a series of simulations and experiments how imperfect acoustic inversion induces inaccurate PA spectrum measurement. We provide detailed analysis to elucidate how different factors, including limited bandwidth, limited view, light attenuation, out-of-plane signal, and image reconstruction schemes, conspire to render the measured PA spectrum inaccurate. We found that the model-based reconstruction outperforms universal back-projection in suppressing the spectral crosstalk in some cases.
Affiliation(s)
- Hongzhi Zuo
  - Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Manxiu Cui
  - Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Xuanhao Wang
  - Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Cheng Ma
  - Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
  - Center for Clinical Big Data Research, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China
  - Photomedicine Laboratory, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China
10. Zheng S, Meng Q, Wang XY. Quantitative endoscopic photoacoustic tomography using a convolutional neural network. Applied Optics 2022;61:2574-2581. PMID: 35471325; DOI: 10.1364/ao.441250.
Abstract
Endoscopic photoacoustic tomography (EPAT) is a catheter-based hybrid imaging modality capable of providing structural and functional information on biological luminal structures, such as coronary arterial vessels and the digestive tract. The recovery of the optical properties of the imaged tissue from acoustic measurements, achieved by optical inversion, is essential for implementing quantitative EPAT (qEPAT). In this paper, a convolutional neural network (CNN) based on deep gradient descent is developed for qEPAT. The network enables the reconstruction of images representing the spatially varying absorption coefficient in cross-sections of the tubular structures from limited measurement data. The forward operator reflecting the mapping from the absorption coefficient to the optical deposition due to pulsed irradiation is embedded into the network training. The network parameters are optimized layer by layer through the deep gradient descent mechanism using numerically simulated data. The operation processes of the forward operator and its adjoint operator are separated from the network training. The trained network outputs an image representing the distribution of absorption coefficients when given an input image representing the optical deposition. The method has been tested with computer-generated phantoms mimicking coronary arterial vessels containing various tissue types. Results suggest that the structural similarity of the images reconstructed by our method is increased by about 10% compared with a non-learning method based on error minimization under the same measuring view.
11. Gröhl J, Dreher KK, Schellenberg M, Rix T, Holzwarth N, Vieten P, Ayala L, Bohndiek SE, Seitel A, Maier-Hein L. SIMPA: an open-source toolkit for simulation and image processing for photonics and acoustics. Journal of Biomedical Optics 2022;27(8):083010. PMID: 35380031; PMCID: PMC8978263; DOI: 10.1117/1.jbo.27.8.083010.
Abstract
Significance: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings.

Aim: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.

Approach: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged, as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.

Results: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.

Conclusions: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Affiliation(s)
- Janek Gröhl
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Kris K. Dreher
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Melanie Schellenberg
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- Tom Rix
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
- Niklas Holzwarth
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Patricia Vieten
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Leonardo Ayala
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Medical Faculty, Heidelberg, Germany
- Sarah E. Bohndiek
  - University of Cambridge, Cancer Research UK Cambridge Institute, Robinson Way, Cambridge, United Kingdom
  - University of Cambridge, Department of Physics, Cambridge, United Kingdom
- Alexander Seitel
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Lena Maier-Hein
  - German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
  - Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
  - Heidelberg University, Medical Faculty, Heidelberg, Germany
12
Lu M, Liu X, Liu C, Li B, Gu W, Jiang J, Ta D. Artifact removal in photoacoustic tomography with an unsupervised method. BIOMEDICAL OPTICS EXPRESS 2021; 12:6284-6299. [PMID: 34745737 PMCID: PMC8548009 DOI: 10.1364/boe.434172] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 08/13/2021] [Accepted: 09/07/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that can realize high-contrast imaging at acoustic penetration depths. Recently, deep learning (DL) methods have been successfully applied to PAT to improve image reconstruction quality. However, current DL-based PAT methods follow a supervised learning strategy, so their imaging performance depends on the available ground-truth data. To overcome this limitation, this work introduces a new image-domain transformation method based on a cycle generative adversarial network (CycleGAN), termed PA-GAN, which removes artifacts caused by limited-view measurement data from PAT images in an unsupervised manner. A series of phantom and in vivo experiments is used to evaluate the performance of the proposed PA-GAN. The experimental results show that PA-GAN performs well in removing artifacts from photoacoustic tomographic images. In particular, when dealing with extremely sparse measurement data (e.g., 8 projections in circular phantom experiments), the unsupervised PA-GAN achieves higher imaging performance than the supervised-learning U-Net method, with an improvement of ∼14% in structural similarity (SSIM) and ∼66% in peak signal-to-noise ratio (PSNR). With an increasing number of projections (e.g., 128 projections), U-Net, and especially FD U-Net, shows a slight improvement in artifact removal capability in terms of SSIM and PSNR. Furthermore, the computational time of PA-GAN and U-Net is similar (∼60 ms/frame) once the network is trained. More importantly, PA-GAN is more flexible than U-Net because it allows the model to be trained effectively with unpaired data. As a result, PA-GAN makes it possible to implement PAT with higher flexibility without compromising imaging performance.
Affiliation(s)
- Mengyang Lu: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Xin Liu: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China; State Key Laboratory of Medical Neurobiology, Institutes of Brain Science, Fudan University, Shanghai 200433, China
- Chengcheng Liu: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Boyi Li: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Wenting Gu: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiehui Jiang: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Dean Ta: Academy for Engineering and Technology, Fudan University, Shanghai 200433, China; Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, China
13
Kirchner T, Frenz M. Multiple illumination learned spectral decoloring for quantitative optoacoustic oximetry imaging. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210069RR. [PMID: 34350736 PMCID: PMC8336722 DOI: 10.1117/1.jbo.26.8.085001] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/16/2021] [Indexed: 06/13/2023]
Abstract
SIGNIFICANCE Quantitative measurement of blood oxygen saturation (sO2) with optoacoustic (OA) imaging is one of the most sought-after goals of quantitative OA imaging research due to its wide range of biomedical applications. AIM To develop an accurate and applicable method for real-time quantification of local sO2 with OA imaging. APPROACH We combine multiple illumination (MI) sensing with learned spectral decoloring (LSD). We train LSD feedforward neural networks and random forests on Monte Carlo simulations of spectrally colored absorbed energy spectra and apply the trained models to real OA measurements. We validate our combined MI-LSD method on a highly reliable, reproducible, and easily scalable phantom model based on copper and nickel sulfate solutions. RESULTS With this sulfate model, we see a consistently high estimation accuracy using MI-LSD, with median absolute estimation errors of 2.5 to 4.5 percentage points. We further find fewer outliers in MI-LSD estimates compared with LSD. Random forest regressors outperform previously reported neural network approaches. CONCLUSIONS Random forest-based MI-LSD is a promising method for accurate quantitative OA oximetry imaging.
Affiliation(s)
- Thomas Kirchner: University of Bern, Biomedical Photonics, Institute of Applied Physics, Bern, Switzerland
- Martin Frenz: University of Bern, Biomedical Photonics, Institute of Applied Physics, Bern, Switzerland
14
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. PHOTOACOUSTICS 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extracting relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in medical imaging and finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany; Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
15
Gupta K, Reddy S. Heart, Eye, and Artificial Intelligence: A Review. Cardiol Res 2021; 12:132-139. [PMID: 34046105 PMCID: PMC8139752 DOI: 10.14740/cr1179] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Accepted: 11/12/2020] [Indexed: 12/30/2022] Open
Abstract
Heart disease continues to be the leading cause of death in the USA. Deep learning-based artificial intelligence (AI) methods have become increasingly common in studying the various factors involved in cardiovascular disease. The use of retinal scanning techniques to diagnose retinal diseases, such as diabetic retinopathy, age-related macular degeneration, glaucoma, and others, using fundus photographs and optical coherence tomography angiography (OCTA), has been extensively documented. Researchers are now looking to combine the power of AI with the non-invasive ease of retinal scanning to examine the workings of the heart and predict changes in the macrovasculature based on microvascular features and function. In this review, we summarize the current state of the field in using retinal imaging to diagnose cardiovascular issues and other diseases.
Affiliation(s)
- Kush Gupta: Kasturba Medical College, Mangalore, India
16
Amidi E, Yang G, Uddin KMS, Luo H, Middleton W, Powell M, Siegel C, Zhu Q. Role of blood oxygenation saturation in ovarian cancer diagnosis using multi-spectral photoacoustic tomography. JOURNAL OF BIOPHOTONICS 2021; 14:e202000368. [PMID: 33377620 PMCID: PMC8044001 DOI: 10.1002/jbio.202000368] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Revised: 12/18/2020] [Accepted: 12/19/2020] [Indexed: 05/05/2023]
Abstract
In photoacoustic tomography (PAT), a tunable laser typically illuminates the tissue at multiple wavelengths, and the received photoacoustic waves are used to form functional images of relative total haemoglobin (rHbT) and blood oxygenation saturation (%sO2). Due to measurement errors, the estimation of these parameters can be challenging, especially in clinical studies. In this study, we use a multi-pixel method to smooth the measurements before calculating rHbT and %sO2. We first perform phantom studies using blood tubes of calibrated %sO2 to evaluate the accuracy of our %sO2 estimation. We conclude by presenting diagnostic results from PAT of 33 patients with 51 ovarian masses imaged by our co-registered PAT and ultrasound system. The ovarian masses were divided into malignant and benign/normal groups. Functional maps of rHbT and %sO2 and their histograms, as well as spectral features, were calculated using the PAT data from all ovaries in these two groups. Support vector machine models were trained on different combinations of the significant features. An area under the ROC curve (AUC) of 0.93 (95% CI: 0.90-0.96) on the testing data set was achieved by combining mean %sO2, a spectral feature, and the score of the study radiologist.
Affiliation(s)
- Eghbal Amidi: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Guang Yang: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- K. M. Shihab Uddin: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Hongbo Luo: Department of Electrical and System Engineering, Washington University in St. Louis, St. Louis, Missouri
- William Middleton: Department of Radiology, Washington University School of Medicine, St. Louis, Missouri
- Matthew Powell: Division of Gynecological Oncology, Washington University School of Medicine, St. Louis, Missouri
- Cary Siegel: Department of Radiology, Washington University School of Medicine, St. Louis, Missouri
- Quing Zhu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri; Department of Radiology, Washington University School of Medicine, St. Louis, Missouri
17
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/18/2021] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China; Beijing Innovation Center for Future Chip, Beijing, China
18
Zheng S, Fei Y, Jian S. Method for parametric imaging of attenuation by intravascular optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2021; 12:1882-1904. [PMID: 33996205 PMCID: PMC8086439 DOI: 10.1364/boe.420094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 02/27/2021] [Accepted: 03/01/2021] [Indexed: 06/12/2023]
Abstract
Catheter-based intravascular optical coherence tomography (IVOCT) is a powerful imaging modality for visualization of atherosclerosis with high resolution. Quantitative characterization of various tissue types by attenuation coefficient (AC) extraction has been proven to be a potentially significant application of OCT attenuation imaging. However, existing methods for AC extraction from OCT struggle with the variability of complex tissue types in IVOCT pullback data, such as healthy vessel wall, mixed atherosclerotic plaques, single-component plaques, and stent struts. This variability makes tissue differentiation by AC representation ineffective when the AC is derived from a single-scattering model of the OCT signal. In this paper, we propose a novel method based on a multiple-scattering model for parametric imaging of optical attenuation, retrieving the AC from IVOCT images conventionally acquired during cardiac catheterization. The OCT signal characterized by the AC is physically modeled by Monte Carlo simulation. Pixel-wise AC retrieval is then achieved by iteratively minimizing an error function between the modeled and measured backscattered signals. This method provides a general scheme for AC extraction from IVOCT without requiring complete attenuation of the incident light within the imaging depth. Results on computer-simulated and clinical images demonstrate that the method avoids the overestimation at the end of the depth profile seen in approaches based on the depth-resolved (DR) model. The energy error depth and structural similarity are improved by about 30% and 10%, respectively, compared with DR. The method provides a useful way to differentiate and characterize arterial tissue types in IVOCT images.
Affiliation(s)
- Sun Zheng: Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, Hebei, China; Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Baoding 071003, Hebei, China
- Yang Fei: Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, Hebei, China; Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Baoding 071003, Hebei, China
- Sun Jian: Department of Radiology, Hebei University Affiliated Hospital, Baoding 071003, Hebei, China
19
Gröhl J, Kirchner T, Adler TJ, Hacker L, Holzwarth N, Hernández-Aguilera A, Herrera MA, Santos E, Bohndiek SE, Maier-Hein L. Learned spectral decoloring enables photoacoustic oximetry. Sci Rep 2021; 11:6565. [PMID: 33753769 PMCID: PMC7985523 DOI: 10.1038/s41598-021-83405-8] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 01/27/2021] [Indexed: 01/15/2023] Open
Abstract
The ability of photoacoustic imaging to measure functional tissue properties, such as blood oxygenation sO2, enables a wide variety of possible applications. sO2 can be computed from the ratio of oxyhemoglobin HbO2 and deoxyhemoglobin Hb, which can be distinguished by multispectral photoacoustic imaging due to their distinct wavelength-dependent absorption. However, current methods for estimating sO2 yield inaccurate results in realistic settings, due to the unknown and wavelength-dependent influence of the light fluence on the signal. In this work, we propose learned spectral decoloring to enable blood oxygenation measurements to be inferred from multispectral photoacoustic imaging. The method computes sO2 pixel-wise, directly from initial pressure spectra, which represent the initial pressure values at a fixed spatial location over all recorded wavelengths. The method is compared to linear unmixing approaches, as well as to pO2 and blood gas analysis reference measurements. Experimental results suggest that the proposed method is able to obtain sO2 estimates from multispectral photoacoustic measurements in silico, in vitro, and in vivo.
Affiliation(s)
- Janek Gröhl: Computer Assisted Medical Interventions, German Cancer Research Center, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Thomas Kirchner: Institute of Applied Physics, Biomedical Photonics, Bern University, Bern, Switzerland
- Tim J Adler: Computer Assisted Medical Interventions, German Cancer Research Center, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Lina Hacker: Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Niklas Holzwarth: Computer Assisted Medical Interventions, German Cancer Research Center, Heidelberg, Germany; Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Mildred A Herrera: Department of Neurosurgery, Heidelberg University Hospital, Heidelberg, Germany
- Edgar Santos: Department of Neurosurgery, Heidelberg University Hospital, Heidelberg, Germany
- Sarah E Bohndiek: Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Lena Maier-Hein: Computer Assisted Medical Interventions, German Cancer Research Center, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
20
Godefroy G, Arnal B, Bossy E. Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties. PHOTOACOUSTICS 2021; 21:100218. [PMID: 33364161 PMCID: PMC7750172 DOI: 10.1016/j.pacs.2020.100218] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 10/15/2020] [Accepted: 10/17/2020] [Indexed: 05/04/2023]
Abstract
Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale model of leaf skeleton. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground truth images. Reconstructions produced by the neural network show greatly improved image quality compared to conventional approaches. In addition, this work aimed to quantify the reliability of the neural network predictions. To achieve this, the dropout Monte-Carlo procedure is applied to estimate a pixel-wise degree of confidence on each predicted picture. Lastly, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset.
Affiliation(s)
- Bastien Arnal: Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
- Emmanuel Bossy: Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
21
Jeng GS, Li ML, Kim M, Yoon SJ, Pitre JJ, Li DS, Pelivanov I, O’Donnell M. Real-time interleaved spectroscopic photoacoustic and ultrasound (PAUS) scanning with simultaneous fluence compensation and motion correction. Nat Commun 2021; 12:716. [PMID: 33514737 PMCID: PMC7846772 DOI: 10.1038/s41467-021-20947-5] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Accepted: 12/22/2020] [Indexed: 02/06/2023] Open
Abstract
For over two decades photoacoustic imaging has been tested clinically, but successful human trials have been limited. To enable quantitative clinical spectroscopy, the fundamental issues of wavelength-dependent fluence variations and inter-wavelength motion must be overcome. Here we propose a real-time, spectroscopic photoacoustic/ultrasound (PAUS) imaging approach using a compact, 1-kHz rate wavelength-tunable laser. Instead of illuminating tissue over a large area, the fiber-optic delivery system surrounding an US array sequentially scans a narrow laser beam, with partial PA image reconstruction for each laser pulse. The final image is then formed by coherently summing partial images. This scheme enables (i) automatic compensation for wavelength-dependent fluence variations in spectroscopic PA imaging and (ii) motion correction of spectroscopic PA frames using US speckle tracking in real-time systems. The 50-Hz video rate PAUS system is demonstrated in vivo using a murine model of labelled drug delivery.
Affiliation(s)
- Geng-Shi Jeng: Department of Bioengineering, University of Washington, Seattle, WA, USA; Institute of Electronics, National Chiao Tung University, Hsinchu, Taiwan
- Meng-Lin Li: Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan; Institute of Photonics Technologies, National Tsing Hua University, Hsinchu, Taiwan
- MinWoo Kim: Department of Bioengineering, University of Washington, Seattle, WA, USA
- Soon Joon Yoon: Department of Bioengineering, University of Washington, Seattle, WA, USA
- John J. Pitre: Department of Bioengineering, University of Washington, Seattle, WA, USA
- David S. Li: Department of Chemical Engineering, University of Washington, Seattle, WA, USA
- Ivan Pelivanov: Department of Bioengineering, University of Washington, Seattle, WA, USA
- Matthew O'Donnell: Department of Bioengineering, University of Washington, Seattle, WA, USA
22
Oxygen Saturation Imaging Using LED-Based Photoacoustic System. SENSORS 2021; 21:s21010283. [PMID: 33406653 PMCID: PMC7795655 DOI: 10.3390/s21010283] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 12/31/2020] [Accepted: 01/01/2021] [Indexed: 12/31/2022]
Abstract
Oxygen saturation imaging has potential in several preclinical and clinical applications. Dual-wavelength LED array-based photoacoustic oxygen saturation imaging can be an affordable solution in this context. For the translation of this technology, there is a need to improve its accuracy and validate it against ground truth methods. We propose a fluence-compensated oxygen saturation imaging method, utilizing structural information from the ultrasound image and prior knowledge of the optical properties of the tissue, with a Monte-Carlo-based light propagation model for the dual-wavelength LED array configuration. We then validate the proposed method against oximeter measurements in tissue-mimicking phantoms. Further, we demonstrate in vivo imaging on a small animal and a human subject. We conclude that the proposed oxygen saturation imaging can be used to image tissue at a depth of 6–8 mm in both preclinical and clinical applications.
23
Johnstonbaugh K, Agrawal S, Durairaj DA, Fadden C, Dangi A, Karri SPK, Kothapalli SR. A Deep Learning Approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2649-2659. [PMID: 31944951 PMCID: PMC7769001 DOI: 10.1109/tuffc.2020.2964698] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Optical photons undergo strong scattering when propagating beyond 1-mm deep inside biological tissue. Finding the origin of these diffused optical wavefronts is a challenging task. Breaking through the optical diffusion limit, photoacoustic (PA) imaging (PAI) provides high-resolution and label-free images of human vasculature with high contrast due to the optical absorption of hemoglobin. In real-time PAI, an ultrasound transducer array detects PA signals, and B-mode images are formed by delay-and-sum or frequency-domain beamforming. Fundamentally, the strength of a PA signal is proportional to the local optical fluence, which decreases with the increasing depth due to depth-dependent optical attenuation. This limits the visibility of deep-tissue vasculature or other light-absorbing PA targets. To address this practical challenge, an encoder-decoder convolutional neural network architecture was constructed with custom modules and trained to identify the origin of the PA wavefronts inside an optically scattering deep-tissue medium. A comprehensive ablation study provides strong evidence that each module improves the localization accuracy. The network was trained on model-based simulated PA signals produced by 16 240 blood-vessel targets subjected to both optical scattering and Gaussian noise. Test results on 4600 simulated and five experimental PA signals collected under various scattering conditions show that the network can localize the targets with a mean error less than 30 microns (standard deviation 20.9 microns) for targets below 40-mm imaging depth and 1.06 mm (standard deviation 2.68 mm) for targets at a depth between 40 and 60 mm. The proposed work has broad applications such as diffused optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries (e.g., deep-vein thrombosis).
Affiliation(s)
- Deepit Abhishek Durairaj: Department of Electrical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Christopher Fadden: Department of Electrical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Ajay Dangi: Department of Biomedical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Sri Phani Krishna Karri: Department of Electrical Engineering, National Institute of Technology Andhra Pradesh, AP, India 534102
24
Olefir I, Tzoumas S, Restivo C, Mohajerani P, Xing L, Ntziachristos V. Deep Learning-Based Spectral Unmixing for Optoacoustic Imaging of Tissue Oxygen Saturation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3643-3654. [PMID: 32746111 PMCID: PMC7671861 DOI: 10.1109/tmi.2020.3001750] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Label free imaging of oxygenation distribution in tissues is highly desired in numerous biomedical applications, but is still elusive, in particular in sub-epidermal measurements. Eigenspectra multispectral optoacoustic tomography (eMSOT) and its Bayesian-based implementation have been introduced to offer accurate label-free blood oxygen saturation (sO2) maps in tissues. The method uses the eigenspectra model of light fluence in tissue to account for the spectral changes due to the wavelength dependent attenuation of light with tissue depth. eMSOT relies on the solution of an inverse problem bounded by a number of ad hoc hand-engineered constraints. Despite the quantitative advantage offered by eMSOT, both the non-convex nature of the optimization problem and the possible sub-optimality of the constraints may lead to reduced accuracy. We present herein a neural network architecture that is able to learn how to solve the inverse problem of eMSOT by directly regressing from a set of input spectra to the desired fluence values. The architecture is composed of a combination of recurrent and convolutional layers and uses both spectral and spatial features for inference. We train an ensemble of such networks using solely simulated data and demonstrate how this approach can improve the accuracy of sO2 computation over the original eMSOT, not only in simulations but also in experimental datasets obtained from blood phantoms and small animals (mice) in vivo. The use of a deep-learning approach in optoacoustic sO2 imaging is confirmed herein for the first time on ground truth sO2 values experimentally obtained in vivo and ex vivo.
25
Feng J, Deng J, Li Z, Sun Z, Dou H, Jia K. End-to-end Res-Unet based reconstruction algorithm for photoacoustic imaging. Biomedical Optics Express 2020; 11:5321-5340. PMID: 33014617; PMCID: PMC7510873; DOI: 10.1364/boe.396598
Abstract
Recently, deep neural networks have attracted great attention in photoacoustic imaging (PAI). In PAI, reconstructing the initial pressure distribution from acquired photoacoustic (PA) signals is a typical inverse problem. In this paper, an end-to-end Unet with residual blocks (Res-Unet) is designed and trained to solve the inverse problem in PAI. The performance of the proposed algorithm is explored and analyzed by comparing it with a recent model-resolution-based regularization algorithm (MRR) in numerical and physical phantom experiments. The improvement obtained in the reconstructed images was more than 95% in Pearson correlation and 39% in peak signal-to-noise ratio (PSNR) in comparison to the MRR. The Res-Unet also achieved superior performance over the state-of-the-art Unet++ architecture, by more than 18% in PSNR, in simulation experiments.
Affiliation(s)
- Jinchao Feng: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Jianguang Deng: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhe Li: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhonghua Sun: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Huijing Dou: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kebin Jia: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
26
Kim M, Jeng GS, O'Donnell M, Pelivanov I. Correction of wavelength-dependent laser fluence in swept-beam spectroscopic photoacoustic imaging with a hand-held probe. Photoacoustics 2020; 19:100192. PMID: 32670789; PMCID: PMC7339128; DOI: 10.1016/j.pacs.2020.100192
Abstract
Recently, we demonstrated an integrated photoacoustic (PA) and ultrasound (PAUS) system using a kHz-rate wavelength-tunable laser and a swept-beam delivery approach. It irradiates a medium using a narrow laser beam swept at a high repetition rate (∼1 kHz) over the desired imaging area, in contrast to the conventional PA approach using broad-beam illumination at a low repetition rate (10-50 Hz). Here, we present a method to correct the wavelength-dependent fluence distribution and demonstrate its performance in phantom studies using a conventional limited-view/bandwidth hand-held US probe. We adopted analytic fluence models, extending diffusion theory for the case of a pencil beam obliquely incident on an optically homogeneous turbid medium, and developed a robust method to estimate fluence attenuation in the medium using PA measurements acquired from multiple fiber-irradiation positions swept at a kHz rate. We conducted comprehensive simulation tests and phantom studies using well-known contrast agents to validate the reliability of the fluence model and its spectral corrections.
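The diffusion-theory fluence modeling this abstract refers to can be illustrated with a much-simplified one-dimensional sketch: for broad-beam illumination of a homogeneous turbid medium, fluence decays roughly as exp(-μ_eff·z), with μ_eff = sqrt(3·μ_a·(μ_a + μ_s')). The snippet below applies that correction to a synthetic depth profile. The optical properties are assumed example values, and this is deliberately not the oblique pencil-beam model developed in the paper.

```python
import numpy as np

# Illustrative 1-D fluence correction under the diffusion approximation.
# Simplified broad-beam sketch, NOT the paper's oblique pencil-beam model;
# mu_a and mu_s' values below are assumed examples (units of 1/mm).

def effective_attenuation(mu_a, mu_s_prime):
    """mu_eff = sqrt(3 * mu_a * (mu_a + mu_s')) from diffusion theory."""
    return np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def correct_fluence(pa_signal, depths_mm, mu_a, mu_s_prime):
    """Divide out the modeled exponential fluence decay with depth."""
    mu_eff = effective_attenuation(mu_a, mu_s_prime)
    fluence = np.exp(-mu_eff * depths_mm)
    return pa_signal / fluence

depths = np.linspace(0.0, 10.0, 101)           # depth axis in mm
mu_a, mu_sp = 0.01, 1.0                        # assumed background optics
true_absorption = np.ones_like(depths)         # homogeneous absorber
mu_eff = effective_attenuation(mu_a, mu_sp)
measured = true_absorption * np.exp(-mu_eff * depths)  # depth-decayed signal
recovered = correct_fluence(measured, depths, mu_a, mu_sp)
```

Without the correction, the homogeneous absorber appears progressively dimmer with depth; dividing by the modeled fluence restores a flat profile, which is the basic idea behind the spectral corrections validated in the paper.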
Affiliation(s)
- MinWoo Kim: Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Geng-Shi Jeng: Department of Electronics Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan
- Matthew O'Donnell: Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
- Ivan Pelivanov: Department of Bioengineering, University of Washington, Seattle, WA 98105, USA
27
Zhou X, Akhlaghi N, Wear KA, Garra BS, Pfefer TJ, Vogt WC. Evaluation of Fluence Correction Algorithms in Multispectral Photoacoustic Imaging. Photoacoustics 2020; 19:100181. PMID: 32405456; PMCID: PMC7210453; DOI: 10.1016/j.pacs.2020.100181
Abstract
Multispectral photoacoustic imaging (MPAI) is a promising emerging diagnostic technology, but fluence artifacts can degrade device performance. Our goal was to develop well-validated phantom-based test methods for evaluating and comparing MPAI fluence correction algorithms, including a heuristic diffusion approximation, Monte Carlo simulations, and an algorithm we developed based on novel application of the diffusion dipole model (DDM). Phantoms simulated a range of breast-mimicking optical properties and contained channels filled with chromophore solutions (ink, hemoglobin, or copper sulfate) or connected to a previously developed blood flow circuit providing tunable oxygen saturation (SO2). The DDM algorithm achieved similar spectral recovery and SO2 measurement accuracy to Monte Carlo-based corrections with lower computational cost, potentially providing an accurate, real-time correction approach. Algorithms were sensitive to optical property uncertainty, but error was minimized by matching phantom albedo. The developed test methods may provide a foundation for standardized assessment of MPAI fluence correction algorithm performance.
Affiliation(s)
- Xuewen Zhou: Fischell Department of Bioengineering, University of Maryland, College Park, MD 20742, United States; Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- Nima Akhlaghi: Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- Keith A. Wear: Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- Brian S. Garra: Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- T. Joshua Pfefer: Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
- William C. Vogt (corresponding author): Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, United States
28
Bench C, Hauptmann A, Cox B. Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions. Journal of Biomedical Optics 2020; 25(8):085003. PMID: 32840068; PMCID: PMC7443711; DOI: 10.1117/1.jbo.25.8.085003
Abstract
Significance: Two-dimensional (2-D) fully convolutional neural networks have been shown capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain, as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images.
Aim: To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images.
Approach: Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models.
Results: The mean of the absolute difference between the true mean vessel sO2 and the network output for 40 examples was 4.4%, and the standard deviation was 4.5%.
Conclusions: 3-D fully convolutional networks were shown capable of producing accurate sO2 maps using the full extent of spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with some of the confounding effects present in real images, such as limited-view artifacts, and have the potential to produce accurate estimates in vivo.
Affiliation(s)
- Ciaran Bench (corresponding author): University College London, Department of Medical Physics and Biomedical Engineering, Gower Street, London, United Kingdom
- Andreas Hauptmann: University of Oulu, Research Unit of Mathematical Sciences, Oulu, Finland; University College London, Department of Computer Science, Gower Street, London, United Kingdom
- Ben Cox: University College London, Department of Medical Physics and Biomedical Engineering, Gower Street, London, United Kingdom
29
Buchmann J, Kaplan B, Powell S, Prohaska S, Laufer J. Quantitative PA tomography of high resolution 3-D images: Experimental validation in a tissue phantom. Photoacoustics 2020; 17:100157. PMID: 31956487; PMCID: PMC6961715; DOI: 10.1016/j.pacs.2019.100157
Abstract
Quantitative photoacoustic tomography aims to recover the spatial distribution of absolute chromophore concentrations and their ratios from deep-tissue, high-resolution images. In this study, a model-based inversion scheme based on a Monte Carlo light transport model is experimentally validated on 3-D multispectral images of a tissue phantom acquired using an all-optical scanner with a planar detection geometry. A calibrated absorber allowed scaling of the measured data during the inversion, while an acoustic correction method was employed to compensate for the effects of limited-view detection. Chromophore- and fluence-dependent step sizes and Adam optimization were implemented to achieve rapid convergence. High-resolution 3-D maps of absolute concentrations and their ratios were recovered with high accuracy. Potential applications of this method include quantitative functional and molecular photoacoustic tomography of deep tissue in preclinical and clinical studies.
Affiliation(s)
- Jens Buchmann: Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, von-Danckelmann-Platz 3, 06120 Halle (Saale), Germany; Institut für Optik und Atomare Physik, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
- Bernhard Kaplan: Visual Data Analysis, Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany
- Samuel Powell: Optics and Photonics Group, Faculty of Engineering, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
- Steffen Prohaska: Visual Data Analysis, Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany
- Jan Laufer (corresponding author): Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, von-Danckelmann-Platz 3, 06120 Halle (Saale), Germany
30
Vu T, Razansky D, Yao J. Listening to tissues with new light: recent technological advances in photoacoustic imaging. Journal of Optics 2019; 21. PMID: 32051756; PMCID: PMC7015182; DOI: 10.1088/2040-8986/ab3b1a
Abstract
Photoacoustic tomography (PAT), or optoacoustic tomography, has achieved remarkable progress in the past decade, benefiting from joint developments in optics, acoustics, chemistry, computing and mathematics. Unlike pure optical or ultrasound imaging, PAT can provide unique optical absorption contrast as well as widely scalable spatial resolution, penetration depth and imaging speed. Moreover, PAT has inherent sensitivity to tissue's functional, molecular, and metabolic state. With these merits, PAT has been applied in a wide range of life science disciplines, and has enabled biomedical research unattainable by other imaging methods. This review article aims to introduce state-of-the-art PAT technologies and their representative applications. The focus is on recent technological breakthroughs in structural, functional, and molecular PAT, including super-resolution imaging, real-time small-animal whole-body imaging, and high-sensitivity functional/molecular imaging. We also discuss the remaining challenges in PAT and envisioned opportunities.
Affiliation(s)
- Tri Vu: Photoacoustic Imaging Lab, Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Daniel Razansky: Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, Switzerland; Institute for Biomedical Engineering and Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Junjie Yao: Photoacoustic Imaging Lab, Department of Biomedical Engineering, Duke University, Durham, NC, USA
31
Davoudi N, Deán-Ben XL, Razansky D. Deep learning optoacoustic tomography with sparse data. Nature Machine Intelligence 2019. DOI: 10.1038/s42256-019-0095-3
32
Buchmann J, Kaplan BA, Powell S, Prohaska S, Laufer J. Three-dimensional quantitative photoacoustic tomography using an adjoint radiance Monte Carlo model and gradient descent. Journal of Biomedical Optics 2019; 24:1-13. PMID: 31172727; PMCID: PMC6977014; DOI: 10.1117/1.jbo.24.6.066001
Abstract
Quantitative photoacoustic tomography aims to recover maps of the local concentrations of tissue chromophores from multispectral images. While model-based inversion schemes are promising approaches, major challenges to their practical implementation include the unknown fluence distribution and the scale of the inverse problem. We describe an inversion scheme based on a radiance Monte Carlo model and an adjoint-assisted gradient optimization that incorporates fluence-dependent step sizes and adaptive moment estimation. The inversion is shown to recover absolute chromophore concentrations, blood oxygen saturation, and the Grüneisen parameter from in silico three-dimensional phantom images for different radiance approximations. The scattering coefficient is assumed to be homogeneous and known a priori.
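The adaptive moment estimation (Adam) used in the gradient optimization above is a generic optimizer and can be sketched as follows. The quadratic objective below is a stand-in for the photoacoustic forward model, and the hyperparameters are the usual Adam defaults, not the fluence-dependent step sizes the paper describes.

```python
import numpy as np

# Generic Adam (adaptive moment estimation) update loop.
# The quadratic objective is an assumed stand-in, not the PA forward model.

def adam_minimize(grad_fn, x0, steps=2000, lr=0.05,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    x = x0.astype(float).copy()
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Stand-in objective f(x) = ||x - target||^2, with gradient 2 * (x - target)
target = np.array([1.0, -2.0, 0.5])
x_opt = adam_minimize(lambda x: 2.0 * (x - target), np.zeros(3))
```

The per-parameter scaling by the second-moment estimate is what makes step sizes adapt to gradient magnitude, which is presumably why the authors combine it with their chromophore- and fluence-dependent step sizes.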
Affiliation(s)
- Jens Buchmann: Technische Universität Berlin, Institut für Optik und Atomare Physik, Berlin, Germany
- Samuel Powell: King's College London, Biomedical Engineering and Imaging Sciences, Becket House, London, United Kingdom
- Jan Laufer: Martin-Luther-Universität Halle-Wittenberg, Institut für Physik, Halle (Saale), Germany
33
Adler TJ, Ardizzone L, Vemuri A, Ayala L, Gröhl J, Kirchner T, Wirkert S, Kruse J, Rother C, Köthe U, Maier-Hein L. Uncertainty-aware performance assessment of optical imaging modalities with invertible neural networks. International Journal of Computer Assisted Radiology and Surgery 2019; 14:997-1007. PMID: 30903566; DOI: 10.1007/s11548-019-01939-9
Abstract
Purpose: Optical imaging is evolving as a key technique for advanced sensing in the operating room. Recent research has shown that machine learning algorithms can be used to address the inverse problem of converting pixel-wise multispectral reflectance measurements to underlying tissue parameters, such as oxygenation. Assessment of the specific hardware used in conjunction with such algorithms, however, has not properly addressed the possibility that the problem may be ill-posed.
Methods: We present a novel approach to the assessment of optical imaging modalities, which is sensitive to the different types of uncertainties that may occur when inferring tissue parameters. Based on the concept of invertible neural networks, our framework goes beyond point estimates and maps each multispectral measurement to a full posterior probability distribution which is capable of representing ambiguity in the solution via multiple modes. Performance metrics for a hardware setup can then be computed from the characteristics of the posteriors.
Results: Application of the assessment framework to the specific use case of camera selection for physiological parameter estimation yields the following insights: (1) estimation of tissue oxygenation from multispectral images is a well-posed problem, while (2) blood volume fraction may not be recovered without ambiguity. (3) In general, ambiguity may be reduced by increasing the number of spectral bands in the camera.
Conclusion: Our method could help to optimize optical camera design in an application-specific manner.
Affiliation(s)
- Tim J. Adler: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Anant Vemuri: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Leonardo Ayala: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Janek Gröhl: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Thomas Kirchner: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Sebastian Wirkert: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Jakob Kruse: Visual Learning Lab, Heidelberg University, Heidelberg, Germany
- Carsten Rother: Visual Learning Lab, Heidelberg University, Heidelberg, Germany
- Ullrich Köthe: Visual Learning Lab, Heidelberg University, Heidelberg, Germany
- Lena Maier-Hein: Computer Assisted Medical Interventions, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
34
Abstract
In medical applications, the accuracy and robustness of imaging methods are of crucial importance to ensure optimal patient care. While photoacoustic imaging (PAI) is an emerging modality with promising clinical applicability, state-of-the-art approaches to quantitative photoacoustic imaging (qPAI), which aim to solve the ill-posed inverse problem of recovering optical absorption from the measurements obtained, currently cannot comply with these high standards. This can be attributed to the fact that existing methods often rely on several simplifying a priori assumptions of the underlying physical tissue properties or cannot deal with realistic noise levels. In this manuscript, we address this issue with a new method for estimating an indicator of the uncertainty of an estimated optical property. Specifically, our method uses a deep learning model to compute error estimates for optical parameter estimations of a qPAI algorithm. Functional tissue parameters, such as blood oxygen saturation, are usually derived by averaging over entire signal intensity-based regions of interest (ROIs). Therefore, we propose to reduce the systematic error of the ROI samples by additionally discarding those pixels for which our method estimates a high error and thus a low confidence. In silico experiments show an improvement in the accuracy of optical absorption quantification when applying our method to refine the ROI, and it might thus become a valuable tool for increasing the robustness of qPAI methods.
35
Abstract
Reconstruction of photoacoustic (PA) images acquired with clinical ultrasound transducers is usually performed using the Delay and Sum (DAS) beamforming algorithm. Recently, a variant of DAS, referred to as Delay Multiply and Sum (DMAS) beamforming, has been shown to provide increased contrast, signal-to-noise ratio (SNR) and resolution in PA imaging. The main reasons for the use of DAS beamforming in photoacoustics are its simple implementation, real-time capability, and the linearity of the beamformed image to the PA signal. This is crucial for the identification of different chromophores in multispectral PA applications. In contrast, current DMAS implementations are not responsive to the full spectrum of sound frequencies from a photoacoustic source and have not been shown to provide a reconstruction linear to the PA signal. Furthermore, due to its increased computational complexity, DMAS has not yet been shown to work in real time. Here, we present an open-source real-time variant of the DMAS algorithm, signed DMAS (sDMAS), that ensures linearity in the original PA signal response while providing the increased image quality of DMAS. We show the applicability of sDMAS for multispectral PA applications, in vitro and in vivo. The sDMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available as real-time capable implementations.
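For orientation, conventional per-pixel DAS and pairwise DMAS beamforming can be sketched as below. The sign-preserving square root shown here is the common DMAS dimensionality fix; the signed (sDMAS) refinement the paper introduces to guarantee linearity is not reproduced, and the channel data and delay indices are synthetic examples.

```python
import numpy as np

# Minimal per-pixel DAS and pairwise DMAS beamforming sketch.
# `delays` holds the precomputed time-of-flight sample index per channel.
# This shows only the conventional forms, not the paper's sDMAS variant.

def das(channel_data, delays):
    """Delay-and-sum: sum each channel's sample at its computed delay."""
    samples = np.array([channel_data[c, d] for c, d in enumerate(delays)])
    return samples.sum()

def dmas(channel_data, delays):
    """Delay-multiply-and-sum over all channel pairs (i < j), with a
    sign-preserving square root so units match the input signal."""
    s = np.array([channel_data[c, d] for c, d in enumerate(delays)])
    out = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            prod = s[i] * s[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out

channel_data = np.arange(12, dtype=float).reshape(4, 3)  # 4 channels x 3 samples
delays = [0, 1, 2, 0]                                    # per-channel sample index
```

The pairwise products act as a spatial coherence weighting, which is where DMAS gains contrast over DAS but also loses the strict linearity that sDMAS restores.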
36
Li M, Tang Y, Yao J. Photoacoustic tomography of blood oxygenation: A mini review. Photoacoustics 2018; 10:65-73. PMID: 29988848; PMCID: PMC6033062; DOI: 10.1016/j.pacs.2018.05.001
Abstract
Photoacoustic tomography (PAT) is a hybrid imaging modality that combines the rich contrast of optical excitation and the deep penetration of ultrasound detection. With its unique optical absorption contrast mechanism, PAT is inherently sensitive to the functional and molecular information of biological tissues, and thus has been widely used in preclinical and clinical studies. Among the many functional capabilities of PAT, measuring blood oxygenation is arguably one of the most important applications, and has been widely performed in photoacoustic studies of brain functions, tumor hypoxia, wound healing, and cancer therapy. Yet, the complex optical conditions of biological tissues, especially the strong wavelength-dependent optical attenuation, have long hindered PAT measurement of blood oxygenation at depths beyond a few millimeters. A variety of PAT methods have been developed to improve the accuracy of blood oxygenation measurement, using novel laser illumination schemes, oxygen-sensitive fluorescent dyes, comprehensive mathematical models, or prior information provided by complementary imaging modalities. These novel methods have made exciting progress, while several challenges remain. This concise review aims to introduce the recent developments in photoacoustic blood oxygenation measurement, compare each method's advantages and limitations, highlight their representative applications, and discuss the remaining challenges for future advances.
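The baseline sO2 estimate that this review's methods refine is linear spectral unmixing: per-wavelength PA amplitudes are modeled as a mix of oxy- and deoxyhemoglobin absorption, concentrations are solved by least squares, and sO2 = C_HbO2 / (C_HbO2 + C_Hb). The sketch below assumes wavelength-independent fluence (the very simplification that breaks down at depth, as the review discusses), and the extinction coefficients are illustrative placeholders, not tabulated hemoglobin values.

```python
import numpy as np

# Linear spectral unmixing sketch for sO2 estimation from multiwavelength
# PA amplitudes. Assumes wavelength-independent fluence; extinction
# coefficients below are ILLUSTRATIVE placeholders, not tabulated values.

eps_matrix = np.array([
    [2.0, 1.0],   # wavelength 1: [eps_HbO2, eps_Hb]
    [1.0, 3.0],   # wavelength 2
    [1.5, 1.5],   # wavelength 3
])

def unmix_so2(pa_spectrum, eps_matrix):
    """Least-squares concentrations, then sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    conc, *_ = np.linalg.lstsq(eps_matrix, pa_spectrum, rcond=None)
    c_hbo2, c_hb = conc
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check: simulate 70% oxygenated blood and unmix it back
c_true = np.array([0.7, 0.3])
spectrum = eps_matrix @ c_true
so2 = unmix_so2(spectrum, eps_matrix)
```

In real tissue, the spectrum is additionally colored by depth-dependent fluence, which is exactly why the fluence models, dyes, and priors surveyed in this review are needed for quantitative accuracy at depth.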
Affiliation(s)
- Junjie Yao: Photoacoustic Imaging Laboratory, Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA