1
Paul A, Mallidi S. U-Net enhanced real-time LED-based photoacoustic imaging. Journal of Biophotonics 2024; 17:e202300465. [PMID: 38622811] [PMCID: PMC11164633] [DOI: 10.1002/jbio.202300465]
Abstract
Photoacoustic (PA) imaging is a hybrid imaging modality with good optical contrast and spatial resolution. Portable, cost-effective, small-footprint light-emitting diodes (LEDs) are rapidly becoming important PA optical sources. However, the key challenge faced by LED-based systems is low light fluence, which is generally compensated by heavy frame averaging, consequently reducing the acquisition frame rate. In this study, we present a simple deep learning U-Net framework that enhances the signal-to-noise ratio (SNR) and contrast of PA images obtained by averaging a low number of frames. The SNR increased approximately four-fold for both in-class in vitro phantoms (4.39 ± 2.55) and out-of-class in vivo models (4.27 ± 0.87). We also demonstrate the noise invariance of the network and discuss its downsides (blurry output and failure to reduce salt-and-pepper noise). Overall, the developed U-Net framework can provide a real-time image enhancement platform for clinically translatable PA imaging systems based on low-cost, low-energy light sources.
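The frame-averaging trade-off described above is easy to quantify: averaged white noise falls with the square root of the frame count, which is the SNR gap the network must close when only a few frames are available. A minimal numpy sketch with a made-up A-line (the pulse shape, noise level, and frame counts are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """SNR in dB: peak signal amplitude over noise standard deviation."""
    return 20 * np.log10(np.abs(signal).max() / noise.std())

# Hypothetical PA A-line: a short Gaussian-windowed pulse.
t = np.linspace(0, 1, 512)
clean = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 40 * t)

def averaged_frame(n_frames: int, noise_sigma: float = 0.5) -> np.ndarray:
    """Average n_frames noisy acquisitions; SNR grows roughly as sqrt(n_frames)."""
    frames = clean + rng.normal(0, noise_sigma, size=(n_frames, t.size))
    return frames.mean(axis=0)

few = averaged_frame(4)     # fast acquisition: high frame rate, noisy
many = averaged_frame(256)  # heavy averaging: clean but slow
gain_db = snr_db(clean, many - clean) - snr_db(clean, few - clean)
```

With a 64-fold increase in frames, `gain_db` comes out near 20·log10(8), about 18 dB; the U-Net in the paper aims to recover a comparable gain directly from the lightly averaged input.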
Affiliation(s)
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
2
Xu M, Ma Q, Zhang H, Kong D, Zeng T. MEF-UNet: An end-to-end ultrasound image segmentation algorithm based on multi-scale feature extraction and fusion. Comput Med Imaging Graph 2024; 114:102370. [PMID: 38513396] [DOI: 10.1016/j.compmedimag.2024.102370]
Abstract
Ultrasound image segmentation is challenging due to the complexity of lesion types, fuzzy boundaries, and low-contrast images, along with the presence of noise and artifacts. To address these issues, we propose an end-to-end multi-scale feature extraction and fusion network (MEF-UNet) for the automatic segmentation of ultrasound images. Specifically, we first design a selective feature extraction encoder, comprising a detail extraction stage and a structure extraction stage, to precisely capture the edge details and overall shape features of lesions. To enhance the representation capacity of contextual information, we develop a context information storage module in the skip connections, responsible for integrating information from two adjacent layers of feature maps. In addition, we design a multi-scale feature fusion module in the decoder to merge feature maps of different scales. Experimental results indicate that MEF-UNet significantly improves segmentation results in both quantitative analysis and visual quality.
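As a generic illustration of what a multi-scale fusion step does (not the actual MEF-UNet module, whose fusion is learned), coarse feature maps can be upsampled to the finest resolution and concatenated along the channel axis:

```python
import numpy as np

def fuse_multiscale(feats):
    """Upsample each coarse feature map to the finest spatial resolution by
    nearest-neighbour repetition, then concatenate along channels: the basic
    skeleton of a multi-scale feature fusion step."""
    target = max(f.shape[0] for f in feats)
    up = []
    for f in feats:
        r = target // f.shape[0]
        up.append(np.repeat(np.repeat(f, r, axis=0), r, axis=1))
    return np.concatenate(up, axis=2)

# Illustrative decoder features at three scales (H, W, C).
f1 = np.ones((32, 32, 8))    # fine scale
f2 = np.ones((16, 16, 16))   # mid scale
f3 = np.ones((8, 8, 32))     # coarse scale
fused = fuse_multiscale([f1, f2, f3])   # shape (32, 32, 56)
```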
Affiliation(s)
- Mengqi Xu
- School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
- Qianting Ma
- School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
- Huajie Zhang
- School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, 310027, China
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong Special Administrative Region of China
3
Song X, Zhong W, Li Z, Peng S, Zhang H, Wang G, Dong J, Liu X, Xu X, Liu Q. Accelerated model-based iterative reconstruction strategy for sparse-view photoacoustic tomography aided by multi-channel autoencoder priors. Journal of Biophotonics 2024; 17:e202300281. [PMID: 38010827] [DOI: 10.1002/jbio.202300281]
Abstract
Photoacoustic tomography (PAT) commonly operates with sparse views due to data acquisition limitations. However, reconstructions by traditional algorithms deteriorate seriously (e.g., severe artifacts) under sparse views. Here, a novel accelerated model-based iterative reconstruction strategy for sparse-view PAT aided by multi-channel autoencoder priors is proposed. A multi-channel denoising autoencoder network is designed to learn prior information, which constrains the model-based iterative reconstruction. This integration accelerates the iteration process, leading to optimal reconstruction outcomes. The performance of the proposed method was evaluated using simulated blood-vessel data and experimental data. The results show that the proposed method achieves superior sparse-view reconstruction with significantly accelerated iteration. Notably, the proposed method performs well under extremely sparse conditions (e.g., 32 projections) compared with the U-Net method, with improvements of 48% in PSNR and 12% in SSIM for in vivo experimental data.
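The core loop of such prior-aided model-based reconstruction can be sketched in a few lines: alternate a gradient step on the data-fidelity term with a projection through the prior. Here a toy linear forward model and simple soft-thresholding stand in for the acoustic model and the learned multi-channel autoencoder; all sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a sparse-view system matrix.
n_meas, n_pix = 32, 64
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[[5, 20, 40]] = 1.0          # sparse "vessel" image
y = A @ x_true                      # sparse-view measurements

def prior_step(x, tau=0.01):
    """Stand-in for the learned autoencoder prior: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def reconstruct(y, A, n_iter=200, step=0.1):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))  # data-fidelity gradient step
        x = prior_step(x)                   # prior constraint (learned in the paper)
    return x

x_hat = reconstruct(y, A)
```

The paper's contribution is in the prior (a trained multi-channel denoising autoencoder) and the resulting acceleration; the alternation scheme above is only the generic scaffold it plugs into.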
Affiliation(s)
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
- Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang, China
- Zilong Li
- School of Information Engineering, Nanchang University, Nanchang, China
- Shuchong Peng
- Jiluan Academy, Nanchang University, Nanchang, China
- Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Guijun Wang
- School of Information Engineering, Nanchang University, Nanchang, China
- Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang, China
- Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang, China
- Xiaoling Xu
- School of Information Engineering, Nanchang University, Nanchang, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
4
Li JN, Zhang SW, Qiang YR, Zhou QY. A novel cross-layer dual encoding-shared decoding network framework with spatial self-attention mechanism for hippocampus segmentation. Comput Biol Med 2023; 167:107584. [PMID: 37883852] [DOI: 10.1016/j.compbiomed.2023.107584]
Abstract
Accurate segmentation of the hippocampus from brain magnetic resonance images (MRIs) is a crucial task in neuroimaging research, since its structural integrity is strongly related to several neurodegenerative disorders, such as Alzheimer's disease (AD). Automatic segmentation of hippocampus structures is challenging due to the small volume, complex shape, low contrast, and discontinuous boundaries of the hippocampus. Although some methods have been developed for hippocampus segmentation, most of them pay too much attention to hippocampus shape and volume instead of considering the spatial information. Additionally, the extracted features are independent of each other, ignoring the correlation between global and local information. In view of this, we propose a novel cross-layer dual Encoding-Shared Decoding network framework with a Spatial self-Attention mechanism (ESDSA) for hippocampus segmentation in human brains. Considering that the hippocampus occupies only a relatively small part of the MRI, we introduce the spatial self-attention mechanism in ESDSA to capture the spatial information of the hippocampus and improve segmentation accuracy. We also design a cross-layer dual encoding-shared decoding network to effectively extract the global information of MRIs and the spatial information of the hippocampus. The spatial features of the hippocampus and the features extracted from the MRIs are combined to realize hippocampus segmentation. Results on baseline T1-weighted structural MRI data show that ESDSA outperforms other state-of-the-art methods, achieving a Dice similarity coefficient of 89.37%. In addition, the Dice similarity coefficients of the Spatial Self-Attention (SSA) strategy and the dual Encoding-Shared Decoding (ESD) strategy are 9.47% and 5.35% higher than that of the baseline U-Net, respectively, indicating that the SSA and ESD strategies can effectively enhance the segmentation accuracy for the human hippocampus.
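The spatial self-attention operation at the heart of the SSA strategy lets every spatial position weigh features from all other positions. A bare numpy sketch of generic single-head spatial self-attention on a flattened feature map, with random weights standing in for trained ones (shapes and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(feat, w_q, w_k, w_v):
    """feat: (H*W, C) feature map flattened over spatial positions.
    Every position attends to all others, so a small structure like the
    hippocampus can borrow context from the whole slice."""
    q, k, v = feat @ w_q, feat @ w_k, feat @ w_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)  # (HW, HW)
    return attn @ v

C, d = 8, 4
feat = rng.normal(size=(16 * 16, C))
w_q, w_k, w_v = (rng.normal(size=(C, d)) for _ in range(3))
out = spatial_self_attention(feat, w_q, w_k, w_v)   # (256, 4)
```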
Affiliation(s)
- Jia-Ni Li
- MOE Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
- Shao-Wu Zhang
- MOE Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
- Yan-Rui Qiang
- MOE Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
- Qin-Yi Zhou
- MOE Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
5
Cai R, Liu Y, Sun Z, Wang Y, Wang Y, Li F, Jiang H. Deep-learning based segmentation of ultrasound adipose image for liposuction. Int J Med Robot 2023; 19:e2548. [PMID: 37448348] [DOI: 10.1002/rcs.2548]
Abstract
BACKGROUND To develop an automatic and reliable ultrasonic visual system for robot- or computer-assisted liposuction, we examined the use of deep learning for the segmentation of adipose ultrasound images in clinical and educational settings. METHODS To segment adipose layers, we propose an Attention Skip-Convolutions ResU-Net (Attention SCResU-Net) consisting of SC residual blocks, attention gates, and the U-Net architecture. Transfer learning is utilized to compensate for the scarcity of clinical data, using Bama pig and clinical human adipose ultrasound image datasets. RESULTS The final model obtains a Dice of 99.06 ± 0.95% and an ASD of 0.19 ± 0.18 mm on the clinical dataset, outperforming other methods. By fine-tuning the eight deepest layers, accurate and stable segmentation results are obtained. CONCLUSIONS The new deep-learning method achieves accurate and automatic segmentation of adipose ultrasound images in real time, thereby enhancing the safety of liposuction and enabling novice surgeons to better control the cannula.
Affiliation(s)
- Ruxin Cai
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Yanzhen Liu
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Zhibin Sun
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Yuneng Wang
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
- Yu Wang
- Beihang University, School of Biological Science and Medical Engineering, Beijing, China
- Facheng Li
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
- Haiyue Jiang
- Chinese Academy of Medical Sciences and Peking Union Medical College, Plastic Surgery Hospital, Beijing, China
6
Mondal S, Paul S, Singh N, Saha RK. Deep learning on photoacoustic tomography to remove image distortion due to inaccurate measurement of the scanning radius. Biomedical Optics Express 2023; 14:5817-5832. [PMID: 38021110] [PMCID: PMC10659812] [DOI: 10.1364/boe.501277]
Abstract
Photoacoustic tomography (PAT) is a non-invasive, non-ionizing hybrid imaging modality that holds great potential for various biomedical applications, and its incorporation with deep learning (DL) methods has advanced notably in recent times. In a typical 2D PAT setup, a single-element ultrasound detector (USD) collects the PA signals by making a 360° full scan of the imaging region. The traditional backprojection (BP) algorithm has been widely used to reconstruct PAT images from the acquired signals. Accurate determination of the scanning radius (SR) is required for proper image reconstruction; even a slight deviation from its nominal value can distort the image and compromise reconstruction quality. To address this challenge, two approaches were developed and examined herein. The first framework is a modified version of the dense U-Net (DUNet) architecture. The second combines a DL-based convolutional neural network (CNN) for image classification with a subsequent DUNet. The first protocol was trained with heterogeneous simulated images generated from three different phantoms to learn the relationship between the reconstructed images and the corresponding ground-truth (GT) images. In the second scheme, the first stage was trained with the same heterogeneous dataset to classify the image type, and the second stage was trained individually with the appropriate images. The performance of these architectures was tested on both simulated and experimental images. The first method can sustain SR deviations up to approximately 6% for simulated images and 5% for experimental images and can accurately reproduce the GTs. The classification-based approach extends these limits further (to approximately 7% and 8% for simulated and experimental images, respectively). Our results suggest that the classification-based DL method does not need a precise assessment of the SR for accurate PAT image formation.
Affiliation(s)
- Sudeep Mondal
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Subhadip Paul
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Navjot Singh
- Department of Information Technology, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Ratan K Saha
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
7
John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023; 32:100533. [PMID: 37636547] [PMCID: PMC10448345] [DOI: 10.1016/j.pacs.2023.100533]
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emerging diagnostic technology, owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. Such steady growth of PA imaging can mainly be attributed to its salient features: it is non-ionizing, cost-effective, and easily deployable, and it offers sufficient axial, lateral, and temporal resolution for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, a combination that confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or exogenous sources such as nano-sized tunable optical absorbers or dyes that may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics and therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
8
Wang T, Chen C, Shen K, Liu W, Tian C. Streak artifact suppressed back projection for sparse-view photoacoustic computed tomography. Applied Optics 2023; 62:3917-3925. [PMID: 37706701] [DOI: 10.1364/ao.487957]
Abstract
The development of fast and accurate image reconstruction algorithms under constrained data acquisition conditions is important for photoacoustic computed tomography (PACT). Sparse-view measurements have been used to accelerate data acquisition and reduce system complexity; however, the reconstructed images suffer from sparsity-induced streak artifacts. In this paper, a modified back-projection (BP) method termed anti-streak BP is proposed to suppress streak artifacts in sparse-view PACT reconstruction. During reconstruction, anti-streak BP identifies the back-projection terms contaminated by high-intensity sources using an outlier detection method, then adaptively adjusts the weights of the contaminated terms to eliminate the effects of those sources. The proposed anti-streak BP method is compared with conventional BP on both simulation and in vivo data. It produces substantially fewer artifacts in the reconstructed images, with a streak index 54% and 20% lower than that of conventional BP on simulation and in vivo data, respectively, when the transducer number is N = 128. The anti-streak BP method is thus a powerful improvement of BP, adding artifact suppression.
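The adaptive weighting idea can be illustrated per pixel: among the per-detector back-projection terms, contributions contaminated by a high-intensity source show up as outliers and are down-weighted before summation. The sketch below uses a median-absolute-deviation test as a stand-in for the paper's outlier detector; all numbers are illustrative.

```python
import numpy as np

def robust_backproject(terms, k=3.0):
    """Combine per-detector back-projection terms for one pixel,
    zero-weighting terms flagged as outliers (streak contributions from
    high-intensity sources) by a median-absolute-deviation test."""
    terms = np.asarray(terms, dtype=float)
    med = np.median(terms)
    mad = np.median(np.abs(terms - med)) + 1e-12
    weights = np.where(np.abs(terms - med) > k * 1.4826 * mad, 0.0, 1.0)
    return (weights * terms).sum() / weights.sum()

# 128 detectors agree on a value near 1.0; a few terms carry a large
# spurious contribution from a bright source elsewhere in the image.
rng = np.random.default_rng(3)
terms = rng.normal(1.0, 0.05, size=128)
terms[[10, 50, 90]] += 20.0           # contaminated terms
naive = terms.mean()                   # conventional BP: biased (streaked)
robust = robust_backproject(terms)     # anti-streak style: outliers excluded
```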
9
Wang F, Kim SH, Zhao Y, Raghuram A, Veeraraghavan A, Robinson J, Hielscher AH. High-Speed Time-Domain Diffuse Optical Tomography with a Sensitivity Equation-based Neural Network. IEEE Transactions on Computational Imaging 2023; 9:459-474. [PMID: 37456517] [PMCID: PMC10348778] [DOI: 10.1109/tci.2023.3273423]
Abstract
Steady progress in time-domain diffuse optical tomography (TD-DOT) technology is allowing for the first time the design of low-cost, compact, and high-performance systems, thus promising more widespread clinical TD-DOT use, such as for recording brain tissue hemodynamics. TD-DOT is known to provide more accurate values of optical properties and physiological parameters compared to its frequency-domain or steady-state counterparts. However, achieving high temporal resolution is still difficult, as solving the inverse problem is computationally demanding, leading to relatively long reconstruction times. The runtime is further compromised by processes that involve 'nontrivial' empirical tuning of reconstruction parameters, which increases complexity and inefficiency. To address these challenges, we present a new reconstruction algorithm that combines a deep-learning approach with our previously introduced sensitivity-equation-based, non-iterative sparse optical reconstruction (SENSOR) code. The new algorithm (called SENSOR-NET) unfolds the iterations of SENSOR into a deep neural network. In this way, we achieve high-resolution sparse reconstruction using only learned parameters, thus eliminating the need to tune parameters prior to reconstruction empirically. Furthermore, once trained, the reconstruction time is not dependent on the number of sources or wavelengths used. We validate our method with numerical and experimental data and show that accurate reconstructions with 1 mm spatial resolution can be obtained in under 20 milliseconds regardless of the number of sources used in the setup. This opens the door for real-time brain monitoring and other high-speed DOT applications.
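The unfolding idea maps each iteration of a sparse solver onto a network layer with its own learnable parameters; because the layer count is fixed, runtime no longer depends on how many sources or wavelengths produced the measurements. A schematic numpy version with hand-picked (not learned) per-layer step sizes and thresholds, on a toy linear problem:

```python
import numpy as np

rng = np.random.default_rng(4)

def unrolled_reconstruct(y, A, steps, thresholds):
    """One 'layer' per iteration: gradient step on ||Ax - y||^2, then
    soft-thresholding. SENSOR-NET learns the per-layer parameters; the
    values used here are fixed illustrative stand-ins."""
    x = np.zeros(A.shape[1])
    for step, tau in zip(steps, thresholds):
        x = x - step * (A.T @ (A @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    return x

A = rng.normal(size=(20, 40)) / np.sqrt(20)   # toy sensitivity matrix
x_true = np.zeros(40)
x_true[[3, 17]] = 1.0                          # sparse absorber map
y = A @ x_true

# Ten layers -> a fixed, measurement-independent reconstruction cost.
x_hat = unrolled_reconstruct(y, A, steps=[0.1] * 10, thresholds=[0.01] * 10)
```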
Affiliation(s)
- Fay Wang
- Department of Biomedical Engineering, Columbia University, New York, NY 10027
- Stephen H Kim
- Department of Biomedical Engineering, New York University - Tandon School of Engineering, New York, NY 10001
- Yongyi Zhao
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ankit Raghuram
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Jacob Robinson
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Andreas H Hielscher
- Department of Biomedical Engineering, New York University - Tandon School of Engineering, New York, NY 10001
10
Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. Photoacoustics 2023; 30:100484. [PMID: 37095888] [PMCID: PMC10121479] [DOI: 10.1016/j.pacs.2023.100484]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here, so that a single framework can deal with various distortion functions adaptively. The vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be plugged directly into the model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on a physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used to enhance simulated and in vivo AR-PAM images, collectively demonstrating the effectiveness of the proposed method. Quantitatively, the proposed algorithm achieves the best PSNR and SSIM values in all three simulation scenarios, and in an in vivo test the SNR and CNR values rose significantly, from 6.34 and 5.79 to 35.37 and 29.66, respectively.
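The key observation, that the AR-PAM blur kernel is set jointly by imaging depth and transducer center frequency, can be mimicked with a toy PSF generator. The Gaussian form and the scaling constant below are illustrative assumptions, not the paper's derived physical model:

```python
import numpy as np

def psf_kernel(depth_mm: float, f_mhz: float, size: int = 21) -> np.ndarray:
    """Hypothetical lateral PSF: a normalized Gaussian whose width grows
    with imaging depth and shrinks with transducer center frequency."""
    sigma = 2.0 * depth_mm / f_mhz      # illustrative scaling only
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

shallow = psf_kernel(depth_mm=2, f_mhz=50)   # tight PSF: little blur
deep = psf_kernel(depth_mm=10, f_mhz=20)     # broad PSF: more blur to undo
```

Because the kernel changes with depth and frequency, a single end-to-end network trained for one condition mismatches the others, which is what motivates plugging one learned prior into a model-based solver parameterized by the current PSF.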
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
- Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
- Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
- Yuanjin Zheng (corresponding author)
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
11
Zhang W, Hu T, Li Z, Sun Z, Jia K, Dou H, Feng J, Pogue BW. Selfrec-Net: self-supervised deep learning approach for the reconstruction of Cherenkov-excited luminescence scanned tomography. Biomedical Optics Express 2023; 14:783-798. [PMID: 36874507] [PMCID: PMC9979688] [DOI: 10.1364/boe.480429]
Abstract
As an emerging imaging technique, Cherenkov-excited luminescence scanned tomography (CELST) can recover a high-resolution 3D distribution of quantum emission fields within tissue, using X-ray excitation for deep penetrance. However, its reconstruction is an ill-posed and under-conditioned inverse problem because of the diffuse optical emission signal. Deep learning based image reconstruction has shown very good potential for solving these types of problems, but it suffers from a lack of ground-truth image data for confirmation when used with experimental data. To overcome this, a self-supervised network cascading a 3D reconstruction network and the forward model, termed Selfrec-Net, was proposed to perform CELST reconstruction. Under this framework, the boundary measurements are input to the network to reconstruct the distribution of the quantum field, and the predicted measurements are subsequently obtained by feeding the reconstructed result to the forward model. The network was trained by minimizing the loss between the input measurements and the predicted measurements, rather than between the reconstructed distributions and the corresponding ground truths. Comparative experiments were carried out on both numerical simulations and physical phantoms. For single luminescent targets, the results demonstrate the effectiveness and robustness of the proposed network: performance was comparable to a state-of-the-art deep supervised learning algorithm, and the accuracy of the emission yield and object localization was far superior to iterative reconstruction methods. Reconstruction of multiple objects remains reasonable, with high localization accuracy, although the emission yield accuracy degrades as the distribution becomes more complex. Overall, Selfrec-Net provides a self-supervised way to recover the location and emission yield of molecular distributions in murine model tissues.
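The training signal here is measurement consistency rather than ground truth: reconstruct, push the result back through the forward model, and penalize the mismatch with the input measurements. A linear toy version, where a matrix R stands in for the 3D reconstruction network and F for the known forward model (every dimension and constant is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear stand-ins: R plays the reconstruction network, F the known
# forward model mapping emission sources to boundary measurements.
n_src, n_meas = 6, 12
F = rng.normal(size=(n_meas, n_src)) / np.sqrt(n_meas)
y = F @ rng.uniform(0.5, 1.0, size=n_src)      # boundary measurements

R = np.zeros((n_src, n_meas))                   # "network" parameters
lr = 0.05
for _ in range(3000):
    x_hat = R @ y                               # reconstruct
    y_hat = F @ x_hat                           # re-project via forward model
    # Gradient of 0.5 * ||y_hat - y||^2 w.r.t. R; note that no
    # ground-truth x is ever used, only the measurements themselves.
    R -= lr * np.outer(F.T @ (y_hat - y), y)

loss = np.linalg.norm(F @ (R @ y) - y)          # measurement-consistency loss
```

The real Selfrec-Net replaces R with a 3D convolutional network and F with the diffuse-optics forward model, but the loss has the same self-supervised structure.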
Affiliation(s)
- Wenqian Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Ting Hu
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhe Li
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhonghua Sun
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Kebin Jia
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Huijing Dou
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jinchao Feng
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Brian W. Pogue
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705, USA
12
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE Transactions on Medical Imaging 2022; 41:3636-3648. [PMID: 35849667] [DOI: 10.1109/tmi.2022.3192072]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve deeper imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality towards that of OR-PAM, specifically the enhancement of imaging resolution, restoration of micro-vasculature, and reduction of artifacts. To this end, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs, synthesized with a physical transducer model. Moderate enhancement can already be obtained when applying this model to in vivo AR imaging data; nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under a generative adversarial network (GAN) framework is therefore proposed to drive the manifold of the enhanced images towards that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement is obtained, supported by quantitative analysis: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values increased significantly, from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework has enabled this enhancement with a limited amount of matched in vivo AR-OR imaging data.
13
Wang T, He M, Shen K, Liu W, Tian C. Learned regularization for image reconstruction in sparse-view photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2022; 13:5721-5737. [PMID: 36733736 PMCID: PMC9872879 DOI: 10.1364/boe.469460] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 09/07/2022] [Accepted: 10/01/2022] [Indexed: 06/18/2023]
Abstract
Constrained data acquisitions, such as sparse view measurements, are sometimes used in photoacoustic computed tomography (PACT) to accelerate data acquisition. However, it is challenging to reconstruct high-quality images under such scenarios. Iterative image reconstruction with regularization is a typical choice to solve this problem but it suffers from image artifacts. In this paper, we present a learned regularization method to suppress image artifacts in model-based iterative reconstruction in sparse view PACT. A lightweight dual-path network is designed to learn regularization features from both the data and the image domains. The network is trained and tested on both simulation and in vivo datasets and compared with other methods such as Tikhonov regularization, total variation regularization, and a U-Net based post-processing approach. Results show that although the learned regularization network possesses a size of only 0.15% of a U-Net, it outperforms other methods and converges after as few as five iterations, which takes less than one-third of the time of conventional methods. Moreover, the proposed reconstruction method incorporates the physical model of photoacoustic imaging and explores structural information from training datasets. The integration of deep learning with a physical model can potentially achieve improved imaging performance in practice.
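The model-based iteration described above can be sketched in a few lines. The following is a hypothetical toy example with a random forward matrix `A` standing in for the photoacoustic system model, and it uses the classical Tikhonov penalty that the paper compares against, not the learned regularizer itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-view forward model: fewer measurements (rows) than image pixels.
n_pix, n_meas = 64, 32
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[[5, 20, 41]] = 1.0            # a few point absorbers
y = A @ x_true                        # simulated sparse-view measurements

def reconstruct(A, y, lam=1e-2, n_iter=2000):
    """Gradient descent on 0.5*||Ax - y||^2 + 0.5*lam*||x||^2 (Tikhonov)."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # step below 1/Lipschitz
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * x           # data term + regularizer
        x -= step * grad
    return x

x_rec = reconstruct(A, y)
```

In the paper, the handcrafted penalty gradient is replaced by the output of a small learned network operating on both the data and image domains, which the authors report converging in as few as five iterations.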
Affiliation(s)
- Tong Wang: School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Menghui He: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Kang Shen: School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Wen Liu: School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Chao Tian: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China; School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
14
Hui X, Malik MOA, Pramanik M. Looking deep inside tissue with photoacoustic molecular probes: a review. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:070901. [PMID: 36451698 PMCID: PMC9307281 DOI: 10.1117/1.jbo.27.7.070901] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 07/01/2022] [Indexed: 05/19/2023]
Abstract
Significance Deep tissue noninvasive high-resolution imaging with light is challenging due to the high degree of light absorption and scattering in biological tissue. Photoacoustic imaging (PAI) can overcome some of the challenges of pure optical or ultrasound imaging to provide high-resolution deep tissue imaging. However, label-free PAI signals from light-absorbing chromophores within the tissue are nonspecific. The use of exogenous contrast agents (probes) not only enhances the imaging contrast (and imaging depth) but also increases the specificity of PAI by binding only to targeted molecules and often providing signals distinct from the background. Aim We aim to review the current development and future progression of photoacoustic molecular probes/contrast agents. Approach First, PAI and the need for using contrast agents are briefly introduced. Then, the recent development of contrast agents in terms of the materials used to construct them is discussed. Next, various probes are discussed based on targeting mechanisms, in vivo molecular imaging applications, multimodal uses, and use in theranostic applications. Results Material combinations are being used to develop highly specific contrast agents. In addition to passive accumulation, probes utilizing activation mechanisms show promise for greater controllability. Several probes also enable concurrent multimodal use with fluorescence, ultrasound, Raman, magnetic resonance imaging, and computed tomography. Finally, targeted probes are also shown to aid localized and molecularly specific photo-induced therapy. Conclusions The development of contrast agents provides a promising prospect for increased contrast, higher imaging depth, and molecularly specific information. Of note are agents that allow for controlled activation, explore other optical windows, and enable multimodal use to overcome some of the shortcomings of label-free PAI.
Affiliation(s)
- Xie Hui: Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
- Mohammad O. A. Malik: Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
- Manojit Pramanik: Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
15
Zuo H, Cui M, Wang X, Ma C. Spectral crosstalk in photoacoustic computed tomography. PHOTOACOUSTICS 2022; 26:100356. [PMID: 35574185 PMCID: PMC9095891 DOI: 10.1016/j.pacs.2022.100356] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 04/04/2022] [Accepted: 04/11/2022] [Indexed: 06/15/2023]
Abstract
Multispectral photoacoustic (PA) imaging faces two major challenges: the spectral coloring effect, which has been studied extensively as an optical inversion problem, and the spectral crosstalk, which is basically a result of non-ideal acoustic inversion. So far, there is no systematic work to analyze the spectral crosstalk because acoustic inversion and spectroscopic measurement are always treated as decoupled. In this work, we theorize and demonstrate through a series of simulations and experiments how imperfect acoustic inversion induces inaccurate PA spectrum measurement. We provide detailed analysis to elucidate how different factors, including limited bandwidth, limited view, light attenuation, out-of-plane signal, and image reconstruction schemes, conspire to render the measured PA spectrum inaccurate. We found that the model-based reconstruction outperforms universal back-projection in suppressing the spectral crosstalk in some cases.
Affiliation(s)
- Hongzhi Zuo: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Manxiu Cui: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Xuanhao Wang: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Cheng Ma: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; Center for Clinical Big Data Research, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China; Photomedicine Laboratory, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China
16
Rajendran P, Pramanik M. High frame rate (∼3 Hz) circular photoacoustic tomography using single-element ultrasound transducer aided with deep learning. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:066005. [PMID: 36452448 PMCID: PMC9209813 DOI: 10.1117/1.jbo.27.6.066005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Accepted: 06/01/2022] [Indexed: 05/29/2023]
Abstract
SIGNIFICANCE In circular scanning photoacoustic tomography (PAT), it takes several minutes to generate an image of acceptable quality, especially with a single-element ultrasound transducer (UST). The imaging speed can be enhanced by faster scanning (with high repetition rate light sources) and by using multiple USTs. However, artifacts arising from sparse signal acquisition and the low signal-to-noise ratio at higher scanning speeds limit the imaging speed. Thus, there is a need to improve the imaging speed of PAT systems without hampering image quality. AIM To improve the frame rate (or imaging speed) of the PAT system by using deep learning (DL). APPROACH We propose a novel U-Net-based DL framework to reconstruct PAT images from fast scanning data. RESULTS The efficiency of the network was evaluated on both single- and multiple-UST-based PAT systems. Both phantom and in vivo imaging demonstrate that the network can improve the imaging frame rate by approximately sixfold in single-UST-based PAT systems and by approximately twofold in multi-UST-based PAT systems. CONCLUSIONS We propose a method to improve the frame rate (or imaging speed) by using DL; with this method, a frame rate of ∼3 Hz is achieved without hampering the quality of the reconstructed image.
Affiliation(s)
- Manojit Pramanik: Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
17
Browne AW, Deyneka E, Ceccarelli F, To JK, Chen S, Tang J, Vu AN, Baldi PF. Deep learning to enable color vision in the dark. PLoS One 2022; 17:e0265185. [PMID: 35385502 PMCID: PMC8985995 DOI: 10.1371/journal.pone.0265185] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Accepted: 02/24/2022] [Indexed: 12/02/2022] Open
Abstract
Humans perceive light in the visible spectrum (400-700 nm). Some night vision systems use infrared light that is not perceptible to humans and the images rendered are transposed to a digital display presenting a monochromatic image in the visible spectrum. We sought to develop an imaging algorithm powered by optimized deep learning architectures whereby infrared spectral illumination of a scene could be used to predict a visible spectrum rendering of the scene as if it were perceived by a human with visible spectrum light. This would make it possible to digitally render a visible spectrum scene to humans when they are otherwise in complete “darkness” and only illuminated with infrared light. To achieve this goal, we used a monochromatic camera sensitive to visible and near infrared light to acquire an image dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm) and blue (447 nm) as well as infrared wavelengths (718, 777, and 807 nm). We then optimized a convolutional neural network with a U-Net-like architecture to predict visible spectrum images from only near-infrared images. This study serves as a first step towards predicting human visible spectrum scenes from imperceptible near-infrared illumination. Further work can profoundly contribute to a variety of applications including night vision and studies of biological samples sensitive to visible light.
Affiliation(s)
- Andrew W. Browne: Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, USA; Institute for Clinical and Translational Sciences, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Ekaterina Deyneka: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Francesco Ceccarelli: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Josiah K. To: Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, USA
- Siwei Chen: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Jianing Tang: Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Anderson N. Vu: Gavin Herbert Eye Institute, Center for Translational Vision Research, Department of Ophthalmology, University of California-Irvine, Irvine, CA, USA
- Pierre F. Baldi: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
18
Ueda K, Ikeda K, Koyama O, Yamada M. Absolute phase retrieval of shiny objects using fringe projection and deep learning with computer-graphics-based images. APPLIED OPTICS 2022; 61:2750-2756. [PMID: 35471347 DOI: 10.1364/ao.450723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 03/07/2022] [Indexed: 06/14/2023]
Abstract
Fringe projection profilometry is a high-precision method used to measure the 3D shape of an object by projecting sinusoidal fringes onto an object. However, fringes projected onto a metallic or shiny object are distorted nonlinearly, which causes significant measurement errors. A high-precision measurement method for shiny objects that employs computer graphics (CG) and deep learning is proposed. We trained a deep neural network by projecting fringes on a shiny object in CG space. Our results show that the method can reduce the nonlinear fringe distortion caused by gloss in real space.
19
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
20
Kim J, Lee H, Im S, Lee SA, Kim D, Toh KA. Machine learning-based leaky momentum prediction of plasmonic random nanosubstrate. OPTICS EXPRESS 2021; 29:30625-30636. [PMID: 34614783 DOI: 10.1364/oe.437939] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 08/29/2021] [Indexed: 06/13/2023]
Abstract
In this work, we explore the use of machine learning to construct the leakage radiation characteristics of surface plasmon polaritons (SPPs) from bright-field images of nanoislands on a plasmonic random nanosubstrate. Leakage radiation refers to a leaky wave of SPP modes through a dielectric substrate, which has drawn interest because it enables direct visualization and analysis of SPP propagation. A fast-learning two-layer neural network has been deployed to learn and predict the relationship between the leakage radiation characteristics and the bright-field images of nanoislands using a limited number of training samples. The proposed learning framework is expected to significantly simplify the process of leakage radiation image construction without the need for sophisticated equipment. Moreover, a wide range of application extensions can be anticipated for the proposed image-to-image prediction.
21
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of photoacoustic tomography's current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the growth of graphics processing unit capabilities in recent years has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
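As a concrete and deliberately tiny illustration of the gradient-descent update mentioned above, assuming nothing from the reviewed papers, here is a one-parameter model fitted by repeatedly stepping against the gradient of a mean-squared-error loss:

```python
import numpy as np

# theta <- theta - lr * d(loss)/d(theta), for a one-parameter model y = w * x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                              # data generated with true slope w = 2

w, lr = 0.0, 0.01
for _ in range(500):
    residual = w * x - y                 # model error on each sample
    grad = np.mean(2.0 * residual * x)   # d/dw of the mean squared error
    w -= lr * grad                       # gradient-descent parameter update

print(round(w, 3))                       # prints 2.0
```

Deep networks apply exactly this rule, only with millions of parameters and gradients obtained by backpropagation rather than a hand-derived formula.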
Affiliation(s)
- Anthony DiSpirito: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik: School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
22
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/18/2021] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China; Beijing Innovation Center for Future Chip, Beijing, China
23
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. PHOTOACOUSTICS 2021; 21:100215. [PMID: 33425679 PMCID: PMC7779783 DOI: 10.1016/j.pacs.2020.100215] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 10/11/2020] [Accepted: 10/11/2020] [Indexed: 05/02/2023]
Abstract
Machine learning has developed dramatically and seen many applications in various fields over the past few years. This boom originated around 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on some important benchmarks; it was subsequently adopted widely in academia and industry. From image analysis to natural language processing, deep neural networks have become the state-of-the-art machine learning models. They hold great potential in medical imaging technology, medical data analysis, medical diagnosis, and other healthcare issues, and are being promoted in both preclinical and clinical settings. In this review, we present an overview of new developments and challenges in the application of machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aim of this review is threefold: (i) introducing deep learning with some important basics, (ii) reviewing recent works that apply deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) providing open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Affiliation(s)
- Changchun Yang: Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Hengrong Lan: Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China; Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao: Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao: Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
24
Zhang K, Hu J, Yang W. Deep Compressed Imaging via Optimized-Pattern Scanning. PHOTONICS RESEARCH 2021; 9:B57-B70. [PMID: 34532505 PMCID: PMC8443127 DOI: 10.1364/prj.410556] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 01/13/2021] [Indexed: 05/31/2023]
Abstract
The need for high-speed imaging in applications such as biomedicine, surveillance, and consumer electronics has called for new developments in imaging systems. While industrial effort continuously pushes the advance of silicon focal plane array image sensors, imaging through a single-pixel detector has gained significant interest thanks to the development of computational algorithms. Here, we present a new imaging modality, Deep Compressed Imaging via Optimized-Pattern Scanning (DeCIOPS), which can significantly increase the acquisition speed of a single-detector-based imaging system. We project and scan an illumination pattern across the object and collect the sampling signal with a single-pixel detector. We develop an innovative end-to-end optimized auto-encoder, using a deep neural network and a compressed sensing algorithm, to optimize the illumination pattern, which allows us to faithfully reconstruct the image from a small number of samples at a high frame rate. Compared with conventional switching-mask-based single-pixel cameras and point-scanning imaging systems, our method achieves a much higher imaging speed while retaining similar imaging quality. We experimentally validated this imaging modality under both continuous-wave (CW) and pulsed-light illumination and showed high-quality image reconstructions at a high compressed sampling rate. This new compressed sensing modality could be widely applied in different imaging systems, enabling new applications that require high imaging speed.
Affiliation(s)
- Kangning Zhang: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
- Junjie Hu: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
- Weijian Yang: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
25
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020; 66. [PMID: 33361580 DOI: 10.1088/1361-6560/abd669] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 12/23/2020] [Indexed: 01/09/2023]
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality finding its way into clinical practice. Although the photoacoustic phenomenon was known more than a century ago, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging system in clinical practice.
Affiliation(s)
- Dhiman Das: School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Arunima Sharma: School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Praveenbalaji Rajendran: School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Manojit Pramanik: School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore 637457, Singapore
26
Rajendran P, Pramanik M. Deep learning approach to improve tangential resolution in photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2020; 11:7311-7323. [PMID: 33408998 PMCID: PMC7747891 DOI: 10.1364/boe.410145] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/29/2020] [Accepted: 11/15/2020] [Indexed: 05/09/2023]
Abstract
In circular scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. However, the tangential resolution is spatially variant and depends on the aperture size of the detector. In particular, the tangential resolution improves with decreasing aperture size. However, using a detector with a smaller aperture reduces the sensitivity of the transducer. Thus, large-aperture detectors are widely preferred in circular scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Herein, we propose a novel deep learning architecture to counter the spatially variant tangential resolution in circular scanning PAT imaging systems. We used a fully dense U-Net based convolutional neural network architecture along with 9 residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution eightfold, without compromising the structural similarity or quality of the image.
Affiliation(s)
- Praveenbalaji Rajendran: Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
- Manojit Pramanik: Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
27
Sharma A, Pramanik M. Convolutional neural network for resolution enhancement and noise reduction in acoustic resolution photoacoustic microscopy. BIOMEDICAL OPTICS EXPRESS 2020; 11:6826-6839. [PMID: 33408964 PMCID: PMC7747888 DOI: 10.1364/boe.411257] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 10/24/2020] [Accepted: 10/24/2020] [Indexed: 05/03/2023]
Abstract
In acoustic resolution photoacoustic microscopy (AR-PAM), a high numerical aperture focused ultrasound transducer (UST) is used for deep tissue high resolution photoacoustic imaging. There is a significant degradation of lateral resolution in the out-of-focus region. Improvement in out-of-focus resolution without degrading the image quality remains a challenge. In this work, we propose a deep learning-based method to improve the resolution of AR-PAM images, especially at the out of focus plane. A modified fully dense U-Net based architecture was trained on simulated AR-PAM images. Applying the trained model on experimental images showed that the variation in resolution is ∼10% across the entire imaging depth (∼4 mm) in the deep learning-based method, compared to ∼180% variation in the original PAM images. Performance of the trained network on in vivo rat vasculature imaging further validated that noise-free, high resolution images can be obtained using this method.
Affiliation(s)
- Arunima Sharma: School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore
- Manojit Pramanik: School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore