1
Zhong Y, Liu Z, Zhang X, Liang Z, Chen W, Dai C, Qi L. Unsupervised adversarial neural network for enhancing vasculature in photoacoustic tomography images using optical coherence tomography angiography. Comput Med Imaging Graph 2024;117:102425. PMID: 39216343. DOI: 10.1016/j.compmedimag.2024.102425.
Abstract
Photoacoustic tomography (PAT) is a powerful imaging modality for visualizing tissue physiology and exogenous contrast agents. However, PAT faces challenges in visualizing deep-seated vascular structures due to light scattering, absorption, and reduced signal intensity with depth. Optical coherence tomography angiography (OCTA) offers high-contrast visualization of vascular networks, yet its imaging depth is limited to a millimeter scale. Herein, we propose OCPA-Net, a novel unsupervised deep learning method that utilizes the rich vascular features of OCTA to enhance PAT images. Trained on unpaired OCTA and PAT images, OCPA-Net incorporates a vessel-aware attention module to enhance deep-seated vessel details captured from OCTA. It leverages a domain-adversarial loss function to enforce structural consistency and a novel identity invariant loss to mitigate excessive image content generation. We validate the structural fidelity of OCPA-Net in simulation experiments, and then demonstrate its vascular enhancement performance in in vivo imaging experiments of tumor-bearing mice and contrast-enhanced pregnant mice. The results show the promise of our method for comprehensive vessel-related image analysis in preclinical research applications.
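The "identity invariant loss" mentioned in the abstract can be illustrated with a minimal sketch: in unpaired translation frameworks, an identity term penalizes the generator for altering an image that already belongs to the target domain, discouraging it from inventing content. The function below is an assumption-laden toy (a simple L1 identity term, with a stand-in `generator` callable), not the paper's exact loss.

```python
import numpy as np

def identity_loss(generator, target_batch):
    """Mean absolute error between G(y) and y for target-domain images y.

    Illustrative only: an identity term like this discourages the
    generator from fabricating content when its input already looks
    like the target domain. OCPA-Net's actual loss may differ in form.
    """
    out = np.stack([generator(y) for y in target_batch])
    return float(np.mean(np.abs(out - np.stack(target_batch))))
```

With a perfect identity mapping the loss is exactly zero; any deviation from the input raises it proportionally.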
Affiliation(s)
- Yutian Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Zhenyang Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China; Department of Radiotherapy, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, 210003, China
- Xiaoming Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Zhaoyong Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Cuixia Dai
- College of Science, Shanghai Institute of Technology, Shanghai, 201418, China
- Li Qi
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
2
Hong X, Tang F, Wang L, Chen J. Unsupervised deep learning enables real-time image registration of fast-scanning optical-resolution photoacoustic microscopy. Photoacoustics 2024;38:100632. PMID: 39100197. PMCID: PMC11296048. DOI: 10.1016/j.pacs.2024.100632.
Abstract
A fast scanner for optical-resolution photoacoustic microscopy is inherently vulnerable to perturbation, leading to severe image distortion and significant misalignment among multiple 2D or 3D images. Restoration and registration of these images are critical for accurately quantifying dynamic information in long-term imaging. However, traditional registration algorithms face a great challenge in computational throughput. Here, we develop an unsupervised deep-learning-based registration network to achieve real-time image restoration and registration. This method can correct artifacts from B-scan distortion and remove misalignment among adjacent and repetitive images in real time. Compared with conventional intensity-based registration algorithms, the throughput of the developed algorithm is improved 50-fold. After training, the new deep learning method performs better than conventional feature-based image registration algorithms. The results show that the proposed method can accurately restore and register the images of fast-scanning photoacoustic microscopy in real time, offering a powerful tool to extract dynamic vascular structural and functional information.
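The intensity-based registration baseline the network is compared against can be sketched in miniature: estimate the displacement between two scan lines by locating the peak of their cross-correlation. This 1-D integer-shift toy (function name and scope are illustrative, not the paper's implementation) shows the core operation that such conventional methods repeat at high computational cost.

```python
import numpy as np

def estimate_shift_1d(ref, moving):
    """Estimate the integer shift of `moving` relative to `ref` by
    cross-correlation of mean-removed profiles -- a toy stand-in for
    conventional intensity-based registration.
    """
    ref = ref - ref.mean()
    moving = moving - moving.mean()
    corr = np.correlate(moving, ref, mode="full")
    # Lag 0 sits at index len(ref) - 1 of the full correlation.
    return int(np.argmax(corr)) - (len(ref) - 1)
```

A profile shifted right by three samples yields an estimate of +3; subtracting the estimated shift re-aligns the scans.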
Affiliation(s)
- Xiaobin Hong
- School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
- Furong Tang
- School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
- Lidai Wang
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
- Jiangbo Chen
- School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
3
Dong W, Zhu C, Xie D, Zhang Y, Tao S, Tian C. Image restoration for ring-array photoacoustic tomography system based on blind spatially rotational deconvolution. Photoacoustics 2024;38:100607. PMID: 38665365. PMCID: PMC11044036. DOI: 10.1016/j.pacs.2024.100607.
Abstract
The ring-array photoacoustic tomography (PAT) system has been widely used in noninvasive biomedical imaging. However, the reconstructed image usually suffers from spatially rotational blur and streak artifacts due to non-ideal imaging conditions. To improve the quality of the reconstructed image, we propose the concept of spatially rotational convolution to formulate the image blur process, build a regularized restoration model accordingly, and design an alternating minimization algorithm, termed blind spatially rotational deconvolution, to obtain the restored image. We also present an image preprocessing method based on the proposed algorithm to remove the streak artifacts. Experiments on phantoms and in vivo biological tissues show that our approach significantly enhances the resolution of images obtained from the ring-array PAT system and effectively removes streak artifacts.
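For readers unfamiliar with deconvolution-based restoration, a classical Richardson-Lucy iteration in 1-D conveys the idea: alternately compare the blurred observation with the current estimate convolved by the point spread function (PSF), and update multiplicatively. This is a standard textbook algorithm, not the paper's blind spatially rotational method (which additionally estimates a spatially varying PSF).

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, n_iter=50):
    """Classical Richardson-Lucy deconvolution, 1-D illustration.

    Not the paper's algorithm -- just a minimal example of iterative
    deconvolution-based image restoration with a known PSF.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                      # adjoint of the blur operator
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

On a noiseless blurred spike, the iterations progressively re-concentrate energy at the true source location, sharpening the peak.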
Affiliation(s)
- Wende Dong
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Chenlong Zhu
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Dan Xie
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Yanli Zhang
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Shuyin Tao
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China
- Chao Tian
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
4
Zhong W, Li T, Hou S, Zhang H, Li Z, Wang G, Liu Q, Song X. Unsupervised disentanglement strategy for mitigating artifact in photoacoustic tomography under extremely sparse view. Photoacoustics 2024;38:100613. PMID: 38764521. PMCID: PMC11101706. DOI: 10.1016/j.pacs.2024.100613.
Abstract
Traditional sparse-view reconstruction methods for photoacoustic tomography (PAT) often produce significant artifacts. Here, a novel image-to-image translation method based on an unsupervised artifact disentanglement network (ADN), named PAT-ADN, is proposed to address this issue. The network is equipped with specialized encoders and decoders responsible for encoding and decoding the artifact and content components of unpaired images, respectively. The performance of PAT-ADN was evaluated using circular phantom data and in vivo animal data. The results demonstrate that PAT-ADN effectively removes artifacts. In particular, under an extremely sparse view (e.g., 16 projections), the structural similarity index and peak signal-to-noise ratio on in vivo data are improved by approximately 188% and 85%, respectively, compared with traditional reconstruction methods. PAT-ADN improves the imaging performance of PAT, opening up possibilities for its application in multiple domains.
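Since the abstract reports gains in structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), a compact sketch of the two metrics may help. Note the caveat: production SSIM implementations (e.g. scikit-image's `structural_similarity`) use a sliding Gaussian window; the single-window variant below is a simplification for illustration only.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """Single-window SSIM (no sliding window) -- a simplified variant."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM of exactly 1, while a uniform offset of 0.1 on a unit-range image gives a PSNR of 20 dB.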
Affiliation(s)
- Wenhua Zhong
- Nanchang University, School of Information Engineering, Nanchang, China
- Tianle Li
- Nanchang University, Jiluan Academy, Nanchang, China
- Shangkun Hou
- Nanchang University, School of Information Engineering, Nanchang, China
- Hongyu Zhang
- Nanchang University, School of Information Engineering, Nanchang, China
- Zilong Li
- Nanchang University, School of Information Engineering, Nanchang, China
- Guijun Wang
- Nanchang University, School of Information Engineering, Nanchang, China
- Qiegen Liu
- Nanchang University, School of Information Engineering, Nanchang, China
- Xianlin Song
- Nanchang University, School of Information Engineering, Nanchang, China
5
Zhang S, Miao J, Li LS. Challenges and advances in two-dimensional photoacoustic computed tomography: a review. J Biomed Opt 2024;29:070901. PMID: 39006312. PMCID: PMC11245175. DOI: 10.1117/1.jbo.29.7.070901.
Abstract
Significance: Photoacoustic computed tomography (PACT), a hybrid imaging modality combining optical excitation with acoustic detection, has rapidly emerged as a prominent biomedical imaging technique. Aim: We review the challenges and advances of PACT, including (1) limited view, (2) anisotropic resolution, (3) spatial aliasing, (4) acoustic heterogeneity (speed-of-sound mismatch), and (5) fluence correction for spectral unmixing. Approach: We performed a comprehensive literature review to summarize the key challenges in PACT toward practical applications and discuss various solutions. Results: There is a wide range of contributions from both industry and academia. Various approaches, including emerging deep learning methods, have been proposed to further improve the performance of PACT. Conclusions: We outline contemporary technologies aimed at tackling the challenges in PACT applications.
Affiliation(s)
- Shunyao Zhang
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Jingyi Miao
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Lei S. Li
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
6
Paul A, Mallidi S. U-Net enhanced real-time LED-based photoacoustic imaging. J Biophotonics 2024;17:e202300465. PMID: 38622811. PMCID: PMC11164633. DOI: 10.1002/jbio.202300465.
Abstract
Photoacoustic (PA) imaging is a hybrid imaging modality with good optical contrast and spatial resolution. Portable, cost-effective, small-footprint light-emitting diodes (LEDs) are rapidly becoming important PA optical sources. However, the key challenge faced by LED-based systems is the low light fluence, which is generally compensated by high frame averaging at the cost of a reduced acquisition frame rate. In this study, we present a simple deep learning U-Net framework that enhances the signal-to-noise ratio (SNR) and contrast of PA images obtained by averaging a low number of frames. The SNR increased approximately four-fold for both in-class in vitro phantoms (4.39 ± 2.55) and out-of-class in vivo models (4.27 ± 0.87). We also demonstrate the noise invariance of the network and discuss its downsides (blurry output and failure to reduce salt-and-pepper noise). Overall, the developed U-Net framework can provide a real-time image enhancement platform for clinically translatable, low-cost, and low-energy light source-based PA imaging systems.
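The frame-averaging trade-off the paper addresses follows from basic statistics: averaging N frames with independent noise improves SNR by roughly the square root of N, which is why low-fluence LED systems sacrifice frame rate. A minimal numerical sketch (signal, noise level, and frame count are all made up for illustration):

```python
import numpy as np

def snr(signal, noisy):
    """Amplitude SNR: RMS of the clean signal over RMS of the residual noise."""
    noise = noisy - signal
    return float(np.sqrt(np.mean(signal ** 2) / np.mean(noise ** 2)))

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
frames = signal + rng.normal(0.0, 0.5, size=(100, 256))  # 100 noisy acquisitions

snr_single = snr(signal, frames[0])            # one frame
snr_averaged = snr(signal, frames.mean(axis=0))  # 100-frame average, ~10x better
```

A learned denoiser aims to reach a similar SNR from far fewer frames, preserving frame rate.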
Affiliation(s)
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
7
Poimala J, Cox B, Hauptmann A. Compensating unknown speed of sound in learned fast 3D limited-view photoacoustic tomography. Photoacoustics 2024;37:100597. PMID: 38425677. PMCID: PMC10901832. DOI: 10.1016/j.pacs.2024.100597.
Abstract
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous. Moreover, the reconstruction quality depends on the correct choice of the constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data; in addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and show that, while a learned correction component can further improve reconstruction quality, a careful choice of uncertainties in the training data is the primary factor in overcoming unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
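A back-of-envelope calculation shows why a wrong constant SoS matters: the one-way time of flight for a source at distance d is t = d/c, so an SoS error Δc maps into a localization error of roughly d·Δc/c. The numbers below (1500 vs. 1540 m/s over 20 mm, typical soft-tissue values) are illustrative, not taken from the paper.

```python
def arrival_time_us(distance_mm, sos_m_s):
    """One-way acoustic time of flight in microseconds: t = d / c."""
    return distance_mm * 1e-3 / sos_m_s * 1e6

# Assumed 1500 m/s vs. a true 1540 m/s over a 20 mm path:
t_assumed = arrival_time_us(20.0, 1500.0)   # ~13.33 us
t_true = arrival_time_us(20.0, 1540.0)      # ~12.99 us
# The delay mismatch back-projects to a spatial misplacement of ~0.5 mm:
error_mm = (t_assumed - t_true) * 1e-6 * 1540.0 * 1e3
```

Errors of this magnitude blur and displace vessels in the reconstruction, which is what SoS-aware training-data uncertainty is meant to absorb.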
Affiliation(s)
- Jenni Poimala
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Ben Cox
- Department of Medical Physics and Biomedical Engineering, University College London, UK
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Department of Computer Science, University College London, UK
8
Chang KW, Karthikesh MS, Zhu Y, Hudson HM, Barbay S, Bundy D, Guggenmos DJ, Frost S, Nudo RJ, Wang X, Yang X. Photoacoustic imaging of squirrel monkey cortical responses induced by peripheral mechanical stimulation. J Biophotonics 2024;17:e202300347. PMID: 38171947. PMCID: PMC10961203. DOI: 10.1002/jbio.202300347.
Abstract
Non-human primates (NHPs) are crucial models for studies of neuronal activity. Emerging photoacoustic imaging modalities offer excellent tools for studying NHP brains with high sensitivity and high spatial resolution. In this research, a photoacoustic microscopy (PAM) device was used to provide label-free quantitative characterization of cerebral hemodynamic changes due to peripheral mechanical stimulation. A 5 × 5 mm area within the somatosensory cortex of an adult squirrel monkey was imaged. A deep, fully connected neural network was characterized and applied to the PAM images of the cortex to enhance the vessel structures after mechanical stimulation of the forelimb digits. The quality of the PAM images was improved significantly by the neural network while preserving the hemodynamic responses. The functional responses to mechanical stimulation were characterized based on the improved PAM images. This study demonstrates the capability of PAM combined with machine learning for functional imaging of the NHP brain.
Affiliation(s)
- Kai-Wei Chang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Yunhao Zhu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Heather M. Hudson
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Scott Barbay
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- David Bundy
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- David J. Guggenmos
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Shawn Frost
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Randolph J. Nudo
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Xueding Wang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Xinmai Yang
- Bioengineering Graduate Program and Institute for Bioengineering Research, University of Kansas, Lawrence, Kansas, 66045, United States
- Department of Mechanical Engineering, University of Kansas, Lawrence, Kansas, 66045, United States
9
Susmelj AK, Lafci B, Ozdemir F, Davoudi N, Deán-Ben XL, Perez-Cruz F, Razansky D. Signal domain adaptation network for limited-view optoacoustic tomography. Med Image Anal 2024;91:103012. PMID: 37922769. DOI: 10.1016/j.media.2023.103012.
Abstract
Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of ultrasound (US) waves generated by thermoelastic expansion following light absorption. The image quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns the efficient integration of OA and pulse-echo US measurements using the same transducer array. A common approach toward this hybridization consists of using standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual to artifact-free (ground truth) images. However, acquisition of ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly based on networks trained on simulated data. This approach, however, cannot transfer the learned features between the two domains, resulting in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of (i) a domain adaptation network to reduce the domain gap between simulated and experimental signals and (ii) a sides prediction network to complement the missing signals in limited-view OA datasets acquired from a human forearm by means of a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without the need for ground truth signals from full tomographic acquisitions.
Affiliation(s)
- Berkan Lafci
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Firat Ozdemir
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland
- Neda Davoudi
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Xosé Luís Deán-Ben
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Fernando Perez-Cruz
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland; Institute for Machine Learning, Department of Computer Science, ETH Zurich, Switzerland
- Daniel Razansky
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
10
Wang R, Zhu J, Meng Y, Wang X, Chen R, Wang K, Li C, Shi J. Adaptive machine learning method for photoacoustic computed tomography based on sparse array sensor data. Comput Methods Programs Biomed 2023;242:107822. PMID: 37832425. DOI: 10.1016/j.cmpb.2023.107822.
Abstract
BACKGROUND AND OBJECTIVE: Photoacoustic computed tomography (PACT) is a non-invasive biomedical imaging technology that has developed rapidly in recent decades and has shown particular potential for small animal studies and early diagnosis of human diseases. To obtain high-quality images, a photoacoustic imaging system needs a high-element-density detector array. However, in practical applications, due to cost limitations, manufacturing technology, and system requirements for miniaturization and robustness, it is challenging to achieve sufficient elements, and reconstructed images may suffer from artifacts. Unlike recent machine learning methods that recover high-quality images by removing distortions and artifacts, this paper proposes an adaptive machine learning method that first predicts and complements the photoacoustic sensor channel data from sparse array sampling and then reconstructs images with conventional reconstruction algorithms. METHODS: We develop an adaptive machine learning method to predict and complement the photoacoustic sensor channel data. The model consists of XGBoost and a neural network named SS-net. To handle data sets of different sizes and improve generalization, a tunable parameter is used to control the weights of the XGBoost and SS-net outputs. RESULTS: The proposed method achieved superior performance as demonstrated by simulation, phantom, and in vivo experimental results. Compared with linear interpolation, XGBoost, CAE, and U-net, the simulation results show that the SSIM value is increased by 12.83%, 6.78%, 21.46%, and 12.33%, respectively, and the median R2 on the in vivo data is increased by 34.4%, 8.1%, 28.6%, and 84.1%. CONCLUSIONS: This model provides a framework for predicting the missing photoacoustic sensor data on a sparse ring-shaped array for PACT imaging and achieves considerable improvements in reconstruction. Compared with linear interpolation and other deep learning methods, qualitatively and quantitatively, our proposed method effectively suppresses artifacts and improves image quality. An advantage of our method is that a large number of training images is not needed, since the training data come directly from the sensors. It has the potential to be applied to a wide range of photoacoustic imaging detector arrays for low-cost and user-friendly clinical applications.
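The abstract describes a tunable parameter controlling the weights of the XGBoost and SS-net outputs. The simplest reading of that is a convex combination of the two predictions, sketched below with stand-in arrays in place of real model outputs (the paper's actual combination rule may be more elaborate).

```python
import numpy as np

def blend_predictions(pred_tree, pred_net, alpha):
    """Convex combination of two channel-data predictions.

    alpha = 1 trusts the tree-based model alone; alpha = 0 the network
    alone. Illustrative of the tunable weighting described in the
    abstract, with hypothetical predictor outputs as inputs.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * np.asarray(pred_tree) + (1.0 - alpha) * np.asarray(pred_net)
```

For example, blending predictions [1, 1] and [3, 3] with alpha = 0.25 yields [2.5, 2.5]; tuning alpha per dataset size is one way to trade off the two learners' biases.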
Affiliation(s)
- Jing Zhu
- Zhejiang Lab, Hangzhou 311100, China
- Chiye Li
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
- Junhui Shi
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
11
Riksen JJM, Nikolaev AV, van Soest G. Photoacoustic imaging on its way toward clinical utility: a tutorial review focusing on practical application in medicine. J Biomed Opt 2023;28:121205. PMID: 37304059. PMCID: PMC10249868. DOI: 10.1117/1.jbo.28.12.121205.
Abstract
Significance: Photoacoustic imaging (PAI) enables the visualization of optical contrast with ultrasonic imaging. It is a field of intense research, with great promise for clinical application. Understanding the principles of PAI is important for engineering research and image interpretation. Aim: In this tutorial review, we lay out the imaging physics, instrumentation requirements, standardization, and some practical examples for (junior) researchers who have an interest in developing PAI systems and applications for clinical translation or applying PAI in clinical research. Approach: We discuss PAI principles and implementation in a shared context, emphasizing technical solutions that are amenable to broad clinical deployment, considering factors such as robustness, mobility, and cost in addition to image quality and quantification. Results: Photoacoustics, capitalizing on endogenous contrast or administered contrast agents that are approved for human use, yields highly informative images in clinical settings, which can support diagnosis and interventions in the future. Conclusion: PAI offers unique image contrast that has been demonstrated in a broad set of clinical scenarios. The transition of PAI from a "nice-to-have" to a "need-to-have" modality will require dedicated clinical studies that evaluate therapeutic decision-making based on PAI and consideration of the actual value for patients and clinicians, compared with the associated cost.
Affiliation(s)
- Jonas J. M. Riksen
- Erasmus University Medical Center, Department of Cardiology, Rotterdam, The Netherlands
- Anton V. Nikolaev
- Erasmus University Medical Center, Department of Cardiology, Rotterdam, The Netherlands
- Gijs van Soest
- Erasmus University Medical Center, Department of Cardiology, Rotterdam, The Netherlands
12
Kim M, Pelivanov I, O'Donnell M. Review of deep learning approaches for interleaved photoacoustic and ultrasound (PAUS) imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:1591-1606. PMID: 37910419. PMCID: PMC10788151. DOI: 10.1109/tuffc.2023.3329119.
Abstract
Photoacoustic (PA) imaging provides optical contrast at relatively large depths within the human body compared with other optical methods, at ultrasound (US) spatial resolution. By integrating real-time PA and US (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality, bringing the molecular sensitivity of optics to medical US imaging. For applications where the full capabilities of clinical US scanners must be maintained in PAUS, conventional limited-view, limited-bandwidth transducers must be used. This approach, however, cannot provide high-quality maps of PA sources, especially vascular structures. Deep learning (DL), using data-driven modeling with minimal human design, has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS imaging systems. The primary purpose of this article is to summarize the background and current status of DL applications in PAUS imaging. It also looks beyond current approaches to identify remaining challenges and opportunities for robust translation of PAUS technologies to the clinic.
13
Li J, Meng YC. Multikernel positional embedding convolutional neural network for photoacoustic reconstruction with sparse data. Appl Opt 2023; 62:8506-8516. [PMID: 38037963] [DOI: 10.1364/ao.504094]
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive imaging modality that merges the high contrast of optical imaging with the high resolution of ultrasonic imaging. Low-quality photoacoustic reconstruction from sparse data, caused by sparse spatial sampling and limited-view detection, is a major obstacle to the adoption of PAI for medical applications. Deep learning has been regarded as the most promising solution to this problem over the past decade. In this paper, we propose what we believe to be a novel architecture, named DPM-UNet, which consists of a U-Net backbone with an additional position-embedding block and two multi-kernel-size convolution blocks: a dilated dense block and a dilated multi-kernel-size convolution block. Our method was experimentally validated with both simulated and in vivo data, achieving an SSIM of 0.9824 and a PSNR of 33.2744 dB. Furthermore, the reconstructed images of our proposed method were compared with those obtained by other advanced methods. The results show that DPM-UNet has a clear advantage in PAI over the other methods with respect to both image quality and memory consumption.
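For reference, the PSNR and SSIM figures quoted above are standard full-reference image-quality metrics. A minimal NumPy sketch of both (using a single global SSIM window rather than the sliding Gaussian window most papers use, so values differ slightly from library implementations):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Global (single-window) SSIM; papers usually use a sliding Gaussian window."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = ref.astype(np.float64), img.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(round(psnr(clean, noisy), 1), round(ssim_global(clean, noisy), 3))
```

In practice, scikit-image's `peak_signal_noise_ratio` and `structural_similarity` provide the windowed reference implementations.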
14
Wang R, Zhang Z, Chen R, Yu X, Zhang H, Hu G, Liu Q, Song X. Noise-insensitive defocused signal and resolution enhancement for optical-resolution photoacoustic microscopy via deep learning. J Biophotonics 2023; 16:e202300149. [PMID: 37491832] [DOI: 10.1002/jbio.202300149]
Abstract
Optical-resolution photoacoustic microscopy suffers from a narrow depth of field and a significant deterioration in defocused signal intensity and spatial resolution. Here, a deep learning based method was proposed to enhance the defocused resolution and signal-to-noise ratio. A virtual optical-resolution photoacoustic microscope based on the k-Wave toolbox was used to generate training datasets with different noise levels. A fully dense U-Net was trained with randomly distributed sources to improve the quality of photoacoustic images. The results show that the PSNR of the defocused signal was enhanced by more than 1.2 times. An over 2.6-fold enhancement in lateral resolution and an over 3.4-fold enhancement in axial resolution of defocused regions were achieved. Large-volume, high-resolution imaging of blood vessels further verified that the proposed method can effectively overcome the deterioration in signal and spatial resolution caused by the narrow depth of field of optical-resolution photoacoustic microscopy.
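The resolution gains reported above are conventionally measured as the full width at half maximum (FWHM) of an intensity profile across a point or line target. The paper does not publish its measurement code, so the following is an illustrative sketch only:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D intensity profile, with linear
    interpolation between samples. dx is the sample spacing (e.g. um/pixel)."""
    p = np.asarray(profile, dtype=np.float64)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    if above.size == 0:
        return 0.0
    lo, hi = above[0], above[-1]
    # interpolate the half-maximum crossing on each flank
    left = lo if lo == 0 else lo - (p[lo] - half) / (p[lo] - p[lo - 1])
    right = hi if hi == len(p) - 1 else hi + (p[hi] - half) / (p[hi] - p[hi + 1])
    return (right - left) * dx

x = np.linspace(-10, 10, 201)           # 0.1-unit sampling
gauss = np.exp(-x ** 2 / (2 * 2.0 ** 2))  # sigma = 2, so FWHM = 2.355 * sigma
print(fwhm(gauss, dx=0.1))
```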
Affiliation(s)
- Rui Wang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ji luan Academy, Nanchang University, Nanchang, China
- Zhipeng Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ruiyi Chen
- School of Information Engineering, Nanchang University, Nanchang, China
- Xiaohai Yu
- Ji luan Academy, Nanchang University, Nanchang, China
- Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Gang Hu
- Jiangxi Medical College, Nanchang University, Nanchang, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
15
Zheng W, Zhang H, Huang C, Shijo V, Xu C, Xu W, Xia J. Deep Learning Enhanced Volumetric Photoacoustic Imaging of Vasculature in Human. Adv Sci (Weinh) 2023; 10:e2301277. [PMID: 37530209] [PMCID: PMC10582405] [DOI: 10.1002/advs.202301277]
Abstract
The development of high-performance image processing algorithms is a core area of photoacoustic tomography. While various deep learning based image processing techniques have been developed in the area, their applications in 3D imaging are still limited by challenges in computational cost and memory allocation. To address those limitations, this work applies a 3D fully dense (3DFD) U-Net to linear-array-based photoacoustic tomography and utilizes volumetric simulation and mixed precision training to increase efficiency and training size. Through numerical simulation, phantom imaging, and in vivo experiments, this work demonstrates that the trained network restores the true object size, reduces the noise level and artifacts, improves the contrast at deep regions, and reveals vessels subject to limited-view distortion. With these enhancements, the 3DFD U-Net successfully produces clear 3D vascular images of the palm, arms, breasts, and feet of human subjects. These enhanced vascular images offer improved capabilities for biometric identification, foot ulcer evaluation, and breast cancer imaging. These results indicate that the new algorithm will have a significant impact on preclinical and clinical photoacoustic tomography.
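Mixed precision training, which the authors use to fit volumetric training into memory, keeps float32 "master" weights while computing in float16, and scales the loss so small gradients survive the narrow float16 range. A framework-free toy sketch of that pattern on a linear model (the actual work would use a deep learning framework's automatic mixed precision, not NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
w32 = rng.standard_normal(8).astype(np.float32)     # float32 "master" weights
x = rng.standard_normal((32, 8)).astype(np.float32)
y = x @ np.arange(1.0, 9.0, dtype=np.float32)       # targets from true weights 1..8

loss_scale, lr = 1024.0, 0.1
for _ in range(300):
    w16 = w32.astype(np.float16)                    # half-precision working copy
    pred = (x.astype(np.float16) @ w16).astype(np.float32)  # fp16 forward pass
    # gradient of the *scaled* MSE loss, stored in fp16 so tiny values survive
    grad16 = (loss_scale * (2.0 / len(y)) * (x.T @ (pred - y))).astype(np.float16)
    w32 -= lr * (grad16.astype(np.float32) / loss_scale)    # unscale, fp32 update
loss = float(np.mean((x @ w32 - y) ** 2))
print(loss)
```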
Affiliation(s)
- Wenhan Zheng
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Huijuan Zhang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Chuqin Huang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Varun Shijo
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Chenhan Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Wenyao Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
16
Le TD, Min JJ, Lee C. Enhanced resolution and sensitivity acoustic-resolution photoacoustic microscopy with semi/unsupervised GANs. Sci Rep 2023; 13:13423. [PMID: 37591911] [PMCID: PMC10435476] [DOI: 10.1038/s41598-023-40583-x]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) enables visualization of biological tissues at depths of several millimeters with superior optical absorption contrast. However, the lateral resolution and sensitivity of AR-PAM are generally lower than those of optical-resolution PAM (OR-PAM) owing to the intrinsic acoustic focusing mechanism. Here, we demonstrate a computational strategy with two generative adversarial networks (GANs) to perform semi/unsupervised reconstruction with high resolution and sensitivity in AR-PAM while maintaining its imaging capability at enhanced depths. B-scan PAM images were prepared as paired (for the semi-supervised conditional GAN) and unpaired (for the unsupervised CycleGAN) groups to train label-free generation of reconstructed AR-PAM B-scan images. The semi/unsupervised GANs successfully improved resolution and sensitivity in phantom and in vivo mouse-ear tests with ground truth. We also confirmed that the GANs could enhance the resolution and sensitivity of deep tissues without ground truth.
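The unsupervised branch described above follows the CycleGAN recipe, whose core is a cycle-consistency loss on a round trip through both generators, commonly combined with an identity loss. A toy sketch of the loss terms, with simple stand-in functions in place of the paper's generator networks:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

# Stand-in "generators": G maps AR-domain images toward the OR-like domain,
# F maps back. Real CycleGAN generators are CNNs; these are placeholders.
G = lambda img: np.clip(img * 1.1, 0.0, 1.0)
F = lambda img: np.clip(img / 1.1, 0.0, 1.0)

rng = np.random.default_rng(0)
ar = rng.random((16, 16))    # unpaired AR-PAM sample
orr = rng.random((16, 16))   # unpaired OR-PAM sample

# Cycle-consistency: x -> G -> F -> x and y -> F -> G -> y should return home.
cycle_loss = l1(F(G(ar)), ar) + l1(G(F(orr)), orr)
# Identity: a target-domain image fed to its generator should barely change.
identity_loss = l1(G(orr), orr) + l1(F(ar), ar)
total = cycle_loss + 0.5 * identity_loss   # weight 0.5 is illustrative
print(cycle_loss, identity_loss, total)
```

In full training these terms are added to the two adversarial (discriminator) losses, which the sketch omits.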
Affiliation(s)
- Thanh Dat Le
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Jung-Joon Min
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
- Changho Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
17
Gao R, Chen T, Ren Y, Liu L, Chen N, Wong KK, Song L, Ma X, Liu C. Restoring the imaging quality of circular transducer array-based PACT using synthetic aperture focusing technique integrated with 2nd-derivative-based back projection scheme. Photoacoustics 2023; 32:100537. [PMID: 37559663] [PMCID: PMC10407438] [DOI: 10.1016/j.pacs.2023.100537]
Abstract
Circular-array-based photoacoustic computed tomography (CA-PACT) is a promising imaging tool owing to its broad acoustic detection coverage and fidelity. However, CA-PACT suffers from poor image quality outside the focal zone along both the elevational and lateral dimensions. To address this challenge, we propose a reconstruction strategy that integrates the synthetic aperture focusing technique (SAFT) with a 2nd-derivative-based back projection (2nd D-BP) algorithm to restore image quality outside the focal zone along both axes. The proposed solution is a two-phase reconstruction scheme. In the first phase, with the assistance of an acoustic lens, we designed a circular-array-based SAFT algorithm to restore the resolution and SNR along the elevational axis; the acoustic lens pushes the upper limit of the SAFT scheme to achieve enhanced elevational resolution. In the second phase, we propose a 2nd D-BP scheme to improve the lateral resolution and suppress noise in 3D imaging results. The 2nd D-BP strategy enhances image quality along the lateral dimension by up-converting the high spatial frequencies of the object's absorption pattern. We validated the effectiveness of the proposed strategy using both phantom and in vivo human experiments. The experimental results demonstrated superior image quality (7-fold enhancement in elevational resolution, 3-fold enhancement in lateral resolution, and an 11-dB increase in SNR). This strategy provides a new paradigm for PACT systems, as it significantly enhances spatial resolution and imaging contrast in both the elevational and lateral dimensions while maintaining a large focal zone.
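At its core, the SAFT step described above is a delay-and-sum operation: A-lines from neighboring scan positions are time-shifted by the geometric path length and summed coherently. A heavily simplified 1-D sketch under idealized assumptions (point detectors, constant sound speed, one-way delays, full aperture; the paper's virtual-detector and acoustic-lens geometry is more involved):

```python
import numpy as np

def saft_das(rf, pitch, fs, c, z):
    """Delay-and-sum SAFT for a single depth z.
    rf: (n_elements, n_samples) A-lines; pitch: element spacing (m);
    fs: sampling rate (Hz); c: sound speed (m/s)."""
    n_el, n_t = rf.shape
    xs = (np.arange(n_el) - (n_el - 1) / 2) * pitch   # element x-positions
    out = np.zeros(n_el)
    for i, xf in enumerate(xs):                       # focal point under each element
        d = np.sqrt((xs - xf) ** 2 + z ** 2)          # one-way path lengths
        idx = np.round(d / c * fs).astype(int)        # delays in samples
        valid = idx < n_t
        out[i] = rf[np.arange(n_el)[valid], idx[valid]].sum()
    return out

# Synthetic check: a point source at depth z0 directly under element 8.
fs, c, pitch, z0 = 40e6, 1500.0, 1e-4, 5e-3
n_el, n_t = 16, 400
xs = (np.arange(n_el) - (n_el - 1) / 2) * pitch
rf = np.zeros((n_el, n_t))
t_idx = np.round(np.sqrt((xs - xs[8]) ** 2 + z0 ** 2) / c * fs).astype(int)
rf[np.arange(n_el), t_idx] = 1.0
image_line = saft_das(rf, pitch, fs, c, z0)
print(int(np.argmax(image_line)))
```

The coherent sum peaks only at the focus where all delays line up, which is the mechanism SAFT exploits to restore out-of-focus resolution.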
Affiliation(s)
- Rongkang Gao
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tao Chen
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaguang Ren
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Liangjian Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ningbo Chen
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- The University of Hong Kong, Department of Electrical and Electronic Engineering, Hong Kong, China
- Kenneth K.Y. Wong
- The University of Hong Kong, Department of Electrical and Electronic Engineering, Hong Kong, China
- Liang Song
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaohui Ma
- The First Medical Center of Chinese PLA General Hospital, Department of Vascular and Endovascular Surgery, Beijing, China
- Chengbo Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
18
Menozzi L, Del Águila Á, Vu T, Ma C, Yang W, Yao J. Integrated Photoacoustic, Ultrasound, and Angiographic Tomography (PAUSAT) for Non-Invasive Whole-Brain Imaging of Ischemic Stroke. J Vis Exp 2023. [PMID: 37335115] [PMCID: PMC10411115] [DOI: 10.3791/65319]
Abstract
Presented here is an experimental ischemic stroke study using our newly developed noninvasive imaging system that integrates three acoustic-based imaging technologies: photoacoustic, ultrasound, and angiographic tomography (PAUSAT). Combining these three modalities helps acquire multi-spectral photoacoustic tomography (PAT) of the brain blood oxygenation, high-frequency ultrasound imaging of the brain tissue, and acoustic angiography of the cerebral blood perfusion. The multi-modal imaging platform allows the study of cerebral perfusion and oxygenation changes in the whole mouse brain after stroke. Two commonly used ischemic stroke models were evaluated: the permanent middle cerebral artery occlusion (pMCAO) model and the photothrombotic (PT) model. PAUSAT was used to image the same mouse brains before and after a stroke and quantitatively analyze both stroke models. This imaging system was able to clearly show the brain vascular changes after ischemic stroke, including significantly reduced blood perfusion and oxygenation in the stroke infarct region (ipsilateral) compared to the uninjured tissue (contralateral). The results were confirmed by both laser speckle contrast imaging and triphenyltetrazolium chloride (TTC) staining. Furthermore, stroke infarct volume in both stroke models was measured and validated by TTC staining as the ground truth. Through this study, we have demonstrated that PAUSAT can be a powerful tool in noninvasive and longitudinal preclinical studies of ischemic stroke.
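Once the infarct region has been segmented, quantifying infarct volume reduces to counting mask voxels and multiplying by the voxel volume. A minimal sketch with illustrative (not protocol) threshold and voxel size:

```python
import numpy as np

def infarct_volume_mm3(mask, voxel_size_mm=(0.1, 0.1, 0.1)):
    """Volume of a boolean segmentation mask in mm^3."""
    return float(mask.sum()) * float(np.prod(voxel_size_mm))

# Toy volume: a perfusion map with a hypoperfused cube as the "infarct".
perfusion = np.ones((50, 50, 50))
perfusion[10:20, 10:20, 10:20] = 0.2      # 10x10x10 low-perfusion region
mask = perfusion < 0.5                    # illustrative threshold
vol = infarct_volume_mm3(mask, (0.1, 0.1, 0.1))
print(vol)                                # 1000 voxels x 0.001 mm^3 each
```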
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University
- Ángela Del Águila
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine
- Tri Vu
- Department of Biomedical Engineering, Duke University
- Chenshuo Ma
- Department of Biomedical Engineering, Duke University
- Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine
- Junjie Yao
- Department of Biomedical Engineering, Duke University
19
Hacker L, Brown EL, Lefebvre TL, Sweeney PW, Bohndiek SE. Performance evaluation of mesoscopic photoacoustic imaging. Photoacoustics 2023; 31:100505. [PMID: 37214427] [PMCID: PMC10199419] [DOI: 10.1016/j.pacs.2023.100505]
Abstract
Photoacoustic mesoscopy visualises vascular architecture at high resolution up to ~3 mm depth. Despite promise in preclinical and clinical imaging studies, with applications in oncology and dermatology, the accuracy and precision of photoacoustic mesoscopy are not well established. Here, we evaluate a commercial photoacoustic mesoscopy system for imaging vascular structures. Typical artefact types are first highlighted, and limitations due to non-isotropic illumination and detection are evaluated with respect to rotation, angularity, and depth of the target. Then, using tailored phantoms and mouse models, we investigate system precision, showing coefficients of variation (COV) between repeated scans [short term (1 h): COV = 1.2%; long term (25 days): COV = 9.6%], from target repositioning (without: COV = 1.2%, with: COV = 4.1%), and from varying in vivo user experience (experienced: COV = 15.9%, inexperienced: COV = 20.2%). Our findings show the robustness of the technique, but also underscore general challenges of limited-view photoacoustic systems in accurately imaging vessel-like structures, thereby guiding users when interpreting biologically relevant information.
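The coefficient of variation (COV) used above is simply the standard deviation of repeated measurements divided by their mean, reported as a percentage; because it is scale-free, it allows the short-term, long-term, and inter-user comparisons quoted. A minimal sketch (the readout values are invented for illustration):

```python
import numpy as np

def cov_percent(measurements):
    """Coefficient of variation (%): sample standard deviation over mean."""
    m = np.asarray(measurements, dtype=np.float64)
    return float(100.0 * m.std(ddof=1) / m.mean())

# e.g. a vessel-density readout from repeated scans of the same phantom
repeat_scans = [0.52, 0.53, 0.51, 0.52, 0.54]
print(round(cov_percent(repeat_scans), 2))
```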
Affiliation(s)
- Lina Hacker
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
- Emma L. Brown
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
- Thierry L. Lefebvre
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
- Paul W. Sweeney
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
- Sarah E. Bohndiek
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge CB2 0RE, UK
20
Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. Photoacoustics 2023; 30:100484. [PMID: 37095888] [PMCID: PMC10121479] [DOI: 10.1016/j.pacs.2023.100484]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require complex handcrafted priors to achieve good performance or lack the interpretability and flexibility needed to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, we propose an algorithm integrating learning-based and model-based methods, so that a single framework can adapt to various distortion functions. Vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be plugged directly into a model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on a physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used to enhance simulated and in vivo AR-PAM images, collectively demonstrating the effectiveness of the proposed method. Quantitatively, the PSNR and SSIM values of the proposed algorithm are the best in all three simulation scenarios, and in an in vivo test the SNR and CNR values rose from 6.34 and 5.79 to 35.37 and 29.66, respectively.
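In a plug-and-play scheme like the one above, the prior step of an iterative model-based restoration is replaced by a call to a denoiser. A toy NumPy sketch of the alternating pattern, with soft-thresholding standing in for the paper's trained CNN prior and a fixed separable kernel standing in for the depth-dependent PSF:

```python
import numpy as np

def blur(img, psf1d):
    """Separable 'same' convolution, standing in for the degradation model A."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, psf1d, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, psf1d, mode="same"), 0, tmp)

def shrink(img, lam):
    """Stand-in prior step: soft-thresholding (a sparsity prior). In the paper
    this slot would be filled by the trained denoising CNN."""
    return np.sign(img) * np.maximum(np.abs(img) - lam, 0.0)

def pnp_restore(y, psf1d, steps=60, tau=0.5, lam=0.01):
    """Alternate a data-fidelity gradient step with the plug-in prior step."""
    x = y.copy()
    for _ in range(steps):
        # gradient of 0.5*||A x - y||^2; the symmetric kernel is
        # (up to boundary effects) self-adjoint, so A^T is again blur()
        x = x - tau * blur(blur(x, psf1d) - y, psf1d)
        x = shrink(x, lam)                    # the "denoiser" slot
    return x

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8:24, 15:17] = 1.0                      # a thin vertical "vessel"
psf = np.array([0.25, 0.5, 0.25])
y = blur(truth, psf) + 0.01 * rng.standard_normal((32, 32))
x_hat = pnp_restore(y, psf)
err_in = float(np.mean((y - truth) ** 2))
err_out = float(np.mean((x_hat - truth) ** 2))
print(err_in, err_out)
```

Swapping `shrink` for a learned denoiser changes nothing structurally, which is exactly the flexibility the PnP formulation buys.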
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
- Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
- Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
- Yuanjin Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Corresponding author.
21
Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomed Opt Express 2023; 14:1777-1799. [PMID: 37078052] [PMCID: PMC10110324] [DOI: 10.1364/boe.483081]
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
22
Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. Photoacoustics 2023; 29:100442. [PMID: 36589516] [PMCID: PMC9798177] [DOI: 10.1016/j.pacs.2022.100442]
Abstract
Standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can introduce artifacts due to interference or an ill-posed acquisition setup. Recently, deep learning has been used to reconstruct PA images under such ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct PA images from limited-view data. Cross-domain features from the limited-view position-wise data and the reconstructed image are fused through backtracked supervision. Quarter-view position-wise data (32 channels) are fed into the model, which generates the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain artifacts by sufficiently manipulating the superposed data. The experimental results demonstrate superior performance, with quantitative evaluations showing that our method outperformed the ground truth on some metrics, with improvements of 135% (SSIM, simulation) and 40% (gCNR, in vivo).
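gCNR (generalized contrast-to-noise ratio), one of the metrics quoted above, measures the histogram overlap between target and background pixels: 1 means perfectly separable regions, 0 means indistinguishable distributions. A minimal sketch of the usual definition:

```python
import numpy as np

def gcnr(target, background, bins=256):
    """gCNR = 1 - overlap of the two regions' intensity histograms."""
    lo = float(min(target.min(), background.min()))
    hi = float(max(target.max(), background.max()))
    ht, _ = np.histogram(target, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(background, bins=bins, range=(lo, hi))
    # normalize counts to probabilities, then sum the bin-wise overlap
    return float(1.0 - np.minimum(ht / ht.sum(), hb / hb.sum()).sum())

rng = np.random.default_rng(0)
vessel = 0.8 + 0.05 * rng.standard_normal(5000)   # bright target pixels
tissue = 0.2 + 0.05 * rng.standard_normal(5000)   # dark background pixels
print(gcnr(vessel, tissue))
```

Unlike CNR, gCNR is invariant to monotonic grayscale transforms, which is why it is popular for comparing beamformers and learned reconstructions.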
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
23
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414] [DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
24
Menozzi L, del Águila Á, Vu T, Ma C, Yang W, Yao J. Three-dimensional non-invasive brain imaging of ischemic stroke by integrated photoacoustic, ultrasound and angiographic tomography (PAUSAT). Photoacoustics 2023; 29:100444. [PMID: 36620854] [PMCID: PMC9813577] [DOI: 10.1016/j.pacs.2022.100444]
Abstract
We present an ischemic stroke study using our newly-developed PAUSAT system that integrates photoacoustic computed tomography (PACT), high-frequency ultrasound imaging, and acoustic angiographic tomography. PAUSAT is capable of three-dimensional (3D) imaging of the brain morphology, blood perfusion, and blood oxygenation. Using PAUSAT, we studied the hemodynamic changes in the whole mouse brain induced by two common ischemic stroke models: the permanent middle cerebral artery occlusion (pMCAO) model and the photothrombotic (PT) model. We imaged the same mouse brains before and after stroke, and quantitatively compared the two stroke models. We observed clear hemodynamic changes after ischemic stroke, including reduced blood perfusion and oxygenation. Such changes were spatially heterogenous. We also quantified the tissue infarct volume in both stroke models. The PAUSAT measurements were validated by laser speckle imaging and histology. Our results have collectively demonstrated that PAUSAT can be a valuable tool for non-invasive longitudinal studies of neurological diseases at the whole-brain scale.
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham 27708, NC, USA
- Ángela del Águila
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine, Durham 27710, NC, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham 27708, NC, USA
- Chenshuo Ma
- Department of Biomedical Engineering, Duke University, Durham 27708, NC, USA
- Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine, Durham 27710, NC, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham 27708, NC, USA
25
Zheng S, Jiejie D, Yue Y, Qi M, Huifeng S. A Deep Learning Method for Motion Artifact Correction in Intravascular Photoacoustic Image Sequence. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:66-78. [PMID: 36037455 DOI: 10.1109/tmi.2022.3202910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/10/2023]
Abstract
In vivo application of intravascular photoacoustic (IVPA) imaging of the coronary arteries is hampered by motion artifacts associated with the cardiac cycle. Gating is a common strategy to mitigate motion artifacts, but because only one frame per cycle is retained, a large amount of diagnostically valuable information may be lost. In this work, we present a deep learning-based method for directly correcting motion artifacts in non-gated IVPA pullback sequences. The raw signal frames are classified into dynamic and static frames by clustering. A neural network named Motion Artifact Correction (MAC)-Net is then designed to correct motion in the dynamic frames. Given the lack of ground-truth information on the underlying dynamics of coronary arteries, we trained and tested the network using a computer-generated dataset. The trained network can directly correct motion in successive frames while preserving the original structures and without discarding any frames. The improvement in the visual quality of the longitudinal view is demonstrated through quantitative evaluation of the inter-frame dissimilarity. The comparison results validate that the motion-suppression ability of our method is comparable to gating and image registration-based non-learning methods, while maintaining the integrity of the pullbacks without image preprocessing. Experimental results from in vivo intravascular ultrasound and optical coherence tomography pullbacks validate the feasibility of our method in the in vivo intracoronary imaging scenario.
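The abstract describes a first step of separating static from dynamic pullback frames by clustering. A minimal sketch of such a step, assuming a mean-absolute-difference dissimilarity to the previous frame and a simple 1-D 2-means clustering (illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def split_static_dynamic(frames, n_iter=20):
    """Cluster frames into static-like (0) and dynamic-like (1) groups by
    1-D 2-means on each frame's mean absolute difference to its predecessor.
    Both the dissimilarity measure and the clustering are assumptions."""
    frames = np.asarray(frames, dtype=float)
    d = np.zeros(len(frames))
    # dissimilarity of each frame to the previous one (first frame: 0)
    d[1:] = np.abs(np.diff(frames, axis=0)).mean(axis=tuple(range(1, frames.ndim)))
    # 2-means on the scalar dissimilarity, initialized at the extremes
    c = np.array([d.min(), d.max()], dtype=float)
    for _ in range(n_iter):
        labels = (np.abs(d - c[0]) > np.abs(d - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = d[labels == k].mean()
    return labels
```

Frames flagged as dynamic would then be passed to the correction network, while static frames pass through unchanged.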
26
Menozzi L, Yang W, Feng W, Yao J. Sound out the impaired perfusion: Photoacoustic imaging in preclinical ischemic stroke. Front Neurosci 2022; 16:1055552. [PMID: 36532279 PMCID: PMC9751426 DOI: 10.3389/fnins.2022.1055552] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 09/27/2022] [Accepted: 11/17/2022] [Indexed: 09/19/2023] Open
Abstract
Acoustically detecting the optical absorption contrast, photoacoustic imaging (PAI) is a highly versatile imaging modality that can provide anatomical, functional, molecular, and metabolic information of biological tissues. PAI is highly scalable and can probe the same biological process at various length scales, ranging from single cells (microscopic) to the whole organ (macroscopic). Using hemoglobin as the endogenous contrast, PAI is capable of label-free imaging of blood vessels in the brain and of mapping hemodynamic functions such as blood oxygenation and blood flow. These imaging merits make PAI a great tool for studying ischemic stroke, particularly for probing hemodynamic changes and impaired cerebral blood perfusion as a consequence of stroke. In this narrative review, we aim to summarize the scientific progress of the past decade in using PAI to monitor cerebral blood vessel impairment and restoration after ischemic stroke, mostly in the preclinical setting. We also outline and discuss the major technological barriers and challenges that need to be overcome so that PAI can play a more significant role in preclinical stroke research and, more importantly, accelerate its translation into a useful clinical diagnosis and management tool for human stroke.
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University, Durham, NC, United States
- Wuwei Feng
- Department of Neurology, Duke University School of Medicine, Durham, NC, United States
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
27
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3636-3648. [PMID: 35849667 DOI: 10.1109/tmi.2022.3192072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 06/15/2023]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve deeper imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality towards that of OR-PAM, which specifically includes the enhancement of imaging resolution, restoration of micro-vasculature, and reduction of artifacts. To address this issue, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs, which are synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data; nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under the generative adversarial network (GAN) framework is therefore proposed to drive the enhanced image's manifold towards that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement result is obtained, which is also supported by quantitative analysis. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values are significantly increased from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework has enabled the enhancement target with only a limited amount of matched in vivo AR-OR imaging data.
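The PSNR and SSIM figures quoted in this abstract are standard image-quality metrics. A minimal sketch of how they can be computed, using a single global window for SSIM rather than the usual sliding-window average (an illustrative simplification):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM; the full metric averages this statistic
    over local sliding windows. c1/c2 use the conventional 0.01/0.03 factors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In practice, library implementations (e.g., sliding-window SSIM) should be preferred for reporting results.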
28
Shahid H, Khalid A, Yue Y, Liu X, Ta D. Feasibility of a Generative Adversarial Network for Artifact Removal in Experimental Photoacoustic Imaging. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:1628-1643. [PMID: 35660105 DOI: 10.1016/j.ultrasmedbio.2022.04.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/08/2021] [Revised: 03/06/2022] [Accepted: 04/16/2022] [Indexed: 06/15/2023]
Abstract
Photoacoustic tomography (PAT) reconstruction is attracting rapidly growing interest among biomedical researchers because of its possible transition from the laboratory to clinical prominence. Nonetheless, the PAT inverse problem has yet to achieve an optimal solution for rapid and precise reconstruction under practical constraints. Specifically, sparse sampling and random noise are the main impediments to attaining accuracy while supporting rapid PAT reconstruction: they introduce undersampling artifacts that deteriorate the reconstruction, so previous achievements in fast image formation limit the modality's use in clinical settings. Delving into the problem, here we explore a deep learning-based generative adversarial network (GAN) to improve image quality by denoising and removing these artifacts. The specially designed attributes and the manner of optimizing the problem, such as accommodating dataset limitations and providing stable training performance, constitute the main motivation behind employing a GAN. Moreover, using a U-Net variant as the generator network offers robust performance in terms of quality and computational cost, which is further validated with detailed quantitative and qualitative analysis. The quantitative evaluation (structural similarity index = 0.980 ± 0.043 and peak signal-to-noise ratio = 31 ± 0.002 dB) shows that the proposed solution provides high-resolution output images even when trained with a low-quality dataset.
Affiliation(s)
- Husnain Shahid
- Center for Biomedical Engineering, Fudan University, China
- Adnan Khalid
- School of Information and Communication Engineering, Tianjin University, China
- Yaoting Yue
- Center for Biomedical Engineering, Fudan University, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Dean Ta
- Center for Biomedical Engineering, Fudan University, China; Academy for Engineering and Technology, Fudan University, Shanghai, China
29
Gao Y, Xu W, Chen Y, Xie W, Cheng Q. Deep Learning-Based Photoacoustic Imaging of Vascular Network Through Thick Porous Media. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2191-2204. [PMID: 35294347 DOI: 10.1109/tmi.2022.3158474] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 06/14/2023]
Abstract
Photoacoustic imaging is a promising approach used to realize in vivo transcranial cerebral vascular imaging. However, the strong attenuation and distortion of the photoacoustic wave caused by the thick porous skull greatly affect the imaging quality. In this study, we developed a convolutional neural network based on U-Net to extract the effective photoacoustic information hidden in the speckle patterns obtained from vascular network image datasets under porous media. Our simulation and experimental results show that the proposed neural network can learn the mapping relationship between the speckle pattern and the target, and extract the photoacoustic signals of vessels submerged in noise to reconstruct high-quality images of the vessels with a sharp outline and a clean background. Compared with traditional photoacoustic reconstruction methods, the proposed deep learning-based reconstruction algorithm achieves better performance, with a lower mean absolute error, higher structural similarity, and higher peak signal-to-noise ratio of reconstructed images. In conclusion, the proposed neural network can effectively extract valid information from highly blurred speckle patterns for the rapid reconstruction of target images, which offers promising applications in transcranial photoacoustic imaging.
30
Zhang H, Bo W, Wang D, DiSpirito A, Huang C, Nyayapathi N, Zheng E, Vu T, Gong Y, Yao J, Xu W, Xia J. Deep-E: A Fully-Dense Neural Network for Improving the Elevation Resolution in Linear-Array-Based Photoacoustic Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1279-1288. [PMID: 34928793 PMCID: PMC9161237 DOI: 10.1109/tmi.2021.3137060] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Indexed: 05/02/2023]
Abstract
Linear-array-based photoacoustic tomography has shown broad applications in biomedical research and preclinical imaging. However, the elevational resolution of a linear array is fundamentally limited due to the weak cylindrical focus of the transducer element. While several methods have been proposed to address this issue, they have all handled the problem in a less time-efficient way. In this work, we propose to improve the elevational resolution of a linear array through Deep-E, a fully dense neural network based on U-Net. Deep-E exhibits high computational efficiency by converting the three-dimensional problem into a two-dimensional one: it focuses on training a model to enhance the resolution along the elevational direction using only the 2D slices in the axial-elevational plane, thereby reducing the computational burden in simulation and training. We demonstrated the efficacy of Deep-E using various datasets, including simulation, phantom, and human subject results. We found that Deep-E could improve elevational resolution by at least four times and recover the object's true size. We envision that Deep-E will have a significant impact in linear-array-based photoacoustic imaging studies by providing high-speed and high-resolution image enhancement.
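The 3-D-to-2-D reduction described in this abstract can be sketched as slicing the volume into 2-D axial-elevational planes, enhancing each plane independently with a 2-D model, and restacking. The axis ordering below is an assumption, and `enhance_2d` stands in for the trained network:

```python
import numpy as np

def enhance_elevation(volume, enhance_2d):
    """Apply a 2-D enhancement model slice-by-slice, Deep-E style.
    `volume` is assumed ordered (elevation, lateral, axial); each
    axial-elevational plane (fixed lateral index) is processed on its own."""
    vol = np.asarray(volume, dtype=float)
    planes = np.moveaxis(vol, 1, 0)              # (lateral, elevation, axial)
    enhanced = np.stack([enhance_2d(p) for p in planes])
    return np.moveaxis(enhanced, 0, 1)           # restore original ordering
```

The design benefit claimed by the paper is that a 2-D model trained on such slices is far cheaper to simulate and train than a full 3-D network.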
31
Sun Z, Sun H. Image reconstruction for endoscopic photoacoustic tomography including effects of detector responses. Exp Biol Med (Maywood) 2022; 247:881-897. [PMID: 35232296 DOI: 10.1177/15353702221079570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022] Open
Abstract
In photoacoustic tomography (PAT), conventional image reconstruction methods are generally based on the assumption of an ideal point-like ultrasonic detector. This assumption is appropriate when the receiving surface of the detector is sufficiently small and/or the distance between the imaged object and the detector is large enough. However, it does not hold in endoscopic applications of PAT. In this study, we propose a model-based image reconstruction method for endoscopic photoacoustic tomography (EPAT), considering the effect of detector responses on image quality. We construct a forward model to physically describe the imaging process of EPAT, including the generation of the initial pressure due to optical absorption and thermoelastic expansion, the propagation of photoacoustic waves in tissues, and the acoustic measurement. The model outputs the theoretical sampling voltage signal, which is the response of the ultrasonic detector to the acoustic pressure reaching its receiving surface. The images representing the distribution map of the optical absorption energy density on cross-sections of the imaged luminal structures are reconstructed from the sampling voltage signals output by the detector through iterative inversion of the forward model. Compared with the conventional approaches based on back-projection and other imaging models, our method improved the quality and spatial resolution of the resulting images.
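The iterative inversion of a forward model described above can be illustrated with a generic Landweber (gradient-descent) scheme. Here `A` and `At` stand in for the paper's EPAT forward operator (including the detector response) and its adjoint; both, along with the step size, are assumptions of this sketch rather than the authors' implementation:

```python
import numpy as np

def landweber(measured, A, At, x0, step=0.4, iters=200):
    """Landweber iteration: x <- x - step * At(A x - y).
    A/At are any linear operator and its adjoint; convergence requires
    step < 2 / ||A||^2."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - step * At(A(x) - measured)
    return x
```

Because the forward model here outputs detector voltages rather than ideal point-pressures, the adjoint must also account for the detector's spatial and electrical impulse responses.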
Affiliation(s)
- Zheng Sun
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, China; Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Baoding 071003, China
- Huifeng Sun
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding 071003, China; Hebei Key Laboratory of Power Internet of Things Technology, North China Electric Power University, Baoding 071003, China
32
Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. PHOTOACOUSTICS 2022; 25:100314. [PMID: 34824976 PMCID: PMC8604673 DOI: 10.1016/j.pacs.2021.100314] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 08/04/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 05/18/2023]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) enjoys superior spatial resolution and has received intense attention in recent years. The application, however, has been limited to shallow depths because of strong scattering of light in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by using deep learning enabled image transformation on blurry living mouse vascular images that were acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the imaging lateral resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM (4.7 µm). The feasibility of the network was evaluated with living mouse ear data, producing superior microvasculature images that outperforms blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, it was shown experimentally that the deep-learning method can retain high resolution at tissue depths beyond one optical transport mean free path. Whilst it can be further improved, the proposed method provides new horizons to expand the scope of OR-PAM towards deep-tissue imaging and wide applications in biomedicine.
Affiliation(s)
- Shengfu Cheng
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Huanhao Li
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Lidai Wang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Puxiang Lai
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
33
An Intelligent Music Production Technology Based on Generation Confrontation Mechanism. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5083146. [PMID: 35186065 PMCID: PMC8853763 DOI: 10.1155/2022/5083146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/12/2021] [Accepted: 12/22/2021] [Indexed: 12/04/2022]
Abstract
In recent years, deep neural networks have matured considerably, and since the introduction of the generative adversarial mechanism, academia has produced many results in image, video, and text generation. Scholars have therefore begun making similar attempts in music generation. Building on existing theory and prior work, this paper studies music production and proposes an intelligent music production technology based on the generative adversarial mechanism to enrich research in the field of computer music generation. Taking GAN-based music generation as its topic, the paper makes the following contributions. After examining existing GAN-based music generation models, it proposes a temporal-structure model for maintaining musical coherence, avoiding manual input during generation while preserving the interdependence between tracks. It also studies and implements a method for generating discrete multi-track music events, including a multi-track correlation model and discretization. The Lakh MIDI dataset is studied and preprocessed into an LMD piano-roll dataset, which is used in the MCT-GAN music generation experiments. For multi-track music generation with GANs, three models are analyzed and a CT-GAN-based multi-track generation method is proposed, improving on existing GAN-based music generation models. Finally, the outputs of MCT-GAN are compared with those of Muse-GAN to reflect the improvement. Twenty listeners were asked to distinguish the generated music from real music, and the evaluation results were analyzed; the evaluation concludes that CT-GAN-based multi-track music generation improves on prior work.
34
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
35
Refaee A, Kelly CJ, Moradi H, Salcudean SE. Denoising of pre-beamformed photoacoustic data using generative adversarial networks. BIOMEDICAL OPTICS EXPRESS 2021; 12:6184-6204. [PMID: 34745729 PMCID: PMC8547982 DOI: 10.1364/boe.431997] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 05/19/2021] [Revised: 07/27/2021] [Accepted: 08/25/2021] [Indexed: 05/19/2023]
Abstract
We have trained generative adversarial networks (GANs) to mimic both the effect of temporal averaging and of singular value decomposition (SVD) denoising. This effectively removes noise and acquisition artifacts and improves signal-to-noise ratio (SNR) in both the radio-frequency (RF) data and in the corresponding photoacoustic reconstructions. The method allows a single frame acquisition instead of averaging multiple frames, reducing scan time and total laser dose significantly. We have tested this method on experimental data, and quantified the improvement over using either SVD denoising or frame averaging individually for both the RF data and the reconstructed images. We achieve a mean squared error (MSE) of 0.05%, structural similarity index measure (SSIM) of 0.78, and a feature similarity index measure (FSIM) of 0.85 compared to our ground-truth RF results. In the subsequent reconstructions using the denoised data we achieve a MSE of 0.05%, SSIM of 0.80, and a FSIM of 0.80 compared to our ground-truth reconstructions.
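The SVD denoising that the GANs are trained to mimic can be sketched as a rank truncation of the Casorati (frames × pixels) matrix built from a stack of repeated acquisitions. The rank choice is data-dependent; this is an illustrative stand-in, not the paper's pipeline:

```python
import numpy as np

def svd_denoise(frames, rank):
    """Keep only the `rank` largest singular components of a frame stack.
    Rows of the Casorati matrix are flattened frames; truncating the small
    singular values suppresses noise that is uncorrelated across frames."""
    f = np.asarray(frames, dtype=float)
    casorati = f.reshape(f.shape[0], -1)          # (n_frames, n_pixels)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s[rank:] = 0.0                                 # truncate small components
    return (u @ np.diag(s) @ vt).reshape(f.shape)
```

The appeal of the learned approach in the abstract is that, once trained, it reproduces a comparable denoising effect from a single frame, without needing the multi-frame stack this sketch requires.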
Affiliation(s)
- Amir Refaee
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
- Equal Authorship Contribution
- Corey J. Kelly
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
- Equal Authorship Contribution
- Hamid Moradi
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
- Septimiu E. Salcudean
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
36
Lu M, Liu X, Liu C, Li B, Gu W, Jiang J, Ta D. Artifact removal in photoacoustic tomography with an unsupervised method. BIOMEDICAL OPTICS EXPRESS 2021; 12:6284-6299. [PMID: 34745737 PMCID: PMC8548009 DOI: 10.1364/boe.434172] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 06/14/2021] [Revised: 08/13/2021] [Accepted: 09/07/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that can realize high-contrast imaging at acoustic penetration depths. Recently, deep learning (DL) methods have also been successfully applied to PAT to improve image reconstruction quality. However, current DL-based PAT methods follow a supervised learning strategy, and their imaging performance depends on the availability of ground-truth data. To overcome this limitation, this work introduces a new image-domain transformation method based on the cyclic generative adversarial network (CycleGAN), termed PA-GAN, which removes artifacts in PAT images caused by limited-view measurement data in an unsupervised manner. A series of data from phantom and in vivo experiments are used to evaluate the performance of the proposed PA-GAN. The experimental results show that PA-GAN performs well in removing artifacts from photoacoustic tomographic images. In particular, when dealing with extremely sparse measurement data (e.g., 8 projections in circle phantom experiments), higher imaging performance is achieved by the proposed unsupervised PA-GAN, with an improvement of ∼14% in structural similarity (SSIM) and ∼66% in peak signal-to-noise ratio (PSNR), compared with the supervised-learning U-Net method. With an increasing number of projections (e.g., 128 projections), U-Net, and especially FD U-Net, shows a slight improvement in artifact removal capability in terms of SSIM and PSNR. Furthermore, the computational time of PA-GAN and U-Net is similar (∼60 ms/frame) once the network is trained. More importantly, PA-GAN is more flexible than U-Net in that the model can be effectively trained with unpaired data. As a result, PA-GAN makes it possible to implement PAT with higher flexibility without compromising imaging performance.
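A CycleGAN-style objective, as used by unpaired methods like the one in this abstract, combines cycle-consistency and identity terms with the adversarial losses. A minimal sketch of those two non-adversarial terms follows; the loss weights and the toy callables are illustrative assumptions, not PA-GAN's exact configuration:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))))

def cycle_and_identity_loss(G, F, x, y, lam_cyc=10.0, lam_id=5.0):
    """G maps artifact-domain X -> clean-domain Y; F maps Y -> X.
    Cycle term: translating there and back should recover the input.
    Identity term: a generator fed an in-domain image should not change it."""
    cyc = l1(F(G(x)), x) + l1(G(F(y)), y)
    ident = l1(G(y), y) + l1(F(x), x)
    return lam_cyc * cyc + lam_id * ident
```

It is the cycle term that lets such models train on unpaired artifact-laden and artifact-free images, which is the flexibility the abstract emphasizes over supervised U-Net training.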
Affiliation(s)
- Mengyang Lu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- State Key Laboratory of Medical Neurobiology, Institutes of Brain Science, Fudan University, Shanghai 200433, China
- Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Wenting Gu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiehui Jiang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, China
37
Hsu KT, Guan S, Chitnis PV. Comparing Deep Learning Frameworks for Photoacoustic Tomography Image Reconstruction. PHOTOACOUSTICS 2021; 23:100271. [PMID: 34094851 PMCID: PMC8165448 DOI: 10.1016/j.pacs.2021.100271] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 11/24/2020] [Revised: 04/08/2021] [Accepted: 05/11/2021] [Indexed: 05/02/2023]
Abstract
Conventional reconstruction methods for photoacoustic images are not suitable for the scenario of sparse sensing and geometrical limitation. To overcome these challenges and enhance the quality of reconstruction, several learning-based methods have recently been introduced for photoacoustic tomography reconstruction. The goal of this study is to compare and systematically evaluate the recently proposed learning-based methods and modified networks for photoacoustic image reconstruction. Specifically, learning-based post-processing methods and model-based learned iterative reconstruction methods are investigated. In addition to comparing the differences inherently brought by the models, we also study the impact of different inputs on the reconstruction effect. Our results demonstrate that the reconstruction performance mainly stems from the effective amount of information carried by the input. The inherent difference of the models based on the learning-based post-processing method does not provide a significant difference in photoacoustic image reconstruction. Furthermore, the results indicate that the model-based learned iterative reconstruction method outperforms all other learning-based post-processing methods in terms of generalizability and robustness.
38
Vu T, Tang Y, Li M, Sankin G, Tang S, Chen S, Zhong P, Yao J. Photoacoustic computed tomography of mechanical HIFU-induced vascular injury. BIOMEDICAL OPTICS EXPRESS 2021; 12:5489-5498. [PMID: 34692196 PMCID: PMC8515986 DOI: 10.1364/boe.426660] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/06/2021] [Revised: 06/14/2021] [Accepted: 06/17/2021] [Indexed: 06/13/2023]
Abstract
Mechanical high-intensity focused ultrasound (HIFU) has been used for cancer treatment and drug delivery. Existing monitoring methods for mechanical HIFU therapies such as MRI and ultrasound imaging often suffer from high cost, poor spatial-temporal resolution, and/or low sensitivity to tissue's hemodynamic changes. Evaluating vascular injury during mechanical HIFU treatment, therefore, remains challenging. Photoacoustic computed tomography (PACT) is a promising tool to meet this need. Intrinsically sensitive to optical absorption, PACT provides high-resolution imaging of blood vessels using hemoglobin as the endogenous contrast. In this study, we have developed an integrated HIFU-PACT system for detecting vascular rupture in mechanical HIFU treatment. We have demonstrated singular value decomposition for enhancing hemorrhage detection. We have validated the HIFU-PACT performance on phantoms and in vivo animal tumor models. We expect that PACT-HIFU will find practical applications in oncology research using small animal models.
Affiliation(s)
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Yuqi Tang
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mucong Li
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Georgii Sankin
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA
- Shanshan Tang
- Department of Radiology, Mayo Clinic College of Medicine, Rochester, MN 55905, USA
- Shigao Chen
- Department of Radiology, Mayo Clinic College of Medicine, Rochester, MN 55905, USA
- Pei Zhong
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
39
Ai M, Cheng J, Karimi D, Salcudean SE, Rohling R, Abolmaesumi P, Tang S. Investigation of photoacoustic tomography reconstruction with a limited view from linear array. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210083RR. [PMID: 34585543 PMCID: PMC8477256 DOI: 10.1117/1.jbo.26.9.096009] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 03/12/2021] [Accepted: 09/08/2021] [Indexed: 06/13/2023]
Abstract
SIGNIFICANCE As linear array transducers are widely used in clinical ultrasound imaging, photoacoustic tomography (PAT) with linear arrays is similarly suitable for clinical applications. However, due to the limited-view problem, a linear array has limited performance and produces artifacts and blurring, which has hindered its broader application. There is a need to address the limited-view problem in PAT imaging with linear arrays. AIM We investigate potential approaches for improving PAT reconstruction from a linear array by optimizing the detection geometry and implementing iterative reconstruction. APPROACH PAT imaging with a single array, dual-probe configurations in parallel and L shapes, and a square-shape configuration is compared in simulations and phantom experiments. An iterative model-based algorithm based on the variance-reduced stochastic gradient descent (VR-SGD) method is implemented. The optimum configuration found in simulation is validated in phantom experiments. RESULTS PAT imaging with dual-probe detection and the VR-SGD algorithm is found to mitigate the limited-view problem compared to a single probe and to provide performance comparable to a full-view geometry in simulation. This configuration is validated in experiments, where more complete structures are obtained with reduced artifacts compared with a single array. CONCLUSIONS PAT with dual-probe detection and iterative reconstruction is a promising solution to the limited-view problem of linear arrays.
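The variance-reduced stochastic gradient descent named in the APPROACH can be sketched on a toy least-squares problem. This is an SVRG-style loop with invented step size and epoch settings, applied to a random linear system rather than the paper's model-based PAT reconstruction.

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.02, n_epochs=50, seed=0):
    """SVRG-style variance-reduced SGD for min_x 0.5/n * ||Ax - b||^2.

    Each epoch takes a full-gradient snapshot, then runs n stochastic
    steps whose noise is reduced by the snapshot correction term.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        mu = A.T @ (A @ x_snap - b) / n           # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # Stochastic gradient, corrected by the snapshot gradient of
            # the same sample so its variance shrinks near the optimum.
            g = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_snap - b[i]) + mu
            x = x - lr * g
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
x_true = rng.normal(size=5)
b = A @ x_true                                    # consistent toy system
x_hat = svrg_least_squares(A, b)
```

In an actual model-based reconstruction, `A` would be the (usually implicit) photoacoustic forward operator and `b` the measured channel data.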
Affiliation(s)
- Min Ai
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Jiayi Cheng
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Davood Karimi
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Septimiu E. Salcudean
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Robert Rohling
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- University of British Columbia, Department of Mechanical Engineering, Vancouver, Canada
- Purang Abolmaesumi
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Shuo Tang
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
40
Prakash J, Kalva SK, Pramanik M, Yalavarthy PK. Binary photoacoustic tomography for improved vasculature imaging. Journal of Biomedical Optics 2021; 26:086004. [PMID: 34405599] [PMCID: PMC8370884] [DOI: 10.1117/1.jbo.26.8.086004]
Abstract
SIGNIFICANCE The proposed binary tomography approach was able to recover vasculature structures accurately, which could enable its use in scenarios such as therapy monitoring and hemorrhage detection in different organs. AIM Photoacoustic tomography (PAT) involves reconstruction of vascular networks, with direct implications for cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most are two-step (image reconstruction followed by image segmentation) in nature. We propose a binary PAT approach wherein direct reconstruction of the vascular network from the acquired photoacoustic sinogram data is plausible. APPROACH The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel results in a binary outcome (i.e., either background or absorber). Further, the binary tomography approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes. RESULTS Numerical simulations, a physical phantom experiment, and in vivo rat brain vasculature data were used to compare the performance of the different algorithms. The results indicate that, on in silico data, the binary tomography approach improved vasculature recovery by 10% in terms of the Dice similarity coefficient relative to the other reconstruction methods. CONCLUSION The proposed algorithm demonstrates superior vasculature recovery with limited data, both visually and in quantitative image metrics.
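The idea of reconstructing a binary image directly from projection data can be illustrated with a much simpler stand-in than the paper's dual-optimization scheme: projected gradient descent on a box-constrained least-squares problem, followed by thresholding. The forward matrix, sizes, and iteration counts below are all invented for illustration.

```python
import numpy as np

def binary_recon(A, y, n_iter=2000, lr=None):
    """Toy binary reconstruction: minimize ||Ax - y||^2 subject to
    x in [0,1] via projected gradient descent, then threshold to {0,1}.
    (Illustrative only; the paper solves a dual-optimization problem.)
    """
    d = A.shape[1]
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step for this quadratic
    x = np.full(d, 0.5)
    for _ in range(n_iter):
        x -= lr * (A.T @ (A @ x - y))             # gradient step
        np.clip(x, 0.0, 1.0, out=x)               # project onto the box
    return (x > 0.5).astype(int)                  # binarize: background vs absorber

rng = np.random.default_rng(2)
x_true = (rng.random(30) > 0.7).astype(float)     # sparse binary "vasculature"
A = rng.normal(size=(60, 30))                     # toy overdetermined forward model
y = A @ x_true                                    # noiseless "sinogram"
x_hat = binary_recon(A, y)
```

With an overdetermined, noiseless system the box-constrained minimizer is the true binary image, so the threshold recovers it exactly; the interesting regime in the paper is limited data, where the binary constraint itself supplies the missing information.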
Affiliation(s)
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bangalore, Karnataka, India
- Address all correspondence to Jaya Prakash
- Sandeep Kumar Kalva
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Phaneendra K. Yalavarthy
- Indian Institute of Science, Department of Computational and Data Sciences, Bangalore, Karnataka, India
41
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146] [PMCID: PMC8273152] [DOI: 10.1002/lsm.23414]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
42
Mukaddim RA, Ahmed R, Varghese T. Subaperture Processing-Based Adaptive Beamforming for Photoacoustic Imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2021; 68:2336-2350. [PMID: 33606629] [PMCID: PMC8330397] [DOI: 10.1109/tuffc.2021.3060371]
Abstract
Delay-and-sum (DAS) beamformers, when applied to photoacoustic (PA) image reconstruction, produce strong sidelobes due to the absence of transmit focusing. Consequently, DAS PA images are often severely degraded by strong off-axis clutter. For preclinical in vivo cardiac PA imaging, the presence of these noise artifacts hampers the detectability and interpretation of PA signals from the myocardial wall, which are crucial for studying blood-dominated cardiac pathology and for complementing functional information derived from ultrasound imaging. In this article, we present PA subaperture processing (PSAP), an adaptive beamforming method, to mitigate these image-degrading effects. In PSAP, a pair of DAS-reconstructed images is formed by splitting the received channel data into two complementary, nonoverlapping subapertures. A weighting matrix is then derived by analyzing the correlation between the subaperture beamformed images and multiplied with the full-aperture DAS PA image to reduce sidelobes and incoherent clutter. We validated PSAP in numerical simulation studies using point-target, diffuse-inclusion, and microvasculature imaging, and in in vivo feasibility studies on five healthy murine models. Qualitative and quantitative analyses demonstrate improvements in PA image quality with PSAP compared to DAS and coherence-factor-weighted DAS (DAS-CF). PSAP demonstrated improved target detectability, with a higher generalized contrast-to-noise ratio (gCNR) in vasculature simulations, where PSAP produces 19.61% and 19.53% higher gCNRs than DAS and DAS-CF, respectively. Furthermore, PSAP provided higher image contrast, quantified using the contrast ratio (CR) (e.g., PSAP produces 89.26% and 11.90% higher CRs than DAS and DAS-CF in vasculature simulations), and improved clutter suppression.
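The subaperture-splitting step the abstract describes can be sketched in a few lines of numpy. The similarity weight below is one simple choice (a normalized product of the two subaperture images), not the paper's actual weighting matrix, and the code assumes delay compensation has already been applied to the channel data.

```python
import numpy as np

def psap_das(delayed, eps=1e-12):
    """Subaperture-weighted DAS on pre-delayed channel data.

    delayed: (n_channels, h, w) array of delay-compensated channel
    signals per pixel. The aperture is split into two complementary,
    nonoverlapping halves; a per-pixel similarity weight derived from
    the two subaperture images scales the full-aperture DAS image to
    suppress incoherent clutter.
    """
    n = delayed.shape[0]
    full = delayed.sum(axis=0)                    # full-aperture DAS image
    s1 = delayed[: n // 2].sum(axis=0)            # subaperture 1 image
    s2 = delayed[n // 2 :].sum(axis=0)            # subaperture 2 image
    w = 2.0 * s1 * s2 / (s1**2 + s2**2 + eps)     # ~1 if alike, <=0 if opposed
    return np.clip(w, 0.0, 1.0) * full

# A coherent pixel (all channels agree) keeps its amplitude, while an
# incoherent pixel (subapertures disagree in sign) is suppressed.
coh = np.ones((8, 1, 1))
incoh = np.concatenate([np.ones((4, 1, 1)), -np.ones((4, 1, 1))])
```

The clipping to [0, 1] mirrors the general adaptive-weighting principle: coherent targets pass unchanged and anti-correlated clutter is zeroed out.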
43
Na S, Wang LV. Photoacoustic computed tomography for functional human brain imaging [Invited]. Biomedical Optics Express 2021; 12:4056-4083. [PMID: 34457399] [PMCID: PMC8367226] [DOI: 10.1364/boe.423707]
Abstract
The successes of magnetic resonance imaging and modern optical imaging of human brain function have stimulated the development of complementary modalities that offer molecular specificity, fine spatiotemporal resolution, and sufficient penetration simultaneously. By virtue of its rich optical contrast, acoustic resolution, and imaging depth far beyond the optical transport mean free path (∼1 mm in biological tissues), photoacoustic computed tomography (PACT) offers a promising complementary modality. In this article, PACT for functional human brain imaging is reviewed in its hardware, reconstruction algorithms, in vivo demonstration, and potential roadmap.
Affiliation(s)
- Shuai Na
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
- Caltech Optical Imaging Laboratory, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
44
Vu T, DiSpirito A, Li D, Wang Z, Zhu X, Chen M, Jiang L, Zhang D, Luo J, Zhang YS, Zhou Q, Horstmeyer R, Yao J. Deep image prior for undersampling high-speed photoacoustic microscopy. Photoacoustics 2021; 22:100266. [PMID: 33898247] [PMCID: PMC8056431] [DOI: 10.1016/j.pacs.2021.100266]
Abstract
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersampling) for increased imaging speed over a large field of view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and large training datasets with ground truth. Here, we propose the use of deep image prior (DIP) to improve the image quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling its flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in PAM images with as few as 1.4% of the fully sampled pixels on high-speed PAM. Our approach outperforms interpolation, is competitive with pre-trained supervised DL methods, and is readily translated to other high-speed, undersampled imaging modalities.
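The deep-image-prior idea (fit an untrained network, with a fixed random input, only to the observed pixels, and read the reconstruction off the network output) can be shown with a toy numpy sketch. This uses a tiny two-layer dense network with hand-written backpropagation instead of the paper's CNN, and a 1-D "image" with an invented sampling mask.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for undersampled PAM data: a smooth signal of which only
# ~40% of the pixels are observed.
P = 64                                            # flattened image size
img = np.sin(np.linspace(0, 4 * np.pi, P))        # smooth "vessel" signal
mask = rng.random(P) < 0.4                        # sampling mask

# Tiny network G(z) = W2 @ tanh(W1 @ z) with a FIXED random input z;
# only the weights are optimized, and only on observed pixels.
dz, nh = 16, 32
z = rng.normal(size=dz)
W1 = 0.1 * rng.normal(size=(nh, dz))
W2 = 0.1 * rng.normal(size=(P, nh))

lr = 0.01
for _ in range(2000):
    h = np.tanh(W1 @ z)
    out = W2 @ h
    r = mask * (out - img)                        # loss gradient, observed pixels only
    gW2 = np.outer(r, h)                          # backprop through the output layer
    gh = W2.T @ r
    gW1 = np.outer(gh * (1 - h**2), z)            # backprop through tanh
    W2 -= lr * gW2
    W1 -= lr * gW1

recon = W2 @ np.tanh(W1 @ z)                      # full reconstruction, all pixels
```

In DIP proper, the convolutional architecture itself acts as the image prior for the unobserved pixels, and early stopping prevents overfitting to noise; this dense toy only demonstrates the training-without-ground-truth mechanic.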
Affiliation(s)
- Tri Vu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Daiwei Li
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Zixuan Wang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Xiaoyi Zhu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Maomao Chen
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Laiming Jiang
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Dong Zhang
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Jianwen Luo
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Qifa Zhou
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Junjie Yao
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
45
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342] [PMCID: PMC8243210] [DOI: 10.1177/15353702211000310]
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the growth and prevalence of graphical processing unit capabilities within recent years has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either completely fail or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
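The optimization loop the abstract alludes to (gradient descent minimizing a loss function to update model parameters) can be shown in its most minimal form. The "model" here is just a line fit by mean squared error; the data and learning rate are invented for illustration.

```python
import numpy as np

# Minimal gradient descent: fit y = w*x + c by minimizing the MSE loss.
x = np.linspace(0, 1, 20)
y = 3.0 * x + 1.0                                 # targets from a known line
w, c = 0.0, 0.0                                   # initial parameters
lr = 0.5                                          # learning rate
for _ in range(1000):
    pred = w * x + c
    err = pred - y
    # Gradients of mean((pred - y)^2) with respect to each parameter.
    w -= lr * 2 * np.mean(err * x)
    c -= lr * 2 * np.mean(err)
```

A deep network replaces the two scalars with millions of weights and computes the gradients by backpropagation, but the update rule is the same.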
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
46
Yao J, Wang LV. Perspective on fast-evolving photoacoustic tomography. Journal of Biomedical Optics 2021; 26:060602. [PMID: 34196136] [PMCID: PMC8244998] [DOI: 10.1117/1.jbo.26.6.060602]
Abstract
SIGNIFICANCE Acoustically detecting the rich optical absorption contrast in biological tissues, photoacoustic tomography (PAT) seamlessly bridges the functional and molecular sensitivity of optical excitation with the deep penetration and high scalability of ultrasound detection. As a result of continuous technological innovations and commercial development, PAT has been playing an increasingly important role in life sciences and patient care, including functional brain imaging, smart drug delivery, early cancer diagnosis, and interventional therapy guidance. AIM Built on our 2016 tutorial article that focused on the principles and implementations of PAT, this perspective aims to provide an update on the exciting technical advances in PAT. APPROACH This perspective focuses on the recent PAT innovations in volumetric deep-tissue imaging, high-speed wide-field microscopic imaging, high-sensitivity optical ultrasound detection, and machine-learning enhanced image reconstruction and data processing. Representative applications are introduced to demonstrate these enabling technical breakthroughs in biomedical research. CONCLUSIONS We conclude the perspective by discussing the future development of PAT technologies.
Affiliation(s)
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Lihong V. Wang
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
47
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021; 22:100241. [PMID: 33717977] [PMCID: PMC7932894] [DOI: 10.1016/j.pacs.2021.100241]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
48
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. Journal of Biomedical Optics 2021; 26:040901. [PMID: 33837678] [PMCID: PMC8033250] [DOI: 10.1117/1.jbo.26.4.040901]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
49
Wiacek A, Lediju Bell MA. Photoacoustic-guided surgery from head to toe [Invited]. Biomedical Optics Express 2021; 12:2079-2117. [PMID: 33996218] [PMCID: PMC8086464] [DOI: 10.1364/boe.417984]
Abstract
Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a promising method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.
Affiliation(s)
- Alycen Wiacek
- Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
50
Bao Y, Deng H, Wang X, Zuo H, Ma C. Development of a digital breast phantom for photoacoustic computed tomography. Biomedical Optics Express 2021; 12:1391-1406. [PMID: 33796361] [PMCID: PMC7984796] [DOI: 10.1364/boe.416406]
Abstract
Photoacoustic (PA) imaging provides morphological and functional information about angiogenesis and thus is potentially suitable for breast cancer diagnosis. However, the development of PA breast imaging has been hindered by the limited availability of patients and a lack of ground truth images. Here, we report a digital breast phantom with realistic acoustic and optical properties, with which a digital PA-ultrasound imaging pipeline is developed to create a diverse pool of virtual patients with three types of masses: ductal carcinoma in situ, invasive breast cancer, and fibroadenoma. The experimental results demonstrate that our model is realistic and flexible and can potentially be useful for accelerating the development of PA breast imaging technology.
Affiliation(s)
- Youwei Bao
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Handi Deng
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Xuanhao Wang
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Hongzhi Zuo
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Cheng Ma
- The Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Beijing Innovation Center for Future Chip, Beijing, 100084, China