1. Guo K, Zheng Z, Zhong W, Li Z, Wang G, Li J, Cao Y, Wang Y, Lin J, Liu Q, Song X. Score-based generative model-assisted information compensation for high-quality limited-view reconstruction in photoacoustic tomography. Photoacoustics 2024;38:100623. PMID: 38832333; PMCID: PMC11144813; DOI: 10.1016/j.pacs.2024.100623.
Abstract
Photoacoustic tomography (PAT) regularly operates in limited-view cases owing to data acquisition limitations. Results obtained with traditional methods in limited-view PAT exhibit distortions and numerous artifacts. Here, a novel limited-view PAT reconstruction strategy that combines model-based iteration with a score-based generative model was proposed. By incrementally adding noise to the training samples, prior knowledge can be learned from the complex probability distribution. The acquired prior is then used as a constraint in model-based iteration. Information from the missing views is gradually compensated through cyclic iteration to achieve high-quality reconstruction. The performance of the proposed method was evaluated on circular-phantom and in vivo experimental data. Experimental results demonstrate the effectiveness of the proposed method in limited-view cases. Notably, the proposed method performs well in the extreme limited-view case of 70° compared with the traditional method: it achieves a remarkable improvement of 203% in PSNR and 48% in SSIM for the circular-phantom data, and of 81% in PSNR and 65% in SSIM for the in vivo data. The proposed method is capable of reconstructing PAT images in extremely limited-view cases, which will further expand the application of PAT in clinical scenarios.
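The alternating scheme this abstract describes, a data-consistency update interleaved with a learned prior, can be sketched in miniature. The sketch below is a simplified stand-in, not the paper's method: a random matrix plays the limited-view forward model and plain soft-thresholding plays the role of the trained score-based prior; only the alternating structure is faithful to the description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 64-pixel "image", 32 limited-view measurements.
A = rng.normal(size=(32, 64)) / np.sqrt(32)   # stand-in forward model
x_true = np.zeros(64)
x_true[20:30] = 1.0                           # simple blocky target
y = A @ x_true                                # noiseless measurements

def prior_step(x, lam=0.05):
    """Stand-in for the learned score-based prior: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2        # stable gradient step size
x = np.zeros(64)
for _ in range(200):
    x = x - step * A.T @ (A @ x - y)          # data-consistency update
    x = prior_step(x)                         # prior/regularization update

residual = np.linalg.norm(A @ x - y)          # falls far below the initial ||y||
```

In the paper itself the prior step would be a reverse-diffusion update from the trained score network, and A would be the acoustic forward model of the limited-view geometry.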
Affiliation(s)
- Guijun Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiahong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yubin Cao
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yiguang Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiabin Lin
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang 330031, China
2. Sun W, Wang C, Tian C, Li X, Hu X, Liu S. Nanotechnology for brain tumor imaging and therapy based on π-conjugated materials: state-of-the-art advances and prospects. Front Chem 2023;11:1301496. PMID: 38025074; PMCID: PMC10663370; DOI: 10.3389/fchem.2023.1301496.
Abstract
In contemporary biomedical research, the development of nanotechnology has brought forth numerous possibilities for brain tumor imaging and therapy. Among these, π-conjugated materials have garnered significant attention as a special class of nanomaterials in brain tumor-related studies. With their excellent optical and electronic properties, π-conjugated materials can be tailored in structure and nature to facilitate applications in multimodal imaging, nano-drug delivery, photothermal therapy, and other related fields. This review focuses on presenting the cutting-edge advances and application prospects of π-conjugated materials in brain tumor imaging and therapeutic nanotechnology.
Affiliation(s)
- Wenshe Sun
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Qingdao Cancer Institute, Qingdao University, Qingdao, China
- Congxiao Wang
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Chuan Tian
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xueda Li
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xiaokun Hu
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Shifeng Liu
- Department of Interventional Medical Center, Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
3. Song X, Wang G, Zhong W, Guo K, Li Z, Liu X, Dong J, Liu Q. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. Photoacoustics 2023;33:100558. PMID: 38021282; PMCID: PMC10658608; DOI: 10.1016/j.pacs.2023.100558.
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional reconstruction under sparse-view sampling can result in low-quality images. Here, a novel model-based sparse reconstruction method for photoacoustic tomography via a diffusion model was proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior is used as a constraint on the data-consistency term of a least-squares optimization problem in model-based iterative reconstruction, aiming to achieve the optimal solution. Simulated blood-vessel data and in vivo animal experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction than conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method achieves an improvement of ∼260% in structural similarity and ∼30% in peak signal-to-noise ratio for in vivo data compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, which will further expand its range of application.
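The delay-and-sum baseline mentioned above can be illustrated with a toy example: each image point is scored by summing every sensor's trace at that point's time-of-flight delay, so the sums add coherently only at true source locations. All geometry below (ring radius, sensor count, sampling rate) is hypothetical.

```python
import numpy as np

c = 1500.0                    # speed of sound, m/s
fs = 40e6                     # sampling rate, Hz
n_sensors, n_samples = 32, 2048
angles = np.linspace(0, 2 * np.pi, n_sensors, endpoint=False)
sensors = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 50 mm ring

# Synthetic RF data: one impulse per sensor from a single point source.
src = np.array([0.01, 0.0])
rf = np.zeros((n_sensors, n_samples))
for i, s in enumerate(sensors):
    rf[i, int(np.linalg.norm(s - src) / c * fs)] = 1.0

def das(rf, sensors, p):
    """Delay-and-sum image value at point p."""
    val = 0.0
    for i, s in enumerate(sensors):
        idx = int(np.linalg.norm(s - p) / c * fs)   # time-of-flight sample index
        if idx < rf.shape[1]:
            val += rf[i, idx]
    return val

at_source = das(rf, sensors, src)                       # all 32 delays line up
elsewhere = das(rf, sensors, np.array([-0.02, 0.015]))  # delays do not line up
```

Sparse-view artifacts arise exactly because, with few sensors, the incoherent off-source sums no longer average away cleanly.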
Affiliation(s)
- Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Kangjun Guo
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Zilong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
4. Unal S, Musicki B, Burnett AL. Cavernous nerve mapping methods for radical prostatectomy. Sex Med Rev 2023;11:421-430. PMID: 37500541; DOI: 10.1093/sxmrev/qead030.
Abstract
INTRODUCTION: Preserving the cavernous nerves, the main autonomic nerve supply of the penis, is a major challenge of radical prostatectomy. Cavernous nerve injury during radical prostatectomy predominantly accounts for post-prostatectomy erectile dysfunction. The cavernous nerve is a bilateral structure that branches in a weblike distribution over the prostate surface and varies anatomically between individuals, such that standard nerve-sparing methods do not sufficiently sustain penile erection ability. As a consequence, researchers have focused on developing personalized cavernous nerve mapping methods, applied during the surgical procedure, that aim to improve postoperative sexual function outcomes. OBJECTIVES: We provide an updated overview of preclinical and clinical data on cavernous nerve mapping methods, emphasizing their strengths, limitations, and future directions. METHODS: A literature review was performed via Scopus, PubMed, and Google Scholar for studies that describe cavernous nerve mapping/localization. RESULTS: Several cavernous nerve mapping methods have been investigated based on various properties of the nerve structures, including stimulation techniques, spectroscopy/imaging techniques, and combinations of these methods. More recent methods have portrayed the course of the main cavernous nerve as well as its branches based on real-time mapping, high-resolution imaging, and functional imaging. However, each of these methods has distinctive limitations, including low spatial accuracy, lack of standardization for stimulation and response measurement, superficial imaging depth, toxicity risk, and lack of suitability for intraoperative use. CONCLUSION: While various cavernous nerve mapping methods have improved identification and preservation of the cavernous nerve during radical prostatectomy, no method has been implemented in clinical practice owing to these limitations. To overcome them, new imaging techniques and mapping methods are under development. Further research in this area is needed to improve sexual function outcomes and quality of life after radical prostatectomy.
Affiliation(s)
- Selman Unal
- The James Buchanan Brady Urological Institute and Department of Urology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Department of Urology, Ankara Yildirim Beyazit University School of Medicine, Ankara 06800, Turkey
- Biljana Musicki
- The James Buchanan Brady Urological Institute and Department of Urology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Arthur L Burnett
- The James Buchanan Brady Urological Institute and Department of Urology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
5. Vousten V, Moradi H, Wu Z, Boctor EM, Salcudean SE. Laser diode photoacoustic point source detection: machine learning-based denoising and reconstruction. Opt Express 2023;31:13895-13910. PMID: 37157265; DOI: 10.1364/oe.483892.
Abstract
A recent development in photoacoustic (PA) imaging has been the use of compact, portable, and low-cost laser diodes (LDs), but LD-based PA imaging suffers from the low signal intensity recorded by conventional transducers. A common method to improve signal strength is temporal averaging, which reduces the frame rate and increases laser exposure to patients. To tackle this problem, we propose a deep learning method that denoises point-source PA radio-frequency (RF) data before beamforming using very few frames, even one. We also present a deep learning method to automatically reconstruct point sources from noisy pre-beamformed data. Finally, we employ a strategy of combined denoising and reconstruction, which can supplement the reconstruction algorithm for inputs with very low signal-to-noise ratio.
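The temporal-averaging baseline this work aims to replace obeys the usual square-root law: averaging N frames of independent noise raises SNR by 10·log10(N) dB, at the cost of frame rate and laser exposure. A quick numerical check with a synthetic trace (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 4 * np.pi, 500))   # clean RF-like trace
sigma = 1.0                                       # per-frame noise std

def noisy_frame():
    return signal + rng.normal(0, sigma, signal.size)

def snr_db(x):
    """SNR of a measured trace against the known clean signal."""
    noise = x - signal
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

one_frame = noisy_frame()
avg_64 = np.mean([noisy_frame() for _ in range(64)], axis=0)

gain_db = snr_db(avg_64) - snr_db(one_frame)      # about 10*log10(64) ≈ 18 dB
```

A denoiser that recovers a similar gain from a single frame removes both the frame-rate and the laser-exposure penalty, which is the motivation here.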
6. Song H, Kang J, Boctor EM. Synthetic radial aperture focusing to regulate manual volumetric scanning for economic transrectal ultrasound imaging. Ultrasonics 2023;129:106908. PMID: 36527822; PMCID: PMC10043828; DOI: 10.1016/j.ultras.2022.106908.
Abstract
In this paper, we present volumetric transrectal ultrasound (TRUS) imaging in the presence of radial scanning angle disorientation (SAD) in a resource-limited diagnostic setting. Herein, we test our hypothesis that a synthetic radial aperture focusing (TRUS-rSAF) technique, in which a radial plane in the target volume is reconstructed by coherent compounding of multiple transmit/receive events, will reject randomized SAD in a free-hand scanning setup based on external angular tracking. Based on an analytical model of the TRUS-rSAF technique, we first tested specific scenarios using a clinically available TRUS transducer under SADs drawn from a range of normal distributions (σ = 0.1°, 0.2°, 0.5°, 1°, 2°, and 5°). We found that the TRUS-rSAF technique is more robust when the SAD is contained within the radial synthetic aperture window, i.e., ±0.71° from a target scanning angle. However, no enhancement in spatial resolution was found, because the limited transmit beam field of the clinical TRUS transducer restricts the synthetic aperture window. We further evaluated the TRUS-rSAF technique with a modified TRUS transducer providing an extended synthetic aperture window, to test whether higher spatial resolution and robustness to SAD can be obtained in the same evaluation setup. Widening the synthetic aperture window (±3.54°, ±5.91°, ±8.27°, ±10.63°, ±12.99°, ±15.35°) resulted in proportional enhancements of spatial resolution, but it also progressively built up sidelobe artifacts due to randomized synthesis with limited phase cancellation. The results suggest the need for careful calibration of the TRUS-rSAF technique to enable TRUS imaging with free-hand radial scanning and external angle tracking in resource-limited settings.
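The core failure mode studied here, angle disorientation degrading coherent compounding, can be mimicked in one dimension: summing M echoes whose arrival times carry random jitter preserves amplitude only while the jitter stays well inside a carrier period. The frequency, jitter levels, and event count below are hypothetical, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
f = 5e6                                  # carrier frequency, Hz (period 200 ns)
t = np.arange(0, 2e-6, 1 / 100e6)        # 2 µs trace sampled at 100 MHz
M = 64                                   # number of compounded transmit/receive events

def compounded_amplitude(jitter_std_s):
    """Coherently sum M echoes whose arrival times are randomly jittered."""
    acc = np.zeros_like(t)
    for _ in range(M):
        acc += np.sin(2 * np.pi * f * (t - rng.normal(0, jitter_std_s)))
    return np.max(np.abs(acc)) / M       # 1.0 means perfectly coherent

aligned = compounded_amplitude(1e-9)     # 1 ns jitter: tiny fraction of a period
scrambled = compounded_amplitude(100e-9) # 100 ns jitter: phases decorrelate
```

This is the same reason the paper bounds the acceptable SAD by the synthetic aperture window: outside it, the compounded events no longer add in phase.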
Affiliation(s)
- Hyunwoo Song
- Department of Computer Science, Whiting School of Engineering, the Johns Hopkins University, Baltimore, MD 21218, USA
- Jeeun Kang
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, the Johns Hopkins University, Baltimore, MD 21218, USA
- Emad M Boctor
- Department of Computer Science, Whiting School of Engineering, the Johns Hopkins University, Baltimore, MD 21218, USA; Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, the Johns Hopkins University, Baltimore, MD 21218, USA
7. Hsu KT, Guan S, Chitnis PV. Fast iterative reconstruction for photoacoustic tomography using learned physical model: Theoretical validation. Photoacoustics 2023;29:100452. PMID: 36700132; PMCID: PMC9867977; DOI: 10.1016/j.pacs.2023.100452.
Abstract
Iterative reconstruction has demonstrated superior performance in medical imaging under compressed, sparse, and limited-view sensing scenarios. However, iterative reconstruction algorithms are slow to converge and rely heavily on hand-crafted parameters to achieve good performance. Many iterations are usually required to reconstruct a high-quality image, which is computationally expensive due to repeated evaluations of the physical model. While learned iterative reconstruction approaches such as model-based learning (MBLr) can reduce the number of iterations through convolutional neural networks, they still require repeated evaluations of the physical model at each iteration. Therefore, the goal of this study is to develop a Fast Iterative Reconstruction (FIRe) algorithm that incorporates a learned physical model into the learned iterative reconstruction scheme to further reduce reconstruction time while maintaining robust reconstruction performance. We also propose an efficient training scheme for FIRe that relieves the enormous memory footprint required by learned iterative reconstruction methods through the concept of recursive training. Our method demonstrates reconstruction performance comparable to learned iterative reconstruction methods with a 9x reduction in computation time, and a 620x reduction in computation time compared with variational reconstruction.
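The cost FIRe attacks, repeated evaluations of the physical model, shows up directly in any classical scheme: each gradient (Landweber) step calls the forward operator once, so runtime scales with the iteration count; this is the budget a cheap learned surrogate would amortize. A minimal counting sketch with a toy matrix forward model (sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 40))
A = A / np.linalg.norm(A, 2)             # normalize so a unit step size is stable
x_true = rng.normal(size=40)
y = A @ x_true

calls = 0
def forward(x):
    """The 'physical model'; each call is the expensive part of iterative recon."""
    global calls
    calls += 1
    return A @ x

x = np.zeros(40)
n_iter = 300
for _ in range(n_iter):
    x = x - A.T @ (forward(x) - y)       # Landweber / gradient step

residual = np.linalg.norm(A @ x - y)     # cost: exactly n_iter operator calls
```

Replacing `forward` with a fast learned approximation leaves the loop structure intact while cutting the per-iteration cost, which is the idea the abstract describes.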
8. Choi W, Park B, Choi S, Oh D, Kim J, Kim C. Recent Advances in Contrast-Enhanced Photoacoustic Imaging: Overcoming the Physical and Practical Challenges. Chem Rev 2023. PMID: 36642892; DOI: 10.1021/acs.chemrev.2c00627.
Abstract
For decades now, photoacoustic imaging (PAI) has been investigated to realize its potential as a niche biomedical imaging modality. Despite its highly desirable optical contrast and ultrasonic spatiotemporal resolution, PAI is challenged by physical limitations such as a low signal-to-noise ratio (SNR), diminished image contrast due to strong optical attenuation, and a lower bound on spatial resolution in deep tissue. In addition, contrast-enhanced PAI has faced practical limitations such as insufficient cell-specific targeting due to low delivery efficiency and difficulties in developing clinically translatable agents. Identifying these limitations is essential to the continuing expansion of the field, and substantial advances in developing contrast-enhancing agents, complemented by high-performance image acquisition systems, have synergistically addressed the challenges of conventional PAI. This review covers the past four years of research on overcoming the physical and practical challenges of PAI in terms of SNR/contrast, spatial resolution, targeted delivery, and clinical application. Promising strategies for dealing with each challenge are reviewed in detail, and future research directions for next-generation contrast-enhanced PAI are discussed.
Affiliation(s)
- Wonseok Choi
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
- Byullee Park
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
- Seongwook Choi
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
- Donghyeon Oh
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
- Jongbeom Kim
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
- Chulhong Kim
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea