1.
Li B, Lu M, Zhou T, Bu M, Gu W, Wang J, Zhu Q, Liu X, Ta D. Removing Artifacts in Transcranial Photoacoustic Imaging With Polarized Self-Attention Dense-UNet. Ultrasound in Medicine & Biology 2024:S0301-5629(24)00251-5. [PMID: 39013725] [DOI: 10.1016/j.ultrasmedbio.2024.06.006] [Received: 02/05/2024] [Revised: 05/28/2024] [Accepted: 06/16/2024] [Indexed: 07/18/2024]
Abstract
OBJECTIVE Photoacoustic imaging (PAI) is a promising transcranial imaging technique. However, the distortion of photoacoustic signals induced by the skull significantly influences its imaging quality. We aimed to use deep learning for removing artifacts in PAI. METHODS In this study, we propose a polarized self-attention dense U-Net, termed PSAD-UNet, to correct the distortion and accurately recover imaged objects beneath bone plates. To evaluate the performance of the proposed method, a series of experiments was performed using a custom-built PAI system. RESULTS The experimental results showed that the proposed PSAD-UNet method could effectively implement transcranial PAI through a one- or two-layer bone plate. Compared with the conventional delay-and-sum and classical U-Net methods, PSAD-UNet can diminish the influence of bone plates and provide high-quality PAI results in terms of structural similarity and peak signal-to-noise ratio. The 3-D experimental results further confirm the feasibility of PSAD-UNet in 3-D transcranial imaging. CONCLUSION PSAD-UNet paves the way for implementing transcranial PAI with high imaging accuracy, which reveals broad application prospects in preclinical and clinical fields.
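The abstract benchmarks PSAD-UNet against conventional delay-and-sum (DAS) beamforming. As background, a minimal 2-D DAS reconstruction can be sketched as follows; this is a generic textbook formulation, not the authors' implementation, and the function name, geometry conventions, and default speed of sound are illustrative assumptions:

```python
import numpy as np

def delay_and_sum(signals, fs, sensor_xy, grid_xy, c=1540.0):
    """Minimal 2-D delay-and-sum photoacoustic reconstruction.

    signals   : (n_sensors, n_samples) array of recorded PA time series
    fs        : sampling frequency in Hz
    sensor_xy : (n_sensors, 2) sensor positions in metres
    grid_xy   : (n_pixels, 2) image-grid pixel positions in metres
    c         : assumed speed of sound in m/s
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid_xy))
    for s in range(n_sensors):
        # time of flight from every pixel to this sensor, as sample indices
        dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        # accumulate the delayed sample for each pixel
        image[valid] += signals[s, idx[valid]]
    return image / n_sensors
```

Because DAS assumes a single, homogeneous speed of sound, a skull layer that delays and attenuates the wavefront violates this assumption, which is the distortion the paper's network is trained to correct.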
Affiliation(s)
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Mengyang Lu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Tianhua Zhou
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Mengxu Bu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Wenting Gu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Junyi Wang
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Qiuchen Zhu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China; Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
2.
Sweeney PW, Hacker L, Lefebvre TL, Brown EL, Gröhl J, Bohndiek SE. Unsupervised Segmentation of 3D Microvascular Photoacoustic Images Using Deep Generative Learning. Advanced Science (Weinheim, Baden-Württemberg, Germany) 2024:e2402195. [PMID: 38923324] [DOI: 10.1002/advs.202402195] [Received: 02/29/2024] [Revised: 05/27/2024] [Indexed: 06/28/2024]
Abstract
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging, owing to the time-consuming and error-prone nature of current methods. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced: the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
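The quoted F1 comparison (VAN-GAN 0.84 vs. U-Net 0.87) is a voxel-wise overlap score between a predicted binary vessel mask and a reference mask. A minimal sketch of how such a score is computed (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def f1_score(pred, truth):
    """Voxel-wise F1 (equivalently, the Dice coefficient) between
    binary segmentation masks of any shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```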
Affiliation(s)
- Paul W Sweeney
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Lina Hacker
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Thierry L Lefebvre
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Emma L Brown
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Janek Gröhl
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Sarah E Bohndiek
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
3.
Eleni Karakatsani M, Estrada H, Chen Z, Shoham S, Deán-Ben XL, Razansky D. Shedding light on ultrasound in action: Optical and optoacoustic monitoring of ultrasound brain interventions. Advanced Drug Delivery Reviews 2024; 205:115177. [PMID: 38184194] [PMCID: PMC11298795] [DOI: 10.1016/j.addr.2023.115177] [Received: 10/09/2023] [Revised: 12/27/2023] [Accepted: 12/31/2023] [Indexed: 01/08/2024]
Abstract
Monitoring brain responses to ultrasonic interventions is becoming an important pillar of a growing number of applications that employ acoustic waves to stimulate and treat the brain. Optical interrogation of living tissues provides a unique means of retrieving functional and molecular information related to brain activity and disease-specific biomarkers. Hybrid optoacoustic imaging methods have further enabled deep-tissue imaging with optical contrast at high spatial and temporal resolution. The marriage between light and sound thus brings together the highly complementary advantages of both modalities toward high-precision interrogation, stimulation, and therapy of the brain, with strong impact in the fields of ultrasound neuromodulation, gene and drug delivery, and noninvasive treatment of neurological and neurodegenerative disorders. In this review, we elaborate on current advances in optical and optoacoustic monitoring of ultrasound interventions. We describe the main principles and mechanisms underlying each method before diving into the corresponding biomedical applications. We identify areas of improvement as well as promising approaches with clinical translation potential.
Affiliation(s)
- Maria Eleni Karakatsani
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland; Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Héctor Estrada
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland; Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Zhenyue Chen
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland; Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Shy Shoham
- Department of Ophthalmology and Tech4Health and Neuroscience Institutes, NYU Langone Health, NY, USA
- Xosé Luís Deán-Ben
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland; Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Daniel Razansky
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland; Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
4.
Li J, Meng YC. Multikernel positional embedding convolutional neural network for photoacoustic reconstruction with sparse data. Applied Optics 2023; 62:8506-8516. [PMID: 38037963] [DOI: 10.1364/ao.504094] [Received: 08/24/2023] [Accepted: 10/14/2023] [Indexed: 12/02/2023]
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive imaging modality that merges the high contrast of optical imaging with the high resolution of ultrasonic imaging. Low-quality photoacoustic reconstruction from sparse data, caused by sparse spatial sampling and limited-view detection, is a major obstacle to the adoption of PAI in medical applications. Deep learning has been regarded as the most promising solution to this problem over the past decade. In this paper, we propose what we believe to be a novel architecture, named DPM-UNet, which consists of a U-Net backbone with an additional position embedding block and two multi-kernel-size convolution blocks: a dilated dense block and a dilated multi-kernel-size convolution block. Our method was experimentally validated with both simulated and in vivo data, achieving an SSIM of 0.9824 and a PSNR of 33.2744 dB. Furthermore, the reconstructed images of our proposed method were compared with those obtained by other advanced methods. The results show that the proposed DPM-UNet has a clear advantage over other PAI methods with respect to image quality and memory consumption.
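The SSIM (0.9824) and PSNR (33.2744 dB) quoted above are standard full-reference image-quality metrics. A minimal NumPy sketch of both follows; note that published SSIM values are usually computed with a sliding Gaussian window and then averaged, so the single-window version here is a simplification:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM over the whole image.
    Library implementations slide an 11x11 Gaussian window instead."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )
```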
5.
Zhang F, Zhang J, Shen Y, Gao Z, Yang C, Liang M, Gao F, Liu L, Zhao H, Gao F. Photoacoustic digital brain and deep-learning-assisted image reconstruction. Photoacoustics 2023; 31:100517. [PMID: 37292518] [PMCID: PMC10244697] [DOI: 10.1016/j.pacs.2023.100517] [Received: 04/13/2023] [Revised: 05/29/2023] [Accepted: 05/30/2023] [Indexed: 06/10/2023]
Abstract
Photoacoustic tomography (PAT) is a newly developed medical imaging modality that combines the advantages of pure optical imaging and ultrasound imaging, offering both high optical contrast and deep penetration depth. Recently, PAT has been studied for human brain imaging. However, as ultrasound waves pass through the human skull, strong acoustic attenuation and aberration occur, distorting the photoacoustic signals. In this work, we use 180 T1-weighted magnetic resonance imaging (MRI) human brain volumes, along with the corresponding magnetic resonance angiography (MRA) brain volumes, and segment them to generate 2D human brain numerical phantoms for PAT. The numerical phantoms contain six kinds of tissue: scalp, skull, white matter, gray matter, blood vessel, and cerebrospinal fluid. For every numerical phantom, Monte Carlo-based optical simulation is deployed to obtain the photoacoustic initial pressure from the optical properties of the human brain. Two different k-Wave models are then used for the skull-involved acoustic simulation: a fluid-media model and a viscoelastic-media model. The former considers only longitudinal wave propagation, while the latter also takes shear waves into account. The PA sinograms with skull-induced aberration are then taken as the input of a U-Net, and the skull-stripped ones are used as the supervision to train the network. Experimental results show that the skull's acoustic aberration can be effectively alleviated after U-Net correction, achieving a conspicuous improvement in the quality of PAT human brain images reconstructed from the corrected PA signals, which clearly show the cerebral artery distribution inside the human skull.
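The training setup described here pairs skull-aberrated sinograms (network input) with skull-stripped sinograms (supervision target). As a toy illustration of that pairing, the "skull" can be caricatured as a per-channel delay plus attenuation; this stands in for the paper's actual k-Wave acoustic simulation, and every name and parameter below is an illustrative assumption:

```python
import numpy as np

def apply_toy_skull(sinogram, delays, attenuation=0.6):
    """Toy skull operator: delay each detector channel by a few samples
    and attenuate it. A crude stand-in for k-Wave skull simulation."""
    aberrated = np.zeros_like(sinogram)
    n = sinogram.shape[1]
    for ch, d in enumerate(delays):
        aberrated[ch, d:] = attenuation * sinogram[ch, : n - d]
    return aberrated

def mse_loss(pred, target):
    """Pixel-wise supervision loss used to train a correction network:
    the network maps the aberrated sinogram toward the clean one."""
    return np.mean((pred - target) ** 2)
```

In the paper's pipeline, `apply_toy_skull` is replaced by full fluid- or viscoelastic-media k-Wave simulation, and the U-Net is trained to invert it.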
Affiliation(s)
- Fan Zhang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Jiadong Zhang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Yuting Shen
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Zijian Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Changchun Yang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Mingtao Liang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Feng Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Li Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hulin Zhao
- Department of Neural Surgery, Chinese PLA General Hospital, Beijing, China
- Fei Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China