1. Kim J, Choi S, Kim C, Kim J, Park B. Review on Photoacoustic Monitoring after Drug Delivery: From Label-Free Biomarkers to Pharmacokinetics Agents. Pharmaceutics 2024;16:1240. PMID: 39458572; PMCID: PMC11510789; DOI: 10.3390/pharmaceutics16101240.
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive and label-free method for capturing the vasculature, hemodynamics, and physiological responses following drug delivery. PAI combines the advantages of optical and acoustic imaging to provide high-resolution images with multiparametric information. In recent decades, PAI has been used to assess physiological reactivity after the administration of various drugs. This review examines photoacoustic imaging as a label-free method of monitoring drug delivery responses by observing changes in the vascular system and oxygen saturation levels across various biological tissues. In addition, we discuss photoacoustic studies that monitor the biodistribution and pharmacokinetics of exogenous contrast agents, offering contrast-enhanced imaging of diseased regions. Finally, we highlight the crucial role of photoacoustic imaging in understanding drug delivery mechanisms and treatment processes.

Affiliations
- Jiwoong Kim, Seongwook Choi, Chulhong Kim: Departments of Electrical Engineering, Convergence IT Engineering, Medical Science and Engineering, Mechanical Engineering, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Cheongam-ro 77, Nam-gu, Pohang 37673, Republic of Korea
- Jeesu Kim: Departments of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Byullee Park: Department of Biophysics, Institute of Quantum Biophysics, Sungkyunkwan University, Suwon 16419, Republic of Korea

2. Wang Z, Yang F, Zhang W, Xiong K, Yang S. Towards in vivo photoacoustic human imaging: Shining a new light on clinical diagnostics. Fundamental Research 2024;4:1314-1330. PMID: 39431136; PMCID: PMC11489505; DOI: 10.1016/j.fmre.2023.01.008.
Abstract
Multiscale visualization of human anatomical structures is revolutionizing clinical diagnosis and treatment. As one of the most promising clinical diagnostic techniques, photoacoustic imaging (PAI), or optoacoustic imaging, bridges the spatial-resolution gap between purely optical and ultrasonic imaging techniques by combining optical illumination with acoustic detection. PAI can non-invasively capture multiple optical contrasts from endogenous agents such as oxygenated/deoxygenated hemoglobin, lipid, and melanin, or from a variety of exogenous specific biomarkers, to reveal anatomical, functional, and molecular information of biological tissues in vivo, showing significant potential in clinical diagnostics. In 2001, the first clinical prototype of a photoacoustic system was used to screen breast cancer in vivo, marking the beginning of photoacoustic clinical diagnostics. Over the past two decades, PAI has achieved major advances and applications in human imaging. Progress toward preclinical/clinical applications includes breast, skin, lymphatic, bowel, thyroid, ovarian, prostate, and brain imaging, and PAI is clearly opening new avenues to realize early diagnosis and precise treatment of human diseases. This review summarizes the breakthrough research and key applications of in vivo photoacoustic human imaging, demonstrating the technical strengths and emerging applications of photoacoustic human imaging in clinical diagnostics and providing clinical translational orientation for the photoacoustic community and clinicians. Potential improvements to photoacoustic human imaging are highlighted at the end.

Affiliations
- Zhiyang Wang, Fei Yang, Wuyu Zhang, Kedi Xiong, Sihua Yang: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, School of Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, School of Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510631, China

3. Loc I, Unlu MB. Accelerating photoacoustic microscopy by reconstructing undersampled images using diffusion models. Sci Rep 2024;14:16996. PMID: 39043802; PMCID: PMC11266665; DOI: 10.1038/s41598-024-67957-z.
Abstract
Photoacoustic microscopy (PAM) integrates optical and acoustic imaging, offering enhanced penetration depth for detecting optically absorbing components in tissues. Nonetheless, scanning large areas at high spatial resolution remains challenging. Because imaging speed is limited by the laser pulse repetition rate, computational methods have a potentially important role in accelerating PAM imaging. We propose a novel and highly adaptable algorithm, DiffPam, that uses diffusion models to speed up the photoacoustic imaging process. We leveraged a diffusion model trained exclusively on natural images and compared its performance with an in-domain trained U-Net model on a dataset of PAM images of mouse brain microvasculature. Our findings indicate that DiffPam performs comparably to a dedicated U-Net model without needing a large dataset. We demonstrate that scanning can be accelerated fivefold with limited information loss, achieving a 24.70% increase in peak signal-to-noise ratio and a 27.54% increase in structural similarity index compared with the baseline bilinear interpolation method. The study also introduces shortened diffusion processes for reducing computing time without compromising accuracy. DiffPam stands out from existing methods because it requires neither supervised training nor the detailed parameter optimization typically needed for other unsupervised methods. This study underscores the significance of DiffPam as a practical algorithm for reconstructing undersampled PAM images, particularly for researchers with limited artificial intelligence expertise and computational resources.
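
The PSNR/SSIM comparison against a bilinear-interpolation baseline reported above can be reproduced in spirit with standard scikit-image metrics. The sketch below is illustrative only: it uses a smooth synthetic stand-in image, and bicubic interpolation is a placeholder for the diffusion-model output, since the paper's data and model are not reproduced here.

```python
# Illustrative metric computation for undersampled-image reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.transform import resize

# Smooth synthetic stand-in for a fully sampled PAM image, values in [0, 1].
x = np.linspace(0, 8 * np.pi, 256)
ground_truth = 0.5 + 0.5 * np.outer(np.sin(x), np.cos(x))

# Simulate 5x undersampling along the slow-scan axis, then two reconstructions.
undersampled = ground_truth[::5, :]
baseline = resize(undersampled, ground_truth.shape, order=1)        # bilinear interpolation
reconstruction = resize(undersampled, ground_truth.shape, order=3)  # placeholder for a learned model

def report(name, img):
    psnr = peak_signal_noise_ratio(ground_truth, img, data_range=1.0)
    ssim = structural_similarity(ground_truth, img, data_range=1.0)
    print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")

report("bilinear baseline", baseline)
report("reconstruction", reconstruction)
```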

Affiliations
- Irem Loc: Bogazici University Physics Department, Istanbul, Turkey
- M Burcin Unlu: Faculty of Engineering, Ozyegin University, Istanbul, Turkey; Faculty of Aviation and Aeronautical Sciences, Ozyegin University, Istanbul, Turkey

4. Yang S, Hu S. Perspectives on endoscopic functional photoacoustic microscopy. Applied Physics Letters 2024;125:030502. PMID: 39022117; PMCID: PMC11251735; DOI: 10.1063/5.0201691.
Abstract
Endoscopy, enabling high-resolution imaging of deep tissues and internal organs, plays an important role in basic research and clinical practice. Recent advances in photoacoustic microscopy (PAM), demonstrating excellent capabilities in high-resolution functional imaging, have sparked significant interest in its integration into the field of endoscopy. However, there are challenges in achieving functional PAM in the endoscopic setting. This Perspective article discusses current progress in the development of endoscopic PAM and the challenges related to functional measurements. Then, it points out potential directions to advance endoscopic PAM for functional imaging by leveraging fiber optics, microfabrication, optical engineering, and computational approaches. Finally, it highlights emerging opportunities for functional endoscopic PAM in basic and translational biomedicine.

Affiliations
- Shuo Yang, Song Hu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130, USA

5. Paul A, Mallidi S. U-Net enhanced real-time LED-based photoacoustic imaging. Journal of Biophotonics 2024;17:e202300465. PMID: 38622811; PMCID: PMC11164633; DOI: 10.1002/jbio.202300465.
Abstract
Photoacoustic (PA) imaging is a hybrid imaging modality with good optical contrast and spatial resolution. Portable, cost-effective, small-footprint light-emitting diodes (LEDs) are rapidly becoming important PA optical sources. However, the key challenge faced by LED-based systems is the low light fluence, which is generally compensated for by heavy frame averaging at the cost of a reduced acquisition frame rate. In this study, we present a simple deep learning U-Net framework that enhances the signal-to-noise ratio (SNR) and contrast of PA images obtained by averaging a small number of frames. The SNR increased approximately four-fold for both in-class in vitro phantoms (4.39 ± 2.55) and out-of-class in vivo models (4.27 ± 0.87). We also demonstrate the noise invariance of the network and discuss its downsides (blurry output and failure to reduce salt-and-pepper noise). Overall, the developed U-Net framework can provide a real-time image enhancement platform for clinically translatable, low-cost, and low-energy light source-based PA imaging systems.
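
As a rough illustration of the frame-averaging trade-off described above, the following PyTorch sketch shows a generic U-Net-style denoiser trained to map a low-frame-average PA image to a high-frame-average target. The architecture, tensor shapes, and training details are assumptions for illustration, not the authors' network.

```python
# A minimal U-Net-style denoiser sketch (not the authors' exact network).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# One illustrative training step: low-average input vs. high-average target.
model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
low_avg = torch.randn(4, 1, 128, 128)    # placeholder for images averaged over few frames
high_avg = torch.randn(4, 1, 128, 128)   # placeholder for images averaged over many frames
loss = nn.functional.mse_loss(model(low_avg), high_avg)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```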

Affiliations
- Avijit Paul: Department of Biomedical Engineering, Tufts University, Medford, MA, USA

6. Cho SW, Nguyen VT, DiSpirito A, Yang J, Kim CS, Yao J. Sounding out the dynamics: a concise review of high-speed photoacoustic microscopy. Journal of Biomedical Optics 2024;29:S11521. PMID: 38323297; PMCID: PMC10846286; DOI: 10.1117/1.jbo.29.s1.s11521.
Abstract
Significance: Photoacoustic microscopy (PAM) offers advantages in high-resolution and high-contrast imaging of biomedical chromophores. Imaging speed is critical for leveraging these benefits in both preclinical and clinical settings. Ongoing technological innovations have substantially boosted PAM's imaging speed, enabling real-time monitoring of dynamic biological processes.
Aim: This concise review synthesizes historical context and current advancements in high-speed PAM, with an emphasis on developments enabled by ultrafast lasers, scanning mechanisms, and advanced image processing methods.
Approach: We examine cutting-edge innovations across multiple facets of PAM, including light sources, scanning and detection systems, and computational techniques, and explore their representative applications in biomedical research.
Results: This work delineates the challenges that persist in achieving optimal high-speed PAM performance and forecasts its prospective impact on biomedical imaging.
Conclusions: Recognizing the current limitations, overcoming the remaining drawbacks, and adopting the optimal combination of each technology will lead to the realization of ultimate high-speed PAM for both fundamental research and clinical translation.

Affiliations
- Soon-Woo Cho: Department of Biomedical Engineering, Duke University, Durham, North Carolina, United States; Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Pusan National University, Busan, Republic of Korea
- Van Tu Nguyen, Anthony DiSpirito, Joseph Yang, Junjie Yao: Department of Biomedical Engineering, Duke University, Durham, North Carolina, United States
- Chang-Seok Kim: Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Pusan National University, Busan, Republic of Korea

7. Le TD, Min JJ, Lee C. Enhanced resolution and sensitivity acoustic-resolution photoacoustic microscopy with semi/unsupervised GANs. Sci Rep 2023;13:13423. PMID: 37591911; PMCID: PMC10435476; DOI: 10.1038/s41598-023-40583-x.
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) enables visualization of biological tissues at depths of several millimeters with superior optical absorption contrast. However, the lateral resolution and sensitivity of AR-PAM are generally lower than those of optical-resolution PAM (OR-PAM) owing to its intrinsic acoustic focusing mechanism. Here, we demonstrate a computational strategy with two generative adversarial networks (GANs) that performs semi-supervised and unsupervised reconstruction to enhance the resolution and sensitivity of AR-PAM while maintaining its imaging capability at greater depths. B-scan PAM images were prepared as paired (for the semi-supervised conditional GAN) and unpaired (for the unsupervised CycleGAN) training groups to generate label-free reconstructed AR-PAM B-scan images. The semi/unsupervised GANs successfully improved resolution and sensitivity in phantom and in vivo mouse ear tests with ground truth. We also confirmed that the GANs could enhance the resolution and sensitivity of deep-tissue images without ground truth.

Affiliations
- Thanh Dat Le: Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Korea
- Jung-Joon Min: Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264 Seoyang-ro, Hwasun-eup, Hwasun-gun 58128, Jeollanam-do, Korea
- Changho Lee: Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Korea; Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264 Seoyang-ro, Hwasun-eup, Hwasun-gun 58128, Jeollanam-do, Korea

8. Zhang Y, Chen J, Zhang J, Zhu J, Liu C, Sun H, Wang L. Super-Low-Dose Functional and Molecular Photoacoustic Microscopy. Advanced Science 2023;10:e2302486. PMID: 37310419; PMCID: PMC10427362; DOI: 10.1002/advs.202302486.
Abstract
Photoacoustic microscopy can image many biological molecules and nano-agents in vivo via low-scattering ultrasonic sensing. Insufficient sensitivity is a long-standing obstacle for imaging low-absorbing chromophores with less photobleaching or toxicity, reduced perturbation to delicate organs, and more choices of low-power lasers. Here, the photoacoustic probe design is collaboratively optimized and a spectral-spatial filter is implemented, yielding a multi-spectral super-low-dose photoacoustic microscopy (SLD-PAM) that improves sensitivity by ≈33 times. SLD-PAM can visualize microvessels and quantify oxygen saturation in vivo with ≈1% of the maximum permissible exposure, dramatically reducing potential phototoxicity or perturbation of normal tissue function, especially when imaging delicate tissues such as the eye and the brain. Capitalizing on the high sensitivity, direct imaging of deoxyhemoglobin concentration is achieved without spectral unmixing, avoiding wavelength-dependent errors and computational noise. With reduced laser power, SLD-PAM can reduce photobleaching by ≈85%. It is also demonstrated that SLD-PAM achieves similar molecular imaging quality using 80% fewer contrast agents. Therefore, SLD-PAM enables the use of a broader range of low-absorbing nano-agents, small molecules, and genetically encoded biomarkers, as well as more types of low-power light sources across a wide spectrum. It is believed that SLD-PAM offers a powerful tool for anatomical, functional, and molecular imaging.

Affiliations
- Yachao Zhang, Jiangbo Chen, Jingyi Zhu, Chao Liu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR 999077, China
- Jie Zhang, Hongyan Sun: Department of Chemistry and COSDAF (Centre of Super-Diamond and Advanced Films), City University of Hong Kong, Hong Kong SAR 999077, China
- Lidai Wang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR 999077, China; City University of Hong Kong Shenzhen Research Institute, Shenzhen 518057, China

9. Guo T, Xiong K, Yuan B, Zhang Z, Wang L, Zhang Y, Liang C, Liu Z. Homogeneous-resolution photoacoustic microscopy for ultrawide field-of-view neurovascular imaging in Alzheimer's disease. Photoacoustics 2023;31:100516. PMID: 37313359; PMCID: PMC10258506; DOI: 10.1016/j.pacs.2023.100516.
Abstract
Neurovascular imaging is essential for investigating neurodegenerative diseases. However, existing neurovascular imaging technologies suffer from a trade-off between field of view (FOV) and resolution across the whole brain, resulting in inhomogeneous resolution and loss of information. Here, homogeneous-resolution arched-scanning photoacoustic microscopy (AS-PAM), which has an ultrawide FOV covering the entire mouse cerebral cortex, was developed. The neurovasculature was imaged at a homogeneous resolution of 6.9 µm, from the superior sagittal sinus to the middle cerebral artery and caudal rhinal vein, over an FOV of 12 × 12 mm2. Moreover, using AS-PAM, vascular features of the meninges and cortex were quantified in early Alzheimer's disease (AD) and wild-type (WT) mice. The results demonstrated high sensitivity to the pathological progression of AD in terms of vessel tortuosity and branch index. Its high-fidelity imaging capability over a large FOV makes AS-PAM a promising tool for precise visualization and quantification of the brain neurovasculature.

Affiliations
- Ting Guo: School of Medicine, South China University of Technology, Guangzhou 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Kedi Xiong, Bo Yuan, Zhenhui Zhang: MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, South China Normal University, Guangzhou 510631, China; Guangdong Provincial Key Laboratory of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou 510631, China
- Lijuan Wang, Yuhu Zhang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangzhou Key Laboratory of Diagnosis and Treatment for Neurodegenerative Diseases, Guangzhou 510080, China
- Changhong Liang, Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China

10. Zhou LX, Xia Y, Dai R, Liu AR, Zhu SW, Shi P, Song W, Yuan XC. Non-uniform image reconstruction for fast photoacoustic microscopy of histology imaging. Biomedical Optics Express 2023;14:2080-2090. PMID: 37206133; PMCID: PMC10191656; DOI: 10.1364/boe.487622.
Abstract
Photoacoustic microscopic imaging utilizes the characteristic optical absorption properties of pigmented materials in tissues to enable label-free observation of fine morphological and structural features. Because DNA/RNA strongly absorb ultraviolet light, ultraviolet photoacoustic microscopy can highlight cell nuclei without complicated sample preparation such as staining, yielding images comparable to standard pathological images. Further improvement in imaging acquisition speed is critical to advancing the clinical translation of photoacoustic histology imaging. However, improving the imaging speed with additional hardware is hampered by considerable cost and complex design. In this work, considering the heavy redundancy in biological photoacoustic images that overconsumes computing power, we propose an image reconstruction framework called non-uniform image reconstruction (NFSR), which exploits an object detection network to reconstruct low-sampled photoacoustic histology images into high-resolution images. The sampling speed of photoacoustic histology imaging is significantly improved, saving 90% of the time cost. Furthermore, NFSR focuses on reconstructing the region of interest while maintaining PSNR and SSIM evaluation indicators above 99% and reducing the overall computation by 60%.
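
The core idea of non-uniform reconstruction, spending reconstruction effort only where the image content warrants it, can be sketched as below. The paper uses an object detection network to locate regions of interest; in this hedged example a simple intensity threshold stands in for the detector, and bicubic versus nearest-neighbor upsampling stand in for expensive versus cheap reconstruction. All sizes and thresholds are illustrative assumptions.

```python
# Sketch of ROI-focused ("non-uniform") reconstruction on a placeholder frame.
import numpy as np
from scipy.ndimage import zoom, label, find_objects, binary_dilation

rng = np.random.default_rng(1)
low_res = rng.random((128, 128))   # stand-in for an undersampled PA histology frame
scale = 4

# Cheap reconstruction everywhere (nearest-neighbor upsampling).
full = zoom(low_res, scale, order=0)

# Stand-in for the object detector: threshold bright pixels and grow them into blobs.
mask = binary_dilation(low_res > 0.98, iterations=3)
labels, n_rois = label(mask)

# Re-reconstruct only the ROI bounding boxes with the "expensive" method (bicubic).
for r, c in find_objects(labels):
    hi_patch = zoom(low_res[r, c], scale, order=3)
    full[r.start * scale:r.start * scale + hi_patch.shape[0],
         c.start * scale:c.start * scale + hi_patch.shape[1]] = hi_patch

print("output shape:", full.shape, "| ROIs refined:", n_rois)
```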

Affiliations
- Ling Xiao Zhou, Yu Xia, Renxiang Dai, An Ran Liu, Peng Shi, Wei Song, Xiao Cong Yuan: Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Si Wei Zhu: The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin 300121, China

11. He D, Zhou J, Shang X, Tang X, Luo J, Chen SL. De-Noising of Photoacoustic Microscopy Images by Attentive Generative Adversarial Network. IEEE Transactions on Medical Imaging 2023;42:1349-1362. PMID: 37015584; DOI: 10.1109/tmi.2022.3227105.
Abstract
As a hybrid imaging technology, photoacoustic microscopy (PAM) suffers from noise owing to the maximum permissible exposure of laser intensity, attenuation of ultrasound in tissue, and the inherent noise of the transducer. De-noising is an image processing method that reduces noise so that PAM image quality can be recovered. However, previous de-noising techniques usually rely heavily on manually selected parameters, resulting in unsatisfactory and slow de-noising performance for different noisy images, which greatly hinders practical and clinical applications. In this work, we propose a deep learning-based method to remove noise from PAM images without manual selection of settings for different noisy images. An attention-enhanced generative adversarial network is used to extract image features and adaptively remove various levels of Gaussian, Poisson, and Rayleigh noise. The proposed method is demonstrated on both synthetic and real datasets, including phantom (leaf veins) and in vivo (mouse ear blood vessels and zebrafish pigment) experiments. In the in vivo experiments using synthetic datasets, our method achieves improvements of 6.53 dB and 0.26 in peak signal-to-noise ratio and structural similarity metrics, respectively. The results show that, compared with previous PAM de-noising methods, our method performs well in recovering images both qualitatively and quantitatively. In addition, a de-noising processing speed of 0.016 s is achieved for an image with 256 × 256 pixels, which has potential for real-time applications. Our approach is effective and practical for the de-noising of PAM images.
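
Training such a de-noiser requires noisy/clean pairs. The short NumPy sketch below shows one plausible way to synthesize the three noise types named in the abstract (Gaussian, Poisson, and Rayleigh) on a placeholder image; the noise levels are arbitrary assumptions, not the paper's settings.

```python
# Synthesize noisy/clean training pairs with three common noise models.
import numpy as np

rng = np.random.default_rng(42)
clean = rng.random((256, 256)).astype(np.float32)   # stand-in for a clean PAM image in [0, 1]

def add_gaussian(img, sigma=0.05):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_poisson(img, peak=30.0):
    # Scale to pseudo photon counts, draw Poisson samples, scale back.
    return np.clip(rng.poisson(img * peak) / peak, 0.0, 1.0)

def add_rayleigh(img, scale=0.05):
    return np.clip(img + rng.rayleigh(scale, img.shape), 0.0, 1.0)

noisy_pairs = {
    "gaussian": add_gaussian(clean),
    "poisson": add_poisson(clean),
    "rayleigh": add_rayleigh(clean),
}
for name, noisy in noisy_pairs.items():
    print(f"{name}: mean abs error vs. clean = {np.mean(np.abs(noisy - clean)):.4f}")
```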

12. Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomedical Optics Express 2023;14:1777-1799. PMID: 37078052; PMCID: PMC10110324; DOI: 10.1364/boe.483081.
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.

Affiliations
- Ruofan Wang, Jing Zhu, Junhui Shi, Chiye Li: Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
- Jun Xia: Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA

13. Zhang J, Sun X, Li H, Ma H, Duan F, Wu Z, Zhu B, Chen R, Nie L. In vivo characterization and analysis of glioblastoma at different stages using multiscale photoacoustic molecular imaging. Photoacoustics 2023;30:100462. PMID: 36865670; PMCID: PMC9972568; DOI: 10.1016/j.pacs.2023.100462.
Abstract
Simultaneous spatio-temporal characterization of the tumor microvasculature, blood-brain barrier (BBB), and immune activity is pivotal to understanding the evolution of highly aggressive glioblastoma, one of the most common primary brain tumors in adults. However, existing intravital imaging modalities struggle to achieve this in a single step. Here, we present a dual-scale, multi-wavelength photoacoustic imaging approach, used with or without unique optical dyes, to overcome this dilemma. Label-free photoacoustic imaging depicted the multiple heterogeneous features of neovascularization during tumor progression. Combined with the classic Evans blue assay, microelectromechanical system-based photoacoustic microscopy enabled dynamic quantification of BBB dysfunction. Concurrently, using a self-fabricated targeted protein probe (αCD11b-HSA@A1094) for tumor-associated myeloid cells, unparalleled imaging contrast of cell infiltration associated with tumor progression was visualized by differential photoacoustic imaging in the second near-infrared window at dual scale. Our photoacoustic imaging approach has great potential for visualizing the tumor-immune microenvironment to systematically reveal tumor infiltration, heterogeneity, and metastasis in intracranial tumors.

Affiliations
- Jinde Zhang, Xiang Sun, Haosong Ma, Fei Duan, Zhiyou Wu, Ronghe Chen: State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics & Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen 361102, China
- Honghui Li: Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China; Guangdong Cardiovascular Institute, Guangzhou 510000, China
- Bowen Zhu: Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Liming Nie: State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics & Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen 361102, China; Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China

14. Zhao H, Zhou Z, Wu F, Xiang D, Zhao H, Zhang W, Li L, Li Z, Huang J, Hu H, Liu C, Wang T, Liu W, Ma J, Yang F, Wang X, Zheng C. Self-supervised learning enables 3D digital subtraction angiography reconstruction from ultra-sparse 2D projection views: A multicenter study. Cell Rep Med 2022;3:100775. PMID: 36208630; PMCID: PMC9589028; DOI: 10.1016/j.xcrm.2022.100775.
Abstract
3D digital subtraction angiography (DSA) reconstruction from rotational 2D projection X-ray angiography is an important basis for the diagnosis and treatment of intracranial aneurysms (IAs). The gold standard requires approximately 133 projection views for 3D reconstruction. A method that significantly reduces the radiation dose while ensuring reconstruction quality has yet to be developed. We propose a self-supervised learning method to realize 3D-DSA reconstruction from ultra-sparse 2D projections. 202 cases with suspected IAs (100 from one hospital for training and testing, 102 from two other hospitals for external validation) were used to analyze the reconstructed images. Two radiologists scored the images reconstructed from eight projections for the internal and external datasets and identified all 82 lesions with high diagnostic confidence. The radiation dose is approximately 1/16.7 of that of the gold-standard method. Our proposed method can help develop a revolutionary 3D-DSA reconstruction approach for use in the clinic.

Affiliations
- Huangxuan Zhao, Feihong Wu, Dongqiao Xiang, Wei Zhang, Lin Li, Jia Huang, Jinqiang Ma, Fan Yang, Chuansheng Zheng: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Zhenghong Zhou, Wenyu Liu, Xinggang Wang: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Hui Zhao, Zhong Li, Hongyao Hu: Department of Interventional Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Chengbo Liu: Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tao Wang: Department of Respiratory and Critical Care Medicine, University of Chinese Academy of Sciences Shenzhen Hospital, Shenzhen 518107, China

15. Deep learning alignment of bidirectional raster scanning in high speed photoacoustic microscopy. Sci Rep 2022;12:16238. PMID: 36171249; PMCID: PMC9519743; DOI: 10.1038/s41598-022-20378-2.
Abstract
Simultaneous point-by-point raster scanning of optical and acoustic beams has been widely adapted to high-speed photoacoustic microscopy (PAM) using a water-immersible microelectromechanical system or galvanometer scanner. However, when using high-speed water-immersible scanners, the two consecutively acquired bidirectional PAM images are misaligned with each other because of unstable performance, which causes a non-uniform time interval between scanning points. Therefore, only one unidirectionally acquired image is typically used; consequently, the imaging speed is reduced by half. Here, we demonstrate a scanning framework based on a deep neural network (DNN) to correct misaligned PAM images acquired via bidirectional raster scanning. The proposed method doubles the imaging speed compared to that of conventional methods by aligning nonlinear mismatched cross-sectional B-scan photoacoustic images during bidirectional raster scanning. Our DNN-assisted raster scanning framework can further potentially be applied to other raster scanning-based biomedical imaging tools, such as optical coherence tomography, ultrasound microscopy, and confocal microscopy.
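
The alignment problem can be made concrete with a minimal NumPy sketch: B-scans acquired on the return pass are reversed, so at minimum every other line must be flipped, and the remaining nonlinear, scan-dependent distortion is what the paper's DNN corrects. The array sizes below are illustrative.

```python
# Naive bidirectional raster-scan alignment: flip every return-pass B-scan.
import numpy as np

n_bscans, n_alines = 200, 512
rng = np.random.default_rng(3)
stream = rng.random((n_bscans, n_alines))   # raw A-line amplitudes, one row per B-scan

aligned = stream.copy()
aligned[1::2, :] = aligned[1::2, ::-1]      # reverse every odd (return-pass) B-scan

# A constant flip ignores the non-uniform time intervals between scan points,
# so residual distortion remains; that residual is what a learned model would correct.
print("aligned volume shape:", aligned.shape)
```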

16. Meng J, Zhang X, Liu L, Zeng S, Fang C, Liu C. Depth-extended acoustic-resolution photoacoustic microscopy based on a two-stage deep learning network. Biomedical Optics Express 2022;13:4386-4397. PMID: 36032586; PMCID: PMC9408237; DOI: 10.1364/boe.461183.
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) is a major modality of photoacoustic imaging. It can non-invasively provide high-resolution morphological and functional information about biological tissues. However, the image quality of AR-PAM degrades rapidly as the target moves away from the focus. Although some works have extended the high-resolution imaging depth of AR-PAM, most of them require a small focal spot, which is generally not available in a regular AR-PAM system. Therefore, we propose a two-stage deep learning (DL) reconstruction strategy for AR-PAM that adaptively recovers high-resolution photoacoustic images at different out-of-focus depths. A residual U-Net with attention gates was developed to implement the image reconstruction. We carried out phantom and in vivo experiments to optimize the proposed DL network and verify the performance of the reconstruction method. Experimental results demonstrate that our approach extends the depth of focus of AR-PAM from 1 mm to 3 mm at the 4 mJ/cm2 light fluence used in the imaging system. In addition, the imaging resolution in a region 2 mm away from the focus can be improved to a level similar to that of the in-focus area. The proposed method effectively improves the imaging ability of AR-PAM and thus could be used in various biomedical studies requiring greater imaging depth.

Affiliations
- Jing Meng, Xueting Zhang: School of Computer, Qufu Normal University, Rizhao 276826, China
- Liangjian Liu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Silue Zeng: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chihua Fang: Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chengbo Liu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
Jing Meng, Xueting Zhang, and Liangjian Liu contributed equally to this work.

17. Kim J, Kim G, Li L, Zhang P, Kim JY, Kim Y, Kim HH, Wang LV, Lee S, Kim C. Deep learning acceleration of multiscale superresolution localization photoacoustic imaging. Light: Science & Applications 2022;11:131. PMID: 35545614; PMCID: PMC9095876; DOI: 10.1038/s41377-022-00820-w.
Abstract
A superresolution imaging approach that localizes very small targets, such as red blood cells or droplets of injected photoacoustic dye, has significantly improved spatial resolution in various biological and medical imaging modalities. However, this superior spatial resolution is achieved by sacrificing temporal resolution because many raw image frames, each containing the localization target, must be superimposed to form a sufficiently sampled high-density superresolution image. Here, we demonstrate a computational strategy based on deep neural networks (DNNs) to reconstruct high-density superresolution images from far fewer raw image frames. The localization strategy can be applied for both 3D label-free localization optical-resolution photoacoustic microscopy (OR-PAM) and 2D labeled localization photoacoustic computed tomography (PACT). For the former, the required number of raw volumetric frames is reduced from tens to fewer than ten. For the latter, the required number of raw 2D frames is reduced by 12 fold. Therefore, our proposed method has simultaneously improved temporal (via the DNN) and spatial (via the localization method) resolutions in both label-free microscopy and labeled tomography. Deep-learning powered localization PA imaging can potentially provide a practical tool in preclinical and clinical studies requiring fast temporal and fine spatial resolutions.
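
For context, the conventional localization step that this work accelerates can be sketched as follows: point-like sources are detected in each raw frame and their positions are accumulated on a finer grid to form a super-resolved density map. The detection here is a naive local-maximum test on synthetic frames, not the paper's pipeline, and the frame count and grid sizes are assumptions.

```python
# Sketch of localization-based superresolution accumulation on synthetic frames.
import numpy as np
from scipy.ndimage import maximum_filter

upscale = 4
n_frames, h, w = 50, 64, 64
rng = np.random.default_rng(7)
density = np.zeros((h * upscale, w * upscale))

for _ in range(n_frames):
    frame = rng.random((h, w))                                  # stand-in for one raw PA frame
    peaks = (frame == maximum_filter(frame, size=5)) & (frame > 0.99)
    ys, xs = np.nonzero(peaks)
    # Accumulate localized positions on the fine grid (sub-pixel refinement omitted).
    density[ys * upscale, xs * upscale] += 1.0

print("localizations accumulated:", int(density.sum()))
```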

Affiliations
- Jongbeom Kim, Gyuwon Kim, Yeonggeun Kim, Hyung Ham Kim, Seungchul Lee: Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Jin Young Kim, Chulhong Kim: Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea; Opticho, 532 CHANGeUP GROUND, 87 Cheongam-ro, Nam-gu, Pohang, Gyeongsangbuk 37673, Republic of Korea
- Lei Li, Lihong V Wang: Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., MC 138-78, Pasadena, CA 91125, USA
- Pengfei Zhang: School of Precision Instruments and Optoelectronics Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072, China

18. Wang Z, Zhou Y, Hu S. Sparse Coding-Enabled Low-Fluence Multi-Parametric Photoacoustic Microscopy. IEEE Transactions on Medical Imaging 2022;41:805-814. PMID: 34710042; PMCID: PMC9036083; DOI: 10.1109/tmi.2021.3124124.
Abstract
Uniquely capable of simultaneous imaging of the hemoglobin concentration, blood oxygenation, and flow speed at the microvascular level in vivo, multi-parametric photoacoustic microscopy (PAM) has shown considerable impact in biomedicine. However, the multi-parametric PAM acquisition requires dense sampling and thus a high laser pulse repetition rate (up to MHz), which sets a strict limit on the applicable pulse energy due to safety considerations. A similar limitation is shared by high-speed PAM, which also uses lasers with high pulse repetition rates. To achieve high quantitative accuracy besides good structural visualization at low levels of laser fluence in PAM, we have developed a new, sparse coding-based two-step denoising technique. In the setting of intravital brain imaging, we demonstrated that this unsupervised learning approach enabled the reduction of the laser fluence in PAM by 5 times without compromise of the image quality (structural similarity index measure or SSIM: >0.92) and the quantitative accuracy (errors: <4.9%). Offering a significant relaxation in the requirement of PAM on laser fluence while maintaining the quality of structural imaging and accuracy of quantitative measurements, this sparse coding-based approach is expected to facilitate the application and clinical translation of multi-parametric PAM and high-speed PAM, which have a tight photon budget due to either safety considerations or laser source limitations.
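
A generic, single-pass version of sparse-coding denoising (not the authors' two-step method) can be sketched with scikit-learn's dictionary learning on image patches: a dictionary is learned from noisy patches and each patch is re-expressed with a few atoms, which suppresses incoherent noise. The image, patch size, sparsity level, and sklearn parameters below are assumptions for illustration (recent scikit-learn versions are assumed for the max_iter argument).

```python
# Patch-based sparse-coding denoising sketch with scikit-learn dictionary learning.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 96)
clean = 0.5 + 0.5 * np.outer(np.sin(x), np.cos(x))          # smooth synthetic stand-in image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

patch_size = (7, 7)
patches = extract_patches_2d(noisy, patch_size)
data = patches.reshape(patches.shape[0], -1)
mean = data.mean(axis=1, keepdims=True)                      # remove per-patch DC offset
data_centered = data - mean

dico = MiniBatchDictionaryLearning(
    n_components=64, alpha=1.0, batch_size=256, max_iter=20,
    transform_algorithm="omp", transform_n_nonzero_coefs=2, random_state=0,
)
codes = dico.fit(data_centered).transform(data_centered)     # sparse codes (few atoms per patch)
denoised_patches = (codes @ dico.components_) + mean
denoised = reconstruct_from_patches_2d(denoised_patches.reshape(patches.shape), noisy.shape)

print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```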

19. Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. Photoacoustics 2022;25:100314. PMID: 34824976; PMCID: PMC8604673; DOI: 10.1016/j.pacs.2021.100314.
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) enjoys superior spatial resolution and has received intense attention in recent years. Its application, however, has been limited to shallow depths because of the strong scattering of light in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by applying deep learning-enabled image transformation to blurry in vivo mouse vascular images acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the lateral imaging resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM (4.7 µm). The feasibility of the network was evaluated with living mouse ear data, producing superior microvasculature images that outperform blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, it was shown experimentally that the deep learning method retains high resolution at tissue depths beyond one optical transport mean free path. While it can be further improved, the proposed method opens new horizons for expanding the scope of OR-PAM toward deep-tissue imaging and wide applications in biomedicine.

Affiliations
- Shengfu Cheng, Huanhao Li, Puxiang Lai: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China; Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen, Lidai Wang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; City University of Hong Kong Shenzhen Research Institute, Shenzhen, China

20. Targeting visualization of malignant tumor based on the alteration of DWI signal generated by hTERT promoter-driven AQP1 overexpression. Eur J Nucl Med Mol Imaging 2022;49:2310-2322. DOI: 10.1007/s00259-022-05684-1.

21. Ahn J, Kim JY, Choi W, Kim C. High-resolution functional photoacoustic monitoring of vascular dynamics in human fingers. Photoacoustics 2021;23:100282. PMID: 34258222; PMCID: PMC8259315; DOI: 10.1016/j.pacs.2021.100282.
Abstract
Functional imaging of microvascular dynamics in extremities delivers intuitive information for early detection, diagnosis, and prognosis of vascular diseases. High-resolution and high-speed photoacoustic microscopy (PAM) visualizes and measures multiparametric information of microvessel networks in vivo such as morphology, flow, oxygen saturation, and metabolic rate. Here, we demonstrate high-resolution photoacoustic monitoring of vascular dynamics in human fingers. We photoacoustically monitored the position displacement of blood vessels associated with arterial pulsation in human fingers. Then, during and after arterial occlusion, we photoacoustically quantified oxygen consumption and blood perfusion in the fingertips. The results demonstrate that high-resolution functional PAM could be a vital tool in peripheral vascular examination for measuring heart rate, oxygen consumption, and/or blood perfusion.

22. Vu T, DiSpirito A, Li D, Wang Z, Zhu X, Chen M, Jiang L, Zhang D, Luo J, Zhang YS, Zhou Q, Horstmeyer R, Yao J. Deep image prior for undersampling high-speed photoacoustic microscopy. Photoacoustics 2021;22:100266. PMID: 33898247; PMCID: PMC8056431; DOI: 10.1016/j.pacs.2021.100266.
Abstract
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersampling) for increased imaging speed over a large field of view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and a large training dataset with ground truth. Here, we propose the use of deep image prior (DIP) to improve the image quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling its flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in PAM images with as few as 1.4% of the fully sampled pixels on high-speed PAM. Our approach outperforms interpolation, is competitive with a pre-trained supervised DL method, and is readily translated to other high-speed, undersampled imaging modalities.
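
The essence of deep image prior, fitting an untrained network to the measured pixels only and reading off the reconstruction at the unmeasured ones, can be sketched in a few lines of PyTorch. The network, mask, and iteration count below are illustrative assumptions rather than the authors' configuration.

```python
# Minimal deep-image-prior sketch for an undersampled image (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
h = w = 64
full_image = torch.rand(1, 1, h, w)                 # stand-in for a fully sampled PAM image
mask = (torch.rand(1, 1, h, w) < 0.2).float()       # ~20% of pixels actually measured
measured = full_image * mask

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 8, h, w)                          # fixed random input to the network
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(200):                              # untrained net + early stopping act as the prior
    optimizer.zero_grad()
    out = net(z)
    loss = ((out - measured) ** 2 * mask).sum() / mask.sum()   # loss only on measured pixels
    loss.backward()
    optimizer.step()

reconstruction = net(z).detach()                     # values at unmeasured pixels are filled in
print("masked MSE after fitting:", float(loss))
```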

Affiliations
- Tri Vu, Daiwei Li, Xiaoyi Zhu, Maomao Chen, Junjie Yao: Photoacoustic Imaging Lab, Duke University, Durham, NC 27708, USA
- Zixuan Wang, Yu Shrike Zhang: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA
- Laiming Jiang, Qifa Zhou: Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA 90089, USA
- Dong Zhang, Jianwen Luo: Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China

23. Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021;22:100241. PMID: 33717977; PMCID: PMC7932894; DOI: 10.1016/j.pacs.2021.100241.
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.

Affiliations
- Janek Gröhl: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein: German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany; Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany