1
Wang N, Chen T, Liu C, Meng J. Intelligent skin-removal photoacoustic computed tomography for human based on deep learning. Journal of Biophotonics 2024:e202400197. [PMID: 39092484] [DOI: 10.1002/jbio.202400197]
Abstract
Photoacoustic computed tomography (PACT) offers centimeter-level imaging ability and can be used to image the human body. However, strong photoacoustic signals from the skin obscure deep-tissue information, hindering the frontal display and analysis of photoacoustic images in deep regions of interest. We therefore propose a 2.5D deep learning model, based on a feature pyramid structure and single-type skin annotation, to extract the skin region, and design a mask-generation algorithm to remove the skin automatically. PACT imaging experiments on human peripheral blood vessels verified the correctness of our skin-removal method. Compared with previous studies, our method is highly robust to uneven illumination, irregular skin boundaries, and reconstruction artifacts in the images; the reconstruction errors of PACT images decreased by 20%-90%, with a 1.65 dB improvement in the signal-to-noise ratio. This study may provide a promising route toward high-definition PACT imaging of deep tissues.
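The removal step this abstract describes amounts to zeroing out the pixels the segmentation network labels as skin. A minimal sketch of that step (the function name, array interface, and dilation margin are assumptions for illustration, not the authors' algorithm):

```python
import numpy as np

def remove_skin(pa_image, skin_mask, margin=2):
    """Zero out network-labelled skin pixels (hypothetical interface).

    pa_image: 2D photoacoustic image; skin_mask: same-shape binary mask
    predicted by the segmentation network; margin: extra pixels of naive
    binary dilation to cover uncertain skin boundaries.
    """
    mask = skin_mask.astype(bool)
    for _ in range(margin):
        # dilate by OR-ing the four axis-aligned neighbours
        # (np.roll wraps at the edges, acceptable for a sketch)
        mask = (mask
                | np.roll(mask, 1, axis=0) | np.roll(mask, -1, axis=0)
                | np.roll(mask, 1, axis=1) | np.roll(mask, -1, axis=1))
    cleaned = pa_image.copy()
    cleaned[mask] = 0.0
    return cleaned
```

A real pipeline would use a proper morphological dilation (e.g., `scipy.ndimage.binary_dilation`) instead of the wrap-around neighbour OR shown here.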
Affiliation(s)
- Ning Wang: School of Computer, Qufu Normal University, Rizhao, China
- Tao Chen: School of Optics and Photonics, Beijing Institute of Technology, Beijing, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chengbo Liu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jing Meng: School of Computer, Qufu Normal University, Rizhao, China
2
Sankepalle DM, Anthony B, Mallidi S. Visual inertial odometry enabled 3D ultrasound and photoacoustic imaging. Biomedical Optics Express 2023; 14:2756-2772. [PMID: 37342691] [PMCID: PMC10278605] [DOI: 10.1364/boe.489614]
Abstract
There is an increasing need for 3D ultrasound and photoacoustic (USPA) imaging technology for real-time monitoring of dynamic changes in vasculature or molecular markers in various malignancies. Current 3D USPA systems rely on expensive 3D transducer arrays, mechanical arms, or limited-range linear stages to reconstruct the 3D volume of the imaged object. In this study, we developed, characterized, and demonstrated an economical, portable, and clinically translatable handheld device for 3D USPA imaging. An off-the-shelf, low-cost visual odometry system (the Intel RealSense T265 camera, equipped with simultaneous localization and mapping technology) was attached to the USPA transducer to track freehand movements during imaging. Specifically, we integrated the T265 camera into a commercially available USPA imaging probe to acquire 3D images and compared them to the reconstructed 3D volume acquired using a linear stage (ground truth). We were able to reliably detect 500 µm step sizes with 90.46% accuracy. Various users evaluated the potential of handheld scanning, and the volume calculated from the motion-compensated image was not significantly different from the ground truth. Overall, our results established, for the first time, the use of an off-the-shelf, low-cost visual odometry system for freehand 3D USPA imaging that can be seamlessly integrated into several photoacoustic imaging systems for various clinical applications.
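As a rough illustration of the motion-compensated reconstruction step described above, tracked probe positions can be used to bin freehand 2D frames into a regular 3D grid. The sketch below assumes motion along a single elevational axis and a hypothetical interface; a real pipeline would use the full 6-DoF pose from the T265:

```python
import numpy as np

def frames_to_volume(frames, positions_um, step_um=500.0):
    """Bin freehand 2D frames into slices of a 3D volume.

    frames: (n, h, w) array of 2D US/PA frames; positions_um: (n,) tracked
    elevational position of each frame in micrometers; step_um: slice
    pitch (500 um matches the step size characterized in the paper).
    """
    frames = np.asarray(frames, dtype=float)
    pos = np.asarray(positions_um, dtype=float)
    idx = np.round((pos - pos.min()) / step_um).astype(int)  # slice index
    vol = np.zeros((idx.max() + 1,) + frames.shape[1:])
    hits = np.zeros(idx.max() + 1)
    for frame, k in zip(frames, idx):
        vol[k] += frame          # average frames that land in one slice
        hits[k] += 1
    hits[hits == 0] = 1          # leave empty slices as zeros
    return vol / hits[:, None, None]
```

Frames whose tracked positions round to the same slice are averaged, which is one simple way to exploit the redundancy of freehand scanning.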
Affiliation(s)
- Brian Anthony: Institute of Medical Engineering and Sciences, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Srivalleesha Mallidi: Department of Biomedical Engineering, Tufts University, Medford, MA 02155, USA; Wellman Center for Photomedicine, Harvard Medical School, Boston, MA 02115, USA
3
Almas T, Haider R, Malik J, Mehmood A, Alvi A, Naz H, Satti DI, Zaidi SMJ, AlSubai AK, AlNajdi S, Alsufyani R, Ramtohul RK, Almesri A, Alsufyani M, Al-Bunnia AH, Alghamdi HAS, Sattar Y, Alraies MC, Raina S. Nanotechnology in interventional cardiology: A state-of-the-art review. IJC Heart & Vasculature 2022; 43:101149. [DOI: 10.1016/j.ijcha.2022.101149]
4
Meng J, Zhang X, Liu L, Zeng S, Fang C, Liu C. Depth-extended acoustic-resolution photoacoustic microscopy based on a two-stage deep learning network. Biomedical Optics Express 2022; 13:4386-4397. [PMID: 36032586] [PMCID: PMC9408237] [DOI: 10.1364/boe.461183]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) is a major modality of photoacoustic imaging. It can non-invasively provide high-resolution morphological and functional information about biological tissue. However, the image quality of AR-PAM degrades rapidly as targets move away from the focus. Although some work has been done to extend the high-resolution imaging depth of AR-PAM, most of it requires a small focal spot, which is generally not available in a regular AR-PAM system. We therefore propose a two-stage deep learning (DL) reconstruction strategy for AR-PAM that adaptively recovers high-resolution photoacoustic images at different out-of-focus depths. A residual U-Net with attention gates was developed to implement the image reconstruction. We carried out phantom and in vivo experiments to optimize the proposed DL network and verify the performance of the reconstruction method. Experimental results demonstrate that our approach extends the depth of focus of AR-PAM from 1 mm to 3 mm under the 4 mJ/cm2 light energy used in the imaging system. In addition, the imaging resolution in regions 2 mm away from the focus can be improved to a level similar to the in-focus area. The proposed method effectively improves the imaging ability of AR-PAM and could thus be used in biomedical studies requiring greater imaging depth.
Affiliation(s)
- Jing Meng (contributed equally): School of Computer, Qufu Normal University, Rizhao 276826, China
- Xueting Zhang (contributed equally): School of Computer, Qufu Normal University, Rizhao 276826, China
- Liangjian Liu (contributed equally): Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Silue Zeng: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chihua Fang: Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chengbo Liu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
5
Jin Y, Yin Y, Li C, Liu H, Shi J. Non-Invasive Monitoring of Human Health by Photoacoustic Spectroscopy. Sensors 2022; 22:1155. [PMID: 35161900] [PMCID: PMC8839463] [DOI: 10.3390/s22031155]
Abstract
For certain diseases, the continuous long-term monitoring of the physiological condition is crucial. Therefore, non-invasive monitoring methods have attracted widespread attention in health care. This review aims to discuss the non-invasive monitoring technologies for human health based on photoacoustic spectroscopy. First, the theoretical basis of photoacoustic spectroscopy and related devices are reported. Furthermore, this article introduces the monitoring methods for blood glucose, blood oxygen, lipid, and tumors, including differential continuous-wave photoacoustic spectroscopy, microscopic photoacoustic spectroscopy, mid-infrared photoacoustic detection, wavelength-modulated differential photoacoustic spectroscopy, and others. Finally, we present the limitations and prospects of photoacoustic spectroscopy.
Affiliation(s)
- Yongyong Jin: College of Automation, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, China; Zhejiang Lab, Hangzhou 311121, Zhejiang, China
- Yonggang Yin: Zhejiang Lab, Hangzhou 311121, Zhejiang, China
- Chiye Li: Zhejiang Lab, Hangzhou 311121, Zhejiang, China
- Hongying Liu (corresponding author): College of Automation, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, China
- Junhui Shi (corresponding author): Zhejiang Lab, Hangzhou 311121, Zhejiang, China
6
Wu M, Awasthi N, Rad NM, Pluim JPW, Lopata RGP. Advanced Ultrasound and Photoacoustic Imaging in Cardiology. Sensors 2021; 21:7947. [PMID: 34883951] [PMCID: PMC8659598] [DOI: 10.3390/s21237947]
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death worldwide. Effective management and treatment of CVDs rely heavily on accurate diagnosis of the disease. As the most common imaging technique for clinical diagnosis of CVDs, ultrasound (US) imaging has been intensively explored and, especially with the introduction of deep learning (DL) techniques, has advanced tremendously in recent years. Photoacoustic imaging (PAI) is one of the most promising new imaging methods to complement existing clinical modalities. It can characterize different tissue compositions based on optical-absorption contrast and can thus assess the functionality of the tissue. This paper reviews major technological developments in both US imaging (combined with deep learning techniques) and PA imaging as applied to the diagnosis of CVDs.
Affiliation(s)
- Min Wu: Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Navchetan Awasthi: Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e) and Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Nastaran Mohammadian Rad: Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e) and Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Josien P. W. Pluim: Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Richard G. P. Lopata: Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
7
Yang X, Chen YH, Xia F, Sawan M. Photoacoustic imaging for monitoring of stroke diseases: A review. Photoacoustics 2021; 23:100287. [PMID: 34401324] [PMCID: PMC8353507] [DOI: 10.1016/j.pacs.2021.100287]
Abstract
Stroke is the leading cause of death and disability after ischemic heart disease, yet a non-invasive technique for long-term monitoring in stroke diagnosis and therapy is still lacking. Photoacoustic imaging reconstructs images of an object from the acoustic waves generated by optical absorption and the resulting thermoelastic expansion, combining optical contrast with acoustic propagation. This emerging functional imaging method is non-invasive and, owing to its precision, particularly attractive for stroke monitoring. In this paper, we review the achievements of this technology and its applications to stroke, as well as its development status in both animal and human studies. Various photoacoustic systems and multi-modality photoacoustic imaging approaches are also introduced with a view to potential clinical applications. Finally, the challenges of photoacoustic imaging for monitoring stroke are discussed.
Affiliation(s)
- Xi Yang: Zhejiang University, Hangzhou 310024, Zhejiang, China; CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou 310024, Zhejiang, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, Zhejiang, China
- Yun-Hsuan Chen: CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou 310024, Zhejiang, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, Zhejiang, China
- Fen Xia: Zhejiang University, Hangzhou 310024, Zhejiang, China; CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou 310024, Zhejiang, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, Zhejiang, China
- Mohamad Sawan (corresponding author): CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou 310024, Zhejiang, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, Zhejiang, China
8
Mukaddim RA, Ahmed R, Varghese T. Subaperture Processing-Based Adaptive Beamforming for Photoacoustic Imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2021; 68:2336-2350. [PMID: 33606629] [PMCID: PMC8330397] [DOI: 10.1109/tuffc.2021.3060371]
Abstract
Delay-and-sum (DAS) beamformers, when applied to photoacoustic (PA) image reconstruction, produce strong sidelobes due to the absence of transmit focusing. Consequently, DAS PA images are often severely degraded by strong off-axis clutter. For preclinical in vivo cardiac PA imaging, these noise artifacts hamper the detectability and interpretation of PA signals from the myocardial wall, which are crucial for studying blood-dominated cardiac pathology and for complementing functional information derived from ultrasound imaging. In this article, we present PA subaperture processing (PSAP), an adaptive beamforming method, to mitigate these image-degrading effects. In PSAP, a pair of DAS-reconstructed images is formed by splitting the received channel data into two complementary, non-overlapping subapertures. A weighting matrix is then derived by analyzing the correlation between the subaperture beamformed images and multiplied with the full-aperture DAS PA image to reduce sidelobes and incoherent clutter. We validated PSAP with numerical simulations of point-target, diffuse-inclusion, and microvasculature imaging, and with in vivo feasibility studies on five healthy murine models. Qualitative and quantitative analyses demonstrate improved PA image quality with PSAP compared to DAS and coherence-factor-weighted DAS (DAS-CF). PSAP showed improved target detectability, with a higher generalized contrast-to-noise ratio (gCNR) in vasculature simulations (19.61% and 19.53% higher than DAS and DAS-CF, respectively), higher image contrast quantified by the contrast ratio (CR) (e.g., 89.26% and 11.90% higher than DAS and DAS-CF in vasculature simulations), and improved clutter suppression.
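The core weighting idea in this abstract (split the aperture in two, beamform each half, and keep only the energy on which both halves agree) can be sketched as below. This is a simplified per-pixel correlation under an assumed data layout, not the authors' exact weighting matrix, which correlates over local windows:

```python
import numpy as np

def psap(aligned, eps=1e-12):
    """Sketch of subaperture-processing (PSAP-style) beamforming.

    aligned: (n_channels, n_pixels) channel data already delayed so that a
    mean over channels yields the DAS value at each pixel.
    """
    n = aligned.shape[0]
    full = aligned.mean(axis=0)             # full-aperture DAS image
    sub_a = aligned[: n // 2].mean(axis=0)  # first subaperture DAS image
    sub_b = aligned[n // 2:].mean(axis=0)   # complementary subaperture
    # per-pixel normalized correlation of the two subaperture images;
    # a real implementation would correlate over a local window
    w = (sub_a * sub_b) / (np.abs(sub_a) * np.abs(sub_b) + eps)
    w = np.clip(w, 0.0, 1.0)                # suppress anti-correlated clutter
    return full * w
```

Coherent on-axis signal appears identically in both subapertures (weight near 1), while off-axis clutter tends to differ between them (weight driven toward 0).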
9
Kumar R, Gulia K. The convergence of nanotechnology-stem cell, nanotopography-mechanobiology, and biotic-abiotic interfaces: Nanoscale tools for tackling the top killer, arteriosclerosis, strokes, and heart attacks. Nano Select 2021. [DOI: 10.1002/nano.202000192]
Affiliation(s)
- Rajiv Kumar: NIET, National Institute of Medical Science, Rajasthan, India
- Kiran Gulia: Materials and Manufacturing, School of Engineering, University of Wolverhampton, Wolverhampton, England, UK
10
Wiacek A, Lediju Bell MA. Photoacoustic-guided surgery from head to toe [Invited]. Biomedical Optics Express 2021; 12:2079-2117. [PMID: 33996218] [PMCID: PMC8086464] [DOI: 10.1364/boe.417984]
Abstract
Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a method to provide critical guidance for multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients in operating rooms and interventional suites worldwide. We additionally include a critical discussion of the complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.
Affiliation(s)
- Alycen Wiacek: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell: Departments of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
11
Mukaddim RA, Varghese T. Spatiotemporal Coherence Weighting for In Vivo Cardiac Photoacoustic Image Beamformation. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2021; 68:586-598. [PMID: 32795968] [PMCID: PMC8011040] [DOI: 10.1109/tuffc.2020.3016900]
Abstract
Photoacoustic (PA) image reconstruction generally utilizes delay-and-sum (DAS) beamforming of acoustic waves received from tissue irradiated with optical illumination. However, non-adaptive DAS-reconstructed cardiac PA images exhibit temporally varying noise, which reduces myocardial PA signal specificity and makes image interpretation difficult. Adaptive beamforming algorithms such as minimum variance (MV) with coherence factor (CF) weighting have previously been reported to improve DAS image quality. In this article, we report an adaptive beamforming algorithm that extends CF weighting to the temporal domain for preclinical cardiac PA imaging (PAI). The proposed spatiotemporal coherence factor (STCF) considers multiple temporally adjacent image-acquisition events during beamforming and cancels out signals with low spatial and temporal coherence, resulting in stronger background-noise cancellation while preserving the main features of interest (the myocardial wall) in the resulting PA images. STCF was validated using numerical simulations and in vivo ECG- and respiratory-signal-gated cardiac PAI in healthy murine hearts. The simulation results demonstrate that STCF weighting outperforms DAS and MV beamforming, with and without CF weighting, under different levels of inherent contrast, acoustic attenuation, optical scattering, and channel-data signal-to-noise ratio (SNR). The performance improvement is attributed to greater sidelobe reduction (at least 5 dB) and SNR improvement (at least 10 dB). Improved myocardial signal specificity and stronger signal rejection in the left ventricular chamber and acoustic-gel region are observed with STCF in cardiac PAI.
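For context, the classic coherence factor this work extends is CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2), computed across receive channels. The sketch below combines a per-frame spatial CF with a CF computed across adjacent frames; this particular combination is our hedged reading of the idea, not the published STCF formulation:

```python
import numpy as np

def coherence_factor(x, eps=1e-12):
    """CF along axis 0: coherent-sum energy over incoherent-sum energy."""
    n = x.shape[0]
    return np.abs(x.sum(axis=0)) ** 2 / (n * (np.abs(x) ** 2).sum(axis=0) + eps)

def stcf_image(frames, eps=1e-12):
    """Sketch of spatiotemporal CF weighting (assumed combination).

    frames: (n_frames, n_channels, n_pixels) delayed channel data for the
    current frame and its temporal neighbours; frame 0 is reconstructed.
    """
    das = frames.mean(axis=1)                      # per-frame DAS images
    spatial_cf = coherence_factor(frames[0], eps)  # coherence across channels
    temporal_cf = coherence_factor(das, eps)       # coherence across frames
    return das[0] * spatial_cf * temporal_cf
```

Both factors lie in [0, 1], so pixels that are coherent across channels and stable across adjacent frames are preserved, while temporally fluctuating noise is suppressed.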
12
Lafci B, Mercep E, Morscher S, Dean-Ben XL, Razansky D. Deep Learning for Automatic Segmentation of Hybrid Optoacoustic Ultrasound (OPUS) Images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2021; 68:688-696. [PMID: 32894712] [DOI: 10.1109/tuffc.2020.3022324]
Abstract
The highly complementary information provided by multispectral optoacoustics and pulse-echo ultrasound has recently prompted the development of hybrid imaging instruments that bring together the unique contrast advantages of both modalities. In the hybrid optoacoustic ultrasound (OPUS) combination, images retrieved by one modality may further be used to improve the reconstruction accuracy of the other. Image segmentation plays a major role in this regard, as it can improve image quality and quantification by facilitating the modeling of light and sound propagation through the imaged tissues and the surrounding coupling medium. Here, we propose an automated approach for surface segmentation in whole-body mouse OPUS imaging using a deep convolutional neural network (CNN). The method shows robust performance, attaining accurate segmentation of the animal boundary in both optoacoustic and pulse-echo ultrasound images, as evinced by quantitative evaluation using the Dice coefficient.
13
Abstract
Photoacoustic imaging has demonstrated its potential for diagnosis over the last few decades. In recent years, its unique imaging capabilities, such as detecting structural, functional and molecular information in deep regions with optical contrast and ultrasound resolution, have opened up many opportunities for photoacoustic imaging to be used during image-guided interventions. Numerous studies have investigated the capability of photoacoustic imaging to guide various interventions such as drug delivery, therapies, surgeries, and biopsies. These studies have demonstrated that photoacoustic imaging can guide these interventions effectively and non-invasively in real-time. In this minireview, we will elucidate the potential of photoacoustic imaging in guiding active and passive drug deliveries, photothermal therapy, and other surgeries and therapies using endogenous and exogenous contrast agents including organic, inorganic, and hybrid nanoparticles, as well as needle-based biopsy procedures. The advantages of photoacoustic imaging in guided interventions will be discussed. It will, therefore, show that photoacoustic imaging has great potential in real-time interventions due to its advantages over current imaging modalities like computed tomography, magnetic resonance imaging, and ultrasound imaging.
Affiliation(s)
- Madhumithra S Karthikesh: Bioengineering Program and Institute for Bioengineering Research, University of Kansas, Lawrence, KS 66045, USA
- Xinmai Yang: Bioengineering Program and Institute for Bioengineering Research, University of Kansas, Lawrence, KS 66045, USA; Department of Mechanical Engineering, University of Kansas, Lawrence, KS 66045, USA
14
van Soest G, Desjardins A. Interventional photoacoustics: using light to sound out the path to safe, effective interventions. Physics in Medicine & Biology 2019; 64:220401. [DOI: 10.1088/1361-6560/ab50d8]