1
Prakash R, Manwar R, Avanaki K. Evaluation of 10 current image reconstruction algorithms for linear array photoacoustic imaging. Journal of Biophotonics 2024; 17:e202300117. [PMID: 38010300 DOI: 10.1002/jbio.202300117] [Received: 04/06/2023] [Revised: 10/15/2023] [Accepted: 11/09/2023]
Abstract
Various reconstruction algorithms have been implemented for linear array photoacoustic imaging systems with the goal of accurately reconstructing the strength of absorbers within the tissue being imaged. Because the existing algorithms were introduced by different research groups and the context of performance evaluation was not consistent, it is difficult to make a fair comparison between them. In this study, we systematically compared the performance of 10 published image reconstruction algorithms (DAS, UBP, pDAS, DMAS, MV, EIGMV, SLSC, GSC, TR, and FD) using in-vitro phantom data. Evaluations were conducted based on the lateral resolution of the reconstructed images, computational time, target detectability, and noise sensitivity. We anticipate that the outcome of this study will assist researchers in selecting appropriate algorithms for their linear array PA imaging applications.
Collapse
Affiliation(s)
- Ravi Prakash
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Rayyan Manwar
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Kamran Avanaki
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Dermatology, University of Illinois at Chicago, Chicago, Illinois, USA
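Editorial aside for readers comparing the ten beamformers above: the simplest of them, delay-and-sum (DAS), delays each channel by the one-way time of flight from pixel to element and sums across the aperture. The following is an illustrative NumPy sketch under assumed parameters (element layout, pixel grid, and sampling rate are hypothetical, not the configuration evaluated in the paper):

```python
import numpy as np

def das_beamform(channel_data, element_x, fs, c, image_x, image_z):
    """Delay-and-sum (DAS) reconstruction for a linear-array
    photoacoustic system.

    channel_data : (n_elements, n_samples) received RF data
    element_x    : (n_elements,) lateral element positions [m]
    fs           : sampling frequency [Hz]
    c            : speed of sound [m/s]
    image_x, image_z : 1-D pixel grids [m]
    """
    n_elements, n_samples = channel_data.shape
    image = np.zeros((len(image_z), len(image_x)))
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # one-way time of flight from this pixel to each element
            dist = np.sqrt((element_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samples
            # sum the delayed channel samples across the aperture
            image[iz, ix] = channel_data[np.flatnonzero(valid), idx[valid]].sum()
    return image
```

The other nine algorithms compared in the paper differ mainly in how these delayed channel signals are weighted or combined (e.g., DMAS multiplies channel pairs, MV computes adaptive apodization weights, SLSC measures spatial coherence across the aperture instead of summing amplitudes).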
2
Gubbi MR, Assis F, Chrispin J, Bell MAL. Deep learning in vivo catheter tip locations for photoacoustic-guided cardiac interventions. Journal of Biomedical Optics 2024; 29:S11505. [PMID: 38076439 PMCID: PMC10704189 DOI: 10.1117/1.jbo.29.s1.s11505] [Received: 06/23/2023] [Revised: 09/27/2023] [Accepted: 10/23/2023]
Abstract
Significance: Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, photoacoustic imaging can potentially be combined with robotic visual servoing, with initial demonstrations requiring segmentation of catheter tips. However, typical segmentation algorithms applied to conventional image formation methods are susceptible to problematic reflection artifacts, which compromise the required detectability and localization of the catheter tip.
Aim: We describe a convolutional neural network and the associated customizations required to successfully detect and localize in vivo photoacoustic signals from a catheter tip received by a phased array transducer, which is a common transducer for transthoracic cardiac imaging applications.
Approach: We trained a network with simulated photoacoustic channel data to identify point sources, which appropriately model photoacoustic signals from the tip of an optical fiber inserted in a cardiac catheter. The network was validated with an independent simulated dataset, then tested on data from the tips of cardiac catheters housing optical fibers and inserted into ex vivo and in vivo swine hearts.
Results: When validated with simulated data, the network achieved an F1 score of 98.3% and Euclidean errors (mean ± one standard deviation) of 1.02 ± 0.84 mm for target depths of 20 to 100 mm. When tested on ex vivo and in vivo data, the network achieved F1 scores as large as 100.0%. In addition, for target depths of 40 to 90 mm in the ex vivo and in vivo data, up to 86.7% of axial and 100.0% of lateral position errors were lower than the axial and lateral resolution, respectively, of the phased array transducer.
Conclusions: These results demonstrate the promise of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications.
Collapse
Affiliation(s)
- Mardava R. Gubbi
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Fabrizio Assis
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Jonathan Chrispin
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Muyinatu A. Lediju Bell
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
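The F1 scores and Euclidean localization errors reported in this abstract are standard detection metrics. As an illustrative sketch only (the greedy nearest-neighbor matching rule and distance tolerance are hypothetical choices, not the authors' evaluation protocol):

```python
import numpy as np

def match_detections(true_pts, pred_pts, tol):
    """Greedily match each predicted source location to the nearest
    unmatched ground-truth location; a match within `tol` counts as a
    true positive. Returns (tp, fp, fn, euclidean_errors)."""
    true_pts = np.asarray(true_pts, dtype=float)
    unmatched = set(range(len(true_pts)))
    errors = []
    tp = fp = 0
    for p in np.asarray(pred_pts, dtype=float):
        best_i, best_d = None, np.inf
        for i in unmatched:
            d = float(np.linalg.norm(p - true_pts[i]))
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None and best_d <= tol:
            unmatched.discard(best_i)
            errors.append(best_d)
            tp += 1
        else:
            fp += 1  # prediction with no ground-truth point in range
    return tp, fp, len(unmatched), errors

def f1_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2.0 * precision * recall / denom if denom else 0.0
```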
3
Sharma A, Oluyemi E, Myers K, Ambinder E, Bell MAL. Spatial Coherence Approaches to Distinguish Suspicious Mass Contents in Fundamental and Harmonic Breast Ultrasound Images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024; 71:70-84. [PMID: 37956000 PMCID: PMC10851341 DOI: 10.1109/tuffc.2023.3332207]
Abstract
Compared to fundamental B-mode imaging, coherence-based beamforming and harmonic imaging are independently known to reduce acoustic clutter, distinguish solid from fluid content in indeterminate breast masses, and thereby reduce unnecessary biopsies during breast cancer diagnosis. However, a systematic investigation of independent and combined coherence beamforming and harmonic imaging approaches is necessary for the clinical deployment of the most optimal approach. Therefore, we compare the performance of fundamental and harmonic images created with short-lag spatial coherence (SLSC), M-weighted SLSC (M-SLSC), SLSC combined with robust principal component analysis with no M-weighting (r-SLSC), and r-SLSC with M-weighting (R-SLSC), relative to traditional fundamental and harmonic B-mode images, when distinguishing solid from fluid breast masses. Raw channel data acquired from 40 total breast masses (28 solid, 7 fluid, 5 mixed) were beamformed and analyzed. The contrast of fluid masses was better with fundamental rather than harmonic coherence imaging, due to the lower spatial coherence within the fluid masses in the fundamental coherence images. Relative to SLSC imaging, M-SLSC, r-SLSC, and R-SLSC imaging provided similar contrast across multiple masses (with the exception of clinically challenging complicated cysts) and minimized the range of generalized contrast-to-noise ratios (gCNRs) of fluid masses, yet required additional computational resources. Among the eight coherence imaging modes compared, fundamental SLSC imaging best identified fluid versus solid breast mass contents, outperforming fundamental and harmonic B-mode imaging. With fundamental SLSC images, the specificity and sensitivity to identify fluid masses using the reader-independent metrics of contrast difference, mean lag-one coherence (LOC), and gCNR were 0.86 and 1, 1 and 0.89, and 1 and 1, respectively. Results demonstrate that fundamental SLSC imaging and gCNR (or LOC if no coherence image or background region of interest is introduced) have the greatest potential to impact clinical decisions and improve the diagnostic certainty of breast mass contents. These observations are additionally anticipated to extend to masses in other organs.
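The gCNR metric referenced in this abstract has a compact definition: one minus the overlap between the pixel-amplitude histograms of a target region and a background region, so it ranges from 0 (indistinguishable) to 1 (perfectly separable). A minimal sketch (the bin count is a hypothetical choice, not taken from the paper):

```python
import numpy as np

def gcnr(target, background, bins=256):
    """Generalized contrast-to-noise ratio (gCNR): one minus the
    overlap of the normalized pixel-amplitude histograms of a target
    region and a background region, computed over shared bin edges."""
    vmin = min(target.min(), background.min())
    vmax = max(target.max(), background.max())
    edges = np.linspace(vmin, vmax, bins + 1)
    pt, _ = np.histogram(target, bins=edges)
    pb, _ = np.histogram(background, bins=edges)
    # normalize counts to probability mass functions
    pt = pt / pt.sum()
    pb = pb / pb.sum()
    overlap = np.minimum(pt, pb).sum()
    return 1.0 - overlap
```

Because gCNR depends only on histogram overlap, it is insensitive to the dynamic-range transformations that can inflate conventional contrast metrics, which is part of why it serves as a reader-independent measure here.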
4
Jin H, Zheng Z, Cui Z, Jiang Y, Chen G, Li W, Wang Z, Wang J, Yang C, Song W, Chen X, Zheng Y. A flexible optoacoustic blood 'stethoscope' for noninvasive multiparametric cardiovascular monitoring. Nature Communications 2023; 14:4692. [PMID: 37542045 PMCID: PMC10403590 DOI: 10.1038/s41467-023-40181-5] [Received: 04/15/2023] [Accepted: 07/13/2023]
Abstract
Quantitative and multiparametric blood analysis is of great clinical importance in cardiovascular disease diagnosis. Although there are various methods to extract blood information, they often require invasive procedures, lack continuity, involve bulky instruments, or have complicated testing procedures. Flexible sensors can realize on-skin assessment of several vital signals but generally exhibit limited ability to monitor blood characteristics. Here, we report a flexible optoacoustic blood 'stethoscope' for noninvasive, multiparametric, and continuous cardiovascular monitoring without complicated procedures. The optoacoustic blood 'stethoscope' features light delivery elements to illuminate blood and piezoelectric acoustic elements to capture the light-induced acoustic waves. We show that the optoacoustic blood 'stethoscope' can adhere to the skin for continuous and noninvasive in-situ monitoring of multiple cardiovascular biomarkers, including hypoxia, intravascular exogenous agent concentration decay, and hemodynamics, which can be further visualized with a tailored 3D algorithm. Demonstrations in both in vivo animal trials and human subjects highlight the potential of the optoacoustic blood 'stethoscope' for cardiovascular disease diagnosis and prediction.
Collapse
Affiliation(s)
- Haoran Jin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- The State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, 310027, China
- Zesheng Zheng
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Institute of Microelectronics, Agency for Science, Technology and Research, Singapore, 138634, Singapore
- Zequn Cui
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Ying Jiang
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Geng Chen
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Wenlong Li
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Zhimin Wang
- School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 637371, Singapore
- Jilei Wang
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Chuanshi Yang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Weitao Song
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Xiaodong Chen
- School of Materials Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Yuanjin Zheng
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
5
Zhang J, Wiacek A, Feng Z, Ding K, Lediju Bell MA. Flexible array transducer for photoacoustic-guided interventions: phantom and ex vivo demonstrations. Biomedical Optics Express 2023; 14:4349-4368. [PMID: 37799699 PMCID: PMC10549736 DOI: 10.1364/boe.491406] [Received: 03/27/2023] [Revised: 06/29/2023] [Accepted: 07/06/2023]
Abstract
Photoacoustic imaging has demonstrated recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, while a flexible array is able to deform and provide complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. The target depth agreement with ground truth ranged from 98.35% to 99.69%. The mean lateral and axial target sizes when imaging 600 μm-core-diameter optical fibers inserted within the phantoms ranged from 0.98 to 2.14 mm and from 1.61 to 2.24 mm, respectively. The mean ± one standard deviation of the lateral and axial target sizes when surrounded by liver tissue were 1.80 ± 0.48 mm and 2.17 ± 0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged from 6.92 to 24.42 dB, from 46.50 to 67.51 dB, and from 0.76 to 1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.
Collapse
Affiliation(s)
- Jiaxin Zhang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alycen Wiacek
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ziwei Feng
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD 21287, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
6
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. Journal of Biomedical Optics 2022; 27:106004. [PMID: 36209354 PMCID: PMC9547608 DOI: 10.1117/1.jbo.27.10.106004] [Received: 04/19/2022] [Accepted: 08/30/2022]
Abstract
Significance: Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect.
Aim: Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium.
Approach: Data-driven models were trained with a two-dimensional (2D) blood-vessel phantom and a three-dimensional (3D) numerical breast phantom with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (i.e., fluence compensation) with in-silico and in-vivo datasets.
Results: The results indicated that FD U-Net-based deconvolution improves peak signal-to-noise ratio by about 10% over the reconstructed optoacoustic images. Further, the DL models can indeed highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction.
Conclusions: The DL methods were able to compensate for the nonlinear optical fluence distribution more effectively and improve optoacoustic image quality.
Collapse
Affiliation(s)
- Arumugaraj Madasamy
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
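For context on what the networks in this entry aim to replace: classical fluence correction divides the reconstructed image by an estimate of the optical fluence, since the measured initial pressure is proportional to the product of the absorption coefficient and the local fluence. A minimal first-order sketch assuming a purely depth-dependent exponential fluence model (the effective attenuation value and grid are hypothetical; the paper's DL models instead learn heterogeneous, realistic fluence distributions):

```python
import numpy as np

def compensate_fluence(image, depths_m, mu_eff_per_m):
    """First-order fluence compensation: model the depth-dependent
    optical fluence as an exponential decay with effective attenuation
    mu_eff, then divide it out of the reconstructed image so that
    pixel values better approximate the optical absorption map.

    image        : (n_z, n_x) reconstructed optoacoustic image
    depths_m     : (n_z,) pixel depths [m]
    mu_eff_per_m : effective optical attenuation coefficient [1/m]
    """
    fluence = np.exp(-mu_eff_per_m * depths_m)  # (n_z,) decay profile
    return image / fluence[:, None]             # broadcast over columns
```

This simple model breaks down in heterogeneous tissue, which is precisely the regime where the diffusion-equation and DL approaches compared in the paper become necessary.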