1. Wang Y, Bao L, Li B, Ma Z, Zhao Y, Liu J, Luan J, Yu Y. Compact photoacoustic endoscopy by measuring initial photoacoustic pressure using phase-shift interferometry. Photoacoustics 2025;43:100710. PMID: 40124585; PMCID: PMC11930450; DOI: 10.1016/j.pacs.2025.100710. Received 09/09/2024; revised 12/19/2024; accepted 03/03/2025.
Abstract
This paper demonstrates a novel photoacoustic endoscopy method using phase-shift interferometry (PSI) for photoacoustic signal detection. The method employs a three-step phase-shifting interferometry technique to detect changes in light intensity induced by the initial photoacoustic pressure. This enables identical optical structures for photoacoustic excitation and detection, simplifying the probe design and facilitating miniaturization. The proposed method eliminates the need for acoustic coupling agents, enabling non-contact detection. We employ a phase-shifting technique to mitigate the influence of random phase variations caused by environmental disturbances. The performance of the proposed method is verified by endoscopic imaging of carbon rods, leaf skeletons, and ex vivo mouse rectum tissues.
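The three-step recovery this abstract describes can be sketched generically. The 0, π/2, π step sequence below is a common phase-shifting convention and an assumption here, not necessarily the authors' exact scheme:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Recover the wrapped interferometric phase from three frames
    acquired with phase shifts of 0, pi/2, and pi (one common scheme).

    Each frame obeys I_k = A + B*cos(phi + delta_k), so
      i1 - i3        = 2*B*cos(phi)
      i1 + i3 - 2*i2 = 2*B*sin(phi)
    and the wrapped phase follows from atan2."""
    return np.arctan2(i1 + i3 - 2.0 * i2, i1 - i3)

# Synthetic check: a known phase map is recovered exactly (up to wrapping).
phi = np.linspace(-1.2, 1.2, 5)   # true phase, within (-pi, pi)
A, B = 2.0, 0.7                   # background intensity and fringe amplitude
frames = [A + B * np.cos(phi + d) for d in (0.0, np.pi / 2, np.pi)]
phi_hat = three_step_phase(*frames)
```

Because the random environmental phase enters all three frames equally, the difference-and-ratio structure cancels it, which is the mitigation the abstract refers to.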
Affiliation(s)
- Yi Wang
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Hebei Key Laboratory of Micro-Nano Precision Optical Sensing and Measurement Technology, Qinhuangdao 066004, China
- Lei Bao
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Benhong Li
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Zhenhe Ma
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Hebei Key Laboratory of Micro-Nano Precision Optical Sensing and Measurement Technology, Qinhuangdao 066004, China
- Yuqian Zhao
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Jian Liu
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Hebei Key Laboratory of Micro-Nano Precision Optical Sensing and Measurement Technology, Qinhuangdao 066004, China
- Jingmin Luan
- School of Computer and Communication Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Yao Yu
- School of Control Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
- Hebei Key Laboratory of Micro-Nano Precision Optical Sensing and Measurement Technology, Qinhuangdao 066004, China
2. Zhang J, Arroyo J, Lediju Bell MA. Multispectral photoacoustic imaging of breast cancer tissue with histopathology validation. Biomedical Optics Express 2025;16:995-1005. PMID: 40109539; PMCID: PMC11919340; DOI: 10.1364/boe.547262. Received 11/08/2024; revised 01/19/2025; accepted 01/25/2025.
Abstract
Intraoperative multispectral photoacoustic pathology assessment presents a promising approach to guide biopsy resection. In this study, we developed and validated a novel photoacoustic technique to differentiate between healthy and cancerous tissues. Our method consisted of photoacoustic contrast calculations as a function of wavelength, followed by projections of the resulting spectra from training data into a two-dimensional space using principal component analysis to create representative spectra, then calculation of the average cosine similarity between the spectrum of each pixel in test data and the representative spectra. The test healthy tissue region had a 0.967 mean correlation with the representative healthy tissue spectrum and a lower mean correlation (0.801) with the cancer tissue spectrum. The test cancer tissue region had a 0.954 mean correlation with the cancer tissue spectrum and a lower mean correlation (0.762) with the healthy tissue spectrum. Our method was further validated through qualitative comparison with high-resolution hematoxylin and eosin histopathology scans. Healthy tissue was primarily correlated with the optical absorption of blood (i.e., deoxyhemoglobin), while invasive ductal carcinoma breast cancer tissue was primarily correlated with the optical absorption of lipids. Our label-free histopathology approach utilizing multispectral photoacoustic imaging has the potential to enable real-time tumor margin determination during biopsy or surgery.
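The classification step (cosine similarity between a pixel's spectrum and class-representative spectra) can be sketched as follows. The representative spectra and the `classify_pixel` helper are illustrative stand-ins, not the paper's PCA-derived representatives or measured values:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two spectra (1 = identical shape)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_pixel(spectrum, representatives):
    """Assign the class whose representative spectrum is most similar
    (highest cosine similarity) to this pixel's multispectral PA spectrum."""
    scores = {name: cosine_similarity(spectrum, ref)
              for name, ref in representatives.items()}
    return max(scores, key=scores.get), scores

# Illustrative representative spectra over four wavelengths (assumed shapes,
# loosely mimicking blood-like vs lipid-like absorption trends):
reps = {
    "healthy": np.array([1.0, 0.8, 0.6, 0.4]),
    "cancer":  np.array([0.3, 0.5, 0.8, 1.0]),
}
label, scores = classify_pixel(np.array([0.95, 0.85, 0.55, 0.45]), reps)
```

Averaging these per-pixel similarities over a test region yields region-level scores analogous to the 0.967 vs 0.801 correlations reported in the abstract.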
Affiliation(s)
- Junhao Zhang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junior Arroyo
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Oncology, Johns Hopkins Medicine, Baltimore, MD 21287, USA
3. Zhang Y, Zou Y, Liu PX. Point Cloud Registration in Laparoscopic Liver Surgery Using Keypoint Correspondence Registration Network. IEEE Transactions on Medical Imaging 2025;44:749-760. PMID: 39255087; DOI: 10.1109/tmi.2024.3457228.
Abstract
Laparoscopic liver surgery is a newly developed minimally invasive technique and represents an inevitable trend in the future development of surgical methods. By using augmented reality (AR) technology to overlay preoperative CT models with intraoperative laparoscopic videos, surgeons can accurately locate blood vessels and tumors, significantly enhancing the safety and precision of surgeries. Point cloud registration technology is key to achieving this effect. However, there are two major challenges in registering the CT model with the point cloud surface reconstructed from intraoperative laparoscopy. First, the surface features of the organ are not prominent. Second, due to the limited field of view of the laparoscope, the reconstructed surface typically represents only a very small portion of the entire organ. To address these issues, this paper proposes the keypoint correspondence registration network (KCR-Net). This network first uses the neighborhood feature fusion module (NFFM) to aggregate and interact features from different regions and structures within a pair of point clouds to obtain comprehensive feature representations. Then, through correspondence generation, it directly generates keypoints and their corresponding weights, with keypoints located in the common structures of the point clouds to be registered and corresponding weights learned automatically by the network. This approach enables accurate point cloud registration even under conditions of extremely low overlap. Experiments conducted on the ModelNet40, 3Dircadb, and DePoLL datasets demonstrate that our method achieves excellent registration accuracy and is capable of meeting the requirements of real-world scenarios.
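Once a network like KCR-Net has produced weighted keypoint correspondences, the rigid transform is typically recovered in closed form with a weighted Kabsch/Procrustes solve. The sketch below shows that final step only; treating it as the alignment back-end is an assumption, since the network itself cannot be reproduced from the abstract:

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Least-squares rotation R and translation t mapping src -> dst
    for weighted 3-D keypoint correspondences (weighted Kabsch)."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)      # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)  # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Check on a known rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
dst = src @ R_true.T + t_true
R, t = weighted_rigid_transform(src, dst, np.ones(20))
```

The learned per-correspondence weights would down-weight keypoints outside the small overlapping region, which is what makes low-overlap registration tractable.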
4. Yang S, Hu S. Perspectives on endoscopic functional photoacoustic microscopy. Applied Physics Letters 2024;125:030502. PMID: 39022117; PMCID: PMC11251735; DOI: 10.1063/5.0201691. Received 01/31/2024; accepted 06/27/2024.
Abstract
Endoscopy, enabling high-resolution imaging of deep tissues and internal organs, plays an important role in basic research and clinical practice. Recent advances in photoacoustic microscopy (PAM), demonstrating excellent capabilities in high-resolution functional imaging, have sparked significant interest in its integration into the field of endoscopy. However, there are challenges in achieving functional PAM in the endoscopic setting. This Perspective article discusses current progress in the development of endoscopic PAM and the challenges related to functional measurements. Then, it points out potential directions to advance endoscopic PAM for functional imaging by leveraging fiber optics, microfabrication, optical engineering, and computational approaches. Finally, it highlights emerging opportunities for functional endoscopic PAM in basic and translational biomedicine.
Affiliation(s)
- Shuo Yang
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130, USA
- Song Hu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130, USA
5. Gao S, Jiang Y, Li M, Wang Y, Shen Y, Flegal MC, Nephew BC, Fischer GS, Liu Y, Fichera L, Zhang HK. Laparoscopic Photoacoustic Imaging System Based on Side-Illumination Diffusing Fibers. IEEE Trans Biomed Eng 2023;70:3187-3196. PMID: 37224375; PMCID: PMC10592404; DOI: 10.1109/tbme.2023.3279772.
Abstract
OBJECTIVE To develop a flexible miniaturized photoacoustic (PA) imaging probe for detecting anatomical structures during laparoscopic surgery. The proposed probe aimed to facilitate intraoperative detection of blood vessels and nerve bundles embedded in tissue not directly visible to the operating physician to preserve these delicate and vital structures. METHODS We modified a commercially available ultrasound laparoscopic probe by incorporating custom-fabricated side-illumination diffusing fibers that illuminate the probe's field of view. The probe geometry, including the position and orientation of the fibers and the emission angle, was determined using computational models of light propagation in the simulation and subsequently validated through experimental studies. RESULTS In wire phantom studies within an optical scattering medium, the probe achieved an imaging resolution of 0.43 ± 0.09 mm and a signal-to-noise ratio of 31.2 ± 1.84 dB. We also conducted an ex vivo study using a rat model, demonstrating the successful detection of blood vessels and nerves. CONCLUSION Our results indicate the viability of a side-illumination diffusing fiber PA imaging system for guidance during laparoscopic surgery. SIGNIFICANCE The potential clinical translation of this technology could enhance the preservation of critical vascular structures and nerves, thereby minimizing post-operative complications.
6. Tao R, Zou X, Zheng G. LAST: LAtent Space-Constrained Transformers for Automatic Surgical Phase Recognition and Tool Presence Detection. IEEE Transactions on Medical Imaging 2023;42:3256-3268. PMID: 37227905; DOI: 10.1109/tmi.2023.3279838.
Abstract
When developing context-aware systems, automatic surgical phase recognition and tool presence detection are two essential tasks. Previous attempts exist to develop methods for both tasks, but the majority of existing methods utilize a frame-level loss function (e.g., cross-entropy) that does not fully leverage the underlying semantic structure of a surgery, leading to sub-optimal results. In this paper, we propose multi-task learning-based LAtent Space-constrained Transformers, referred to as LAST, for automatic surgical phase recognition and tool presence detection. Our design features a two-branch transformer architecture with a novel and generic way to leverage video-level semantic information during network training. This is done by learning a non-linear compact representation of the underlying semantic structure of surgical videos through a transformer variational autoencoder (VAE) and by encouraging models to follow the learned statistical distributions. In other words, LAST is structure-aware and favors predictions that lie on the extracted low-dimensional data manifold. Validated on two public cholecystectomy datasets, i.e., the Cholec80 dataset and the M2cai16 dataset, our method achieves better results than other state-of-the-art methods. Specifically, on the Cholec80 dataset, our method achieves an average accuracy of 93.12 ± 4.71%, an average precision of 89.25 ± 5.49%, an average recall of 90.10 ± 5.45%, and an average Jaccard of 81.11 ± 7.62% for phase recognition, and an average mAP of 95.15 ± 3.87% for tool presence detection. Similar superior performance is also observed when LAST is applied to the M2cai16 dataset.
7. Fernandes GS, Uliana JH, Bachmann L, Carneiro AA, Lediju Bell MA, Pavan TZ. Mitigating skin tone bias in linear array in vivo photoacoustic imaging with short-lag spatial coherence beamforming. Photoacoustics 2023;33:100555. PMID: 38021286; PMCID: PMC10658615; DOI: 10.1016/j.pacs.2023.100555. Received 12/26/2022; revised 09/01/2023; accepted 09/03/2023.
Abstract
Photoacoustic (PA) imaging has the potential to deliver non-invasive diagnostic information. However, skin tone differences bias PA target visualization, as the elevated optical absorption of melanated skin decreases optical fluence within the imaging plane and increases the presence of acoustic clutter. This paper demonstrates that short-lag spatial coherence (SLSC) beamforming mitigates this bias. PA data from the forearm of 18 volunteers were acquired with 750-, 810-, and 870-nm wavelengths. Skin tones ranging from light to dark were objectively quantified using the individual typology angle (ITA°). The signal-to-noise ratio (SNR) of the radial artery (RA) and surrounding clutter were measured. Clutter was minimal (e.g., -16 dB relative to the RA) with lighter skin tones and increased to -8 dB with darker tones, which compromised RA visualization in conventional PA images. SLSC beamforming achieved a median SNR improvement of 3.8 dB, resulting in better RA visualization for all skin tones.
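SLSC replaces delay-and-sum amplitude with summed short-lag spatial coherence of the focused channel data, which is why incoherent clutter is suppressed regardless of signal amplitude. A minimal per-pixel sketch, with an illustrative function name and toy data rather than the paper's processing chain:

```python
import numpy as np

def slsc_pixel(channels, max_lag):
    """Short-lag spatial coherence value for one image pixel.

    channels: (n_elements, n_samples) array of focused (delayed) RF data
    over a short axial kernel. The value sums, over lags m = 1..max_lag,
    the average normalized correlation of element pairs m elements apart."""
    n = channels.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        corrs = []
        for i in range(n - m):
            a, b = channels[i], channels[i + m]
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:
                corrs.append((a * b).sum() / denom)
        total += np.mean(corrs)
    return total

# A coherent wavefront (e.g., the radial artery signal) scores far higher
# than incoherent clutter/noise of comparable amplitude.
rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 6 * np.pi, 64))
coherent = np.tile(sig, (32, 1)) + 0.1 * rng.normal(size=(32, 64))
noise = rng.normal(size=(32, 64))
val_sig = slsc_pixel(coherent, max_lag=8)
val_noise = slsc_pixel(noise, max_lag=8)
```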
Affiliation(s)
- Guilherme S.P. Fernandes
- Department of Physics, FFCLRP, University of Sao Paulo, Brazil
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- João H. Uliana
- Department of Physics, FFCLRP, University of Sao Paulo, Brazil
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Department of Biomedical Engineering, Johns Hopkins University, USA
- Department of Computer Science, Johns Hopkins University, USA
- Theo Z. Pavan
- Department of Physics, FFCLRP, University of Sao Paulo, Brazil
8. Gao S, Wang Y, Ma X, Zhou H, Jiang Y, Yang K, Lu L, Wang S, Nephew BC, Fichera L, Fischer GS, Zhang HK. Intraoperative laparoscopic photoacoustic image guidance system in the da Vinci surgical system. Biomedical Optics Express 2023;14:4914-4928. PMID: 37791285; PMCID: PMC10545189; DOI: 10.1364/boe.498052. Received 06/21/2023; revised 07/23/2023; accepted 07/31/2023.
Abstract
This paper describes a framework allowing intraoperative photoacoustic (PA) imaging integrated into minimally invasive surgical systems. PA is an emerging imaging modality that combines the high penetration of ultrasound (US) imaging with high optical contrast. With PA imaging, a surgical robot can provide intraoperative neurovascular guidance to the operating physician, alerting them of the presence of vital substrate anatomy invisible to the naked eye, preventing complications such as hemorrhage and paralysis. Our proposed framework is designed to work with the da Vinci surgical system: real-time PA images produced by the framework are superimposed on the endoscopic video feed with an augmented reality overlay, thus enabling intuitive three-dimensional localization of critical anatomy. To evaluate the accuracy of the proposed framework, we first conducted experimental studies in a phantom with known geometry, which revealed a volumetric reconstruction error of 1.20 ± 0.71 mm. We also conducted an ex vivo study by embedding blood-filled tubes into chicken breast, demonstrating the successful real-time PA-augmented vessel visualization onto the endoscopic view. These results suggest that the proposed framework could provide anatomical and functional feedback to surgeons and it has the potential to be incorporated into robot-assisted minimally invasive surgical procedures.
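The augmented reality overlay step ultimately amounts to projecting reconstructed 3-D PA points (expressed in the endoscope camera frame) into the video image. A minimal pinhole-model sketch with hypothetical intrinsics, not the paper's calibration or registration pipeline:

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy):
    """Project 3-D points (camera frame, z > 0) to pixel coordinates
    with a pinhole camera model -- the geometry underlying an AR overlay
    of PA reconstructions on the endoscopic video feed."""
    X, Y, Z = points_cam.T
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.stack([u, v], axis=1)

# Illustrative intrinsics and points (units: mm for 3-D, pixels for 2-D).
pts = np.array([[0.0, 0.0, 50.0],     # on the optical axis
                [5.0, -5.0, 100.0]])  # off-axis point, twice as deep
uv = project_points(pts, fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

In the full system, a hand-eye/robot-kinematics transform would first map PA voxel coordinates into this camera frame before projection.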
Affiliation(s)
- Shang Gao
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Yang Wang
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Xihan Ma
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Haoying Zhou
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Yiwei Jiang
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Kehan Yang
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Liang Lu
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Computer Science, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Shiyue Wang
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Computer Science, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Benjamin C. Nephew
- Department of Biology & Biotechnology, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Neuroscience Program, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Loris Fichera
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Gregory S. Fischer
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Mechanical & Materials Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Biomedical Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Electrical & Computer Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Haichong K. Zhang
- Department of Robotics Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Computer Science, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
- Department of Biomedical Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
9. Zhang J, Wiacek A, Feng Z, Ding K, Lediju Bell MA. Flexible array transducer for photoacoustic-guided interventions: phantom and ex vivo demonstrations. Biomedical Optics Express 2023;14:4349-4368. PMID: 37799699; PMCID: PMC10549736; DOI: 10.1364/boe.491406. Received 03/27/2023; revised 06/29/2023; accepted 07/06/2023.
Abstract
Photoacoustic imaging has demonstrated recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, while a flexible array is able to deform and provide complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. The target depth agreement with ground truth ranged 98.35-99.69%. The mean lateral and axial target sizes when imaging 600 μm-core-diameter optical fibers inserted within the phantoms ranged 0.98-2.14 mm and 1.61-2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80±0.48 mm and 2.17±0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged 6.92-24.42 dB, 46.50-67.51 dB, and 0.76-1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.
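Image reconstruction for a known (possibly flexed) sensor geometry reduces to one-way delay-and-sum with per-element times of flight. The sketch below is a hedged illustration with idealized point-source data and invented parameters, not the paper's reconstruction equations:

```python
import numpy as np

def das_reconstruct(channel_data, elem_xy, pixels_xy, fs, c):
    """One-way delay-and-sum PA reconstruction for an arbitrary
    (e.g., flexed) array geometry.

    channel_data: (n_elem, n_samples); elem_xy, pixels_xy: (n, 2) in m;
    fs: sampling rate in Hz; c: sound speed in m/s."""
    n_elem, n_samp = channel_data.shape
    image = np.zeros(len(pixels_xy))
    for e in range(n_elem):
        d = np.linalg.norm(pixels_xy - elem_xy[e], axis=1)  # pixel-element
        idx = np.round(d / c * fs).astype(int)              # one-way delay
        valid = idx < n_samp
        image[valid] += channel_data[e, idx[valid]]
    return image

# Idealized point source: a unit pulse reaches each element after the
# one-way time of flight, and DAS focuses it back at the source pixel.
c, fs, n_samp = 1540.0, 40e6, 2048
elems = np.stack([np.linspace(-5e-3, 5e-3, 16), np.zeros(16)], axis=1)
src = np.array([1e-3, 20e-3])
data = np.zeros((16, n_samp))
for e in range(16):
    t = np.linalg.norm(src - elems[e]) / c
    data[e, int(round(t * fs))] = 1.0
pix = np.array([[1e-3, 20e-3], [-3e-3, 15e-3]])  # on-source vs off-source
img = das_reconstruct(data, elems, pix, fs, c)
```

For a flexible array, `elem_xy` would come from the measured or estimated bent-array shape; the algorithm itself is unchanged.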
Affiliation(s)
- Jiaxin Zhang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alycen Wiacek
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ziwei Feng
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD 21287, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
10. Zhang Y, Wang L. Video-rate full-ring ultrasound and photoacoustic computed tomography with real-time sound speed optimization. Biomedical Optics Express 2022;13:4398-4413. PMID: 36032563; PMCID: PMC9408242; DOI: 10.1364/boe.464360. Received 05/18/2022; revised 07/13/2022; accepted 07/17/2022.
Abstract
Full-ring dual-modal ultrasound and photoacoustic imaging provides complementary contrasts, high spatial resolution, and a full view angle, making it desirable in pre-clinical and clinical applications. However, two long-standing challenges exist in achieving high-quality video-rate dual-modal imaging. One is the increased data-processing burden from the dense acquisition. The other is the object-dependent speed-of-sound variation, which may cause blurring, splitting artifacts, and low imaging contrast. Here, we develop video-rate full-ring ultrasound and photoacoustic computed tomography (VF-USPACT) with real-time optimization of the speed of sound. We improve the imaging speed by selective and parallel image reconstruction. We determine the optimal sound speed via co-registered ultrasound imaging. Equipped with a 256-channel ultrasound array, the dual-modal system can optimize the sound speed and reconstruct dual-modal images at 10 Hz in real time. The optimized sound speed can effectively enhance the imaging quality under various sample sizes, types, or physiological states. In animal and human imaging, the system shows co-registered dual contrasts, high spatial resolution (140 µm), single-pulse photoacoustic imaging (< 50 µs), deep penetration (> 20 mm), full view, and adaptive sound speed correction. We believe VF-USPACT can advance many real-time biomedical imaging applications, such as vascular disease diagnosis, cancer screening, or neuroimaging.
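Sound-speed optimization of this kind selects, among candidate reconstructions, the speed that maximizes an image-quality metric. The abstract does not name the metric, so the Brenner gradient below is an assumed stand-in, and the candidate images are toy data rather than ultrasound reconstructions:

```python
import numpy as np

def brenner_sharpness(img):
    """Brenner gradient: a simple autofocus metric; better-focused
    reconstructions score higher."""
    d = img[:, 2:] - img[:, :-2]
    return float((d * d).sum())

def pick_sound_speed(candidate_images, speeds):
    """Return the candidate sound speed whose reconstruction is sharpest."""
    scores = [brenner_sharpness(im) for im in candidate_images]
    return speeds[int(np.argmax(scores))]

# Toy stand-in for reconstructions at three candidate speeds: a point
# target defocuses into a broader, dimmer blob at the wrong sound speed
# (the 1/sigma**2 factor conserves total intensity under blurring).
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
def spot(sigma):
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / sigma**2
images = [spot(s) for s in (0.30, 0.10, 0.20)]   # middle image is sharpest
best = pick_sound_speed(images, speeds=[1480.0, 1540.0, 1600.0])
```

In the actual system this search would run on the co-registered ultrasound channel data at frame rate, amortized by the selective/parallel reconstruction the abstract describes.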
Affiliation(s)
- Yachao Zhang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999077, China
- Lidai Wang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999077, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen 518057, China
11. Wang Y, Yuan C, Jiang J, Peng K, Wang B. Photoacoustic/Ultrasound Endoscopic Imaging Reconstruction Algorithm Based on the Approximate Gaussian Acoustic Field. Biosensors 2022;12:463. PMID: 35884265; PMCID: PMC9312499; DOI: 10.3390/bios12070463. Received 05/25/2022; revised 06/18/2022; accepted 06/22/2022.
Abstract
This paper aims to propose a new photoacoustic/ultrasound endoscopic imaging reconstruction algorithm based on the approximate Gaussian acoustic field, which significantly improves the resolution and signal-to-noise ratio (SNR) of the out-of-focus region. We demonstrated the method by numerical calculations and investigated the applicability of the algorithm in a chicken breast phantom. The validation was finally performed by the rabbit rectal endoscopy experiment. Simulation results show that the lateral resolution of the target point in the out-of-focus region can be well optimized with this new algorithm. Phantom experimental results show that the lateral resolution of the indocyanine green (ICG) tube in the photoacoustic image is reduced from 3.975 mm to 1.857 mm by using our new algorithm, which is a 52.3% improvement. Ultrasound images also show a significant improvement in lateral resolution. The results of the rabbit rectal endoscopy experiment prove that the algorithm we proposed is capable of providing higher-quality photoacoustic/ultrasound images. In conclusion, the algorithm enables fast acoustic-resolution photoacoustic/ultrasonic dynamic focusing and effectively improves the imaging quality of the system, which provides significant guidance for the design of acoustic-resolution photoacoustic/ultrasound endoscopy systems.
Affiliation(s)
- Bo Wang
- Correspondence: (K.P.); (B.W.)
12. Gubbi MR, Gonzalez EA, Bell MAL. Theoretical Framework to Predict Generalized Contrast-to-Noise Ratios of Photoacoustic Images With Applications to Computer Vision. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022;69:2098-2114. PMID: 35446763; DOI: 10.1109/tuffc.2022.3169082.
Abstract
The successful integration of computer vision, robotic actuation, and photoacoustic imaging to find and follow targets of interest during surgical and interventional procedures requires accurate photoacoustic target detectability. This detectability has traditionally been assessed with image quality metrics, such as contrast, contrast-to-noise ratio, and signal-to-noise ratio (SNR). However, predicting target tracking performance expectations when using these traditional metrics is difficult due to unbounded values and sensitivity to image manipulation techniques like thresholding. The generalized contrast-to-noise ratio (gCNR) is a recently introduced alternative target detectability metric, with previous work dedicated to empirical demonstrations of applicability to photoacoustic images. In this article, we present theoretical approaches to model and predict the gCNR of photoacoustic images with an associated theoretical framework to analyze relationships between imaging system parameters and computer vision task performance. Our theoretical gCNR predictions are validated with histogram-based gCNR measurements from simulated, experimental phantom, ex vivo, and in vivo datasets. The mean absolute errors between predicted and measured gCNR values ranged from 3.2 × 10⁻³ to 2.3 × 10⁻² for each dataset, with channel SNRs ranging from -40 to 40 dB and laser energies ranging from 0.07 [Formula: see text] to 68 mJ. Relationships among gCNR, laser energy, target and background image parameters, target segmentation, and threshold levels were also investigated. Results provide a promising foundation to enable predictions of photoacoustic gCNR and visual servoing segmentation accuracy. The efficiency of precursory surgical and interventional tasks (e.g., energy selection for photoacoustic-guided surgeries) may also be improved with the proposed framework.
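The histogram-based gCNR measurement referenced here is one minus the overlap of the target and background amplitude histograms (probability estimates); it is bounded in [0, 1], which is what makes it robust to thresholding. A minimal sketch with synthetic speckle-like data:

```python
import numpy as np

def gcnr(target, background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of
    the target and background amplitude histograms.
    0 = indistinguishable distributions, 1 = perfectly separable."""
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    ht, _ = np.histogram(target, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(background, bins=bins, range=(lo, hi))
    pt = ht / ht.sum()
    pb = hb / hb.sum()
    return 1.0 - float(np.minimum(pt, pb).sum())

rng = np.random.default_rng(2)
bg = rng.rayleigh(1.0, 20000)                 # speckle-like background
overlapping = rng.rayleigh(1.05, 20000)       # nearly identical distribution
separated = rng.rayleigh(1.0, 20000) + 10.0   # fully separated amplitudes
g_low = gcnr(overlapping, bg)
g_high = gcnr(separated, bg)
```

With finite samples the histogram estimate carries a small positive bias for overlapping distributions, one of the effects a theoretical prediction framework like this paper's must account for.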