1. Liu S, Wang T, Zheng X, Zhu Y, Tian C. On the imaging depth limit of photoacoustic tomography in the visible and first near-infrared windows. Opt Express 2024;32:5460-5480. PMID: 38439272. DOI: 10.1364/oe.513538. Received 11/16/2023; accepted 01/21/2024.
Abstract
It is well known that photoacoustic tomography (PAT) can circumvent the photon scattering problem in optical imaging and achieve high-contrast, high-resolution imaging at centimeter depths. However, after two decades of development, the long-standing question of the imaging depth limit of PAT in biological tissues remains unanswered. Here we propose a numerical framework for evaluating the imaging depth limit of PAT in the visible and first near-infrared windows. The framework simulates the physical process of PAT and consists of seven modules: tissue modelling, photon transport, photon-to-ultrasound conversion, sound field propagation, signal reception, image reconstruction, and imaging depth evaluation. It can simulate imaging depth limits in general tissues, such as the human breast, human abdomen-liver tissue, and the rodent whole body, and provides accurate evaluation results. The study elucidates the fundamental imaging depth limit of PAT in biological tissues and can provide useful guidance for practical experiments.
2. Brollo PP, Bresadola V. Enhancing visualization and guidance in general surgery: a comprehensive and narrative review of the current cutting-edge technologies and future perspectives. J Gastrointest Surg 2024;28:179-185. PMID: 38445941. DOI: 10.1016/j.gassur.2023.12.009. Received 11/09/2023; revised 11/25/2023; accepted 12/08/2023.
Abstract
BACKGROUND: In the last decade, there has been great effort in developing new technologies to enhance surgical visualization and guidance. This comprehensive narrative review aimed to provide a wide-ranging overview of the current state of the art on this topic and its near-future perspectives linked to the development of artificial intelligence (AI), focusing on the most recent and relevant literature.
METHODS: A comprehensive narrative review of the literature was performed by searching specific terms in the PubMed/MEDLINE, Scopus, and Embase databases to assess the current state of the art on this topic.
RESULTS: Fluorescence-guided surgery, contrast-enhanced ultrasound (CEUS), ultra-high frequency ultrasound (UHFUS), photoacoustic imaging (PAI), and augmented reality (AR) are boosting the field of image-guided techniques, while the rapid development of AI in surgery promises more automated decision-making and surgical movements in the operating room.
CONCLUSION: Fluorescence-guided surgery, CEUS, UHFUS, PAI, and AR are becoming crucial for giving surgeons a new level of information during an intervention, with the right timing and sequence, and represent the future of surgery. While many more controlled studies are needed to validate these technologies, the next generation of surgeons must become familiar with the basics of AI to better incorporate new tools into daily surgical practice.
Affiliation(s)
- Pier Paolo Brollo
- Department of Medicine, General Surgery Department and Simulation Center, Academic Hospital of Udine, University of Udine, Udine, Italy; General Surgical Oncology Department, Istituto di Ricovero e Cura a Carattere Scientifico Centro di Riferimento Oncologico di Aviano (Istituto Nazionale Tumori), Aviano, Italy.
- Vittorio Bresadola
- Department of Medicine, General Surgery Department and Simulation Center, Academic Hospital of Udine, University of Udine, Udine, Italy
3. John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023;32:100533. PMID: 37636547. PMCID: PMC10448345. DOI: 10.1016/j.pacs.2023.100533. Received 03/10/2022; revised 06/30/2023; accepted 07/14/2023.
Abstract
In the past decade, photoacoustic (PA) imaging has attracted considerable attention as an emergent diagnostic technology, owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. This steady growth of PA imaging can mainly be attributed to its salient features: it is non-ionizing, cost-effective, and easily deployable, and it offers sufficient axial, lateral, and temporal resolution for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, a combination that confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or exogenous sources, such as nano-sized tunable optical absorbers or dyes, that may boost imaging contrast beyond that provided by endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics/therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
4. Zhang J, Wiacek A, Feng Z, Ding K, Lediju Bell MA. Flexible array transducer for photoacoustic-guided interventions: phantom and ex vivo demonstrations. Biomed Opt Express 2023;14:4349-4368. PMID: 37799699. PMCID: PMC10549736. DOI: 10.1364/boe.491406. Received 03/27/2023; revised 06/29/2023; accepted 07/06/2023.
Abstract
Photoacoustic imaging has shown recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, whereas a flexible array can deform and maintain complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical measurements of the elevation field-of-view, target position, and image quality. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. Target depth agreement with ground truth ranged from 98.35% to 99.69%. The mean lateral and axial target sizes when imaging 600 μm-core-diameter optical fibers inserted within the phantoms ranged from 0.98 to 2.14 mm and from 1.61 to 2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80 ± 0.48 mm and 2.17 ± 0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged from 6.92 to 24.42 dB, 46.50 to 67.51 dB, and 0.76 to 1, respectively, within the elevation field-of-view. These results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.
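The decibel-scale contrast and signal-to-noise ratios reported in this abstract are standard image-quality metrics computed from target and background regions of interest. A minimal sketch of one common definition (exact definitions vary by paper; the ROI arrays below are hypothetical, not the authors' data):

```python
import numpy as np

def contrast_db(target, background):
    """Contrast in dB: 20*log10(mean target amplitude / mean background amplitude)."""
    return 20 * np.log10(np.mean(target) / np.mean(background))

def snr_db(target, background):
    """SNR in dB: mean target amplitude over background standard deviation."""
    return 20 * np.log10(np.mean(target) / np.std(background))

# Hypothetical envelope-detected amplitude samples from target and background ROIs
target = np.full(100, 10.0)
background = np.concatenate([np.full(50, 0.5), np.full(50, 1.5)])
print(contrast_db(target, background))  # mean background is 1.0, so 20.0 dB
print(snr_db(target, background))
```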
Affiliation(s)
- Jiaxin Zhang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alycen Wiacek
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ziwei Feng
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD 21287, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
5. Choi W, Park B, Choi S, Oh D, Kim J, Kim C. Recent Advances in Contrast-Enhanced Photoacoustic Imaging: Overcoming the Physical and Practical Challenges. Chem Rev 2023. PMID: 36642892. DOI: 10.1021/acs.chemrev.2c00627.
Abstract
For decades now, photoacoustic imaging (PAI) has been investigated to realize its potential as a niche biomedical imaging modality. Despite its highly desirable optical contrast and ultrasonic spatiotemporal resolution, PAI is challenged by physical limitations such as a low signal-to-noise ratio (SNR), diminished image contrast due to strong optical attenuation, and a lower bound on spatial resolution in deep tissue. In addition, contrast-enhanced PAI has faced practical limitations such as insufficient cell-specific targeting due to low delivery efficiency and difficulties in developing clinically translatable agents. Identifying these limitations is essential to the continuing expansion of the field, and substantial advances in developing contrast-enhancing agents, complemented by high-performance image acquisition systems, have synergistically addressed the challenges of conventional PAI. This review covers the past four years of research on pushing the physical and practical limits of PAI in terms of SNR/contrast, spatial resolution, targeted delivery, and clinical application. Promising strategies for dealing with each challenge are reviewed in detail, and future research directions for next-generation contrast-enhanced PAI are discussed.
Affiliation(s)
- Wonseok Choi, Byullee Park, Seongwook Choi, Donghyeon Oh, Jongbeom Kim, and Chulhong Kim
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology, 77 Cheongam-Ro, Nam-Gu, Pohang 37673, Republic of Korea
6. Gubbi MR, Gonzalez EA, Bell MAL. Theoretical Framework to Predict Generalized Contrast-to-Noise Ratios of Photoacoustic Images With Applications to Computer Vision. IEEE Trans Ultrason Ferroelectr Freq Control 2022;69:2098-2114. PMID: 35446763. DOI: 10.1109/tuffc.2022.3169082.
Abstract
The successful integration of computer vision, robotic actuation, and photoacoustic imaging to find and follow targets of interest during surgical and interventional procedures requires accurate photoacoustic target detectability. This detectability has traditionally been assessed with image quality metrics such as contrast, contrast-to-noise ratio, and signal-to-noise ratio (SNR). However, predicting target tracking performance with these traditional metrics is difficult due to their unbounded values and sensitivity to image manipulation techniques like thresholding. The generalized contrast-to-noise ratio (gCNR) is a recently introduced alternative target detectability metric, with previous work dedicated to empirical demonstrations of its applicability to photoacoustic images. In this article, we present theoretical approaches to model and predict the gCNR of photoacoustic images, with an associated theoretical framework to analyze relationships between imaging system parameters and computer vision task performance. Our theoretical gCNR predictions are validated with histogram-based gCNR measurements from simulated, experimental phantom, ex vivo, and in vivo datasets. The mean absolute errors between predicted and measured gCNR values ranged from 3.2 × 10^-3 to 2.3 × 10^-2 for each dataset, with channel SNRs ranging from -40 to 40 dB and laser energies ranging from 0.07 [Formula: see text] to 68 mJ. Relationships among gCNR, laser energy, target and background image parameters, target segmentation, and threshold levels were also investigated. The results provide a promising foundation for predicting photoacoustic gCNR and visual servoing segmentation accuracy. The efficiency of precursory surgical and interventional tasks (e.g., energy selection for photoacoustic-guided surgeries) may also be improved with the proposed framework.
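The histogram-based gCNR measurement mentioned in this abstract is well defined independently of the authors' framework: it is one minus the overlap of the target and background amplitude histograms, bounded between 0 and 1. A minimal sketch (the amplitude samples below are hypothetical):

```python
import numpy as np

def gcnr(target, background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the target
    and background amplitude histograms, computed over shared bin edges."""
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    h_t, _ = np.histogram(target, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(background, bins=bins, range=(lo, hi))
    h_t = h_t / h_t.sum()  # normalize counts to probabilities
    h_b = h_b / h_b.sum()
    return 1.0 - np.minimum(h_t, h_b).sum()

# Hypothetical amplitude samples: well-separated distributions give gCNR near 1
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 10000)
target = rng.normal(10.0, 1.0, 10000)
print(gcnr(target, background))
```

Identical target and background distributions give gCNR near 0 (no detectability), while fully separated distributions give gCNR of 1.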
7. Kim MS, Cha JH, Lee S, Han L, Park W, Ahn JS, Park SC. Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography. Front Neurorobot 2022;15:735177. PMID: 35095454. PMCID: PMC8790180. DOI: 10.3389/fnbot.2021.735177. Received 07/02/2021; accepted 11/23/2021.
Abstract
There have been few anatomical structure segmentation studies using deep learning; the numbers of training and ground-truth images used have been small, and the reported accuracies have been low or inconsistent. Surgical video anatomy analysis faces various obstacles, including a fast-changing view, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain a large and accurate dataset of anatomical structures, including arteries, from operative video. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset, mainly for cerebral arteries and partly for cerebral blood vessels, including veins. Four different neural network models were trained on the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used; after augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved with the DeepLabv3+ model trained on the automatically generated dataset. Strict validation on different patient groups was conducted, and arteries were discerned from veins using the ICG videoangiography phase. We achieved fair accuracy, which demonstrated the appropriateness of the methodology. This study proved the feasibility of cerebral artery segmentation in the operating-field view using deep learning, and the effectiveness of automatic blood-vessel ground-truth generation using ICG fluorescence videoangiography. With this method, computer vision can discern blood vessels, and distinguish arteries from veins, in a neurosurgical microscope field of view, a capability essential for vessel anatomy-based navigation in the neurosurgical field. In addition, surgical assistance, safety, and autonomous-surgery neurorobotics that detect or manipulate cerebral vessels will require computer vision to identify blood vessels and arteries.
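The Dice score used to evaluate segmentation above is a standard overlap metric between a predicted mask and the ground truth. A minimal sketch with hypothetical binary masks:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Hypothetical 1-D masks standing in for segmentation maps
pred = np.array([1, 1, 1, 0, 0, 0])
truth = np.array([1, 1, 0, 0, 1, 0])
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
```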
Affiliation(s)
- Min-seok Kim
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Joon Hyuk Cha
- Department of Internal Medicine, Inha University Hospital, Incheon, South Korea
- Seonhwa Lee
- Department of Bio-convergence Engineering, Korea University, Seoul, South Korea
- Lihong Han
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Computer Science and Engineering, Soongsil University, Seoul, South Korea
- Wonhyoung Park
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Sung Ahn
- Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seong-Cheol Park
- Clinical Research Team, Deepnoid, Seoul, South Korea
- Department of Neurosurgery, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung, South Korea
- Department of Neurosurgery, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul, South Korea
- Department of Neurosurgery, Hallym Hospital, Incheon, South Korea
- Correspondence: Seong-Cheol Park
8. Privitera L, Paraboschi I, Dixit D, Arthurs OJ, Giuliani S. Image-guided surgery and novel intraoperative devices for enhanced visualisation in general and paediatric surgery: a review. Innov Surg Sci 2021;6:161-172. PMID: 35937852. PMCID: PMC9294338. DOI: 10.1515/iss-2021-0028. Received 06/02/2021; accepted 12/17/2021.
Abstract
Fluorescence-guided surgery, augmented reality, and intraoperative imaging devices are rapidly pervading the field of surgical interventions, equipping the surgeon with powerful tools capable of enhancing the visualisation of normal and pathological anatomical structures. In the adult population, there is a wide range of possibilities for using these novel technologies and devices to guide surgical procedures and minimally invasive surgeries. Their applications have also been growing in the field of paediatric surgery, where detailed visualisation of small anatomical structures could reduce procedure time, minimise surgical complications, and ultimately improve the outcome of surgery. This review aims to illustrate the mechanisms underlying these innovations and their main applications in the clinical setting.
Affiliation(s)
- Laura Privitera
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK
- Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Irene Paraboschi
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK
- Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Divyansh Dixit
- Faculty of Medicine, University of Southampton, Southampton, UK
- Owen J Arthurs
- Department of Clinical Radiology, NHS Foundation Trust, Great Ormond Street Hospital for Children, London, UK
- NIHR GOSH Biomedical Research Centre, NHS Foundation Trust, UCL Great Ormond Street Institute of Child Health, London, UK
- Stefano Giuliani
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK
- Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Department of Specialist Neonatal and Paediatric Surgery, NHS Foundation Trust, Great Ormond Street Hospital for Children, London, UK
9. Wiacek A, Wang KC, Wu H, Bell MAL. Photoacoustic-Guided Laparoscopic and Open Hysterectomy Procedures Demonstrated With Human Cadavers. IEEE Trans Med Imaging 2021;40:3279-3292. PMID: 34018931. DOI: 10.1109/tmi.2021.3082555.
Abstract
Hysterectomy (i.e., surgical removal of the uterus) requires severing the main blood supply to the uterus (i.e., the uterine arteries) while preserving the nearby, often overlapping, ureters. In this paper, we investigate dual-wavelength and audiovisual photoacoustic imaging-based approaches to visualize and differentiate the ureter from the uterine artery and to provide the real-time information needed to avoid accidental ureteral injuries during hysterectomies. Dual-wavelength 690/750 nm photoacoustic imaging was implemented during laparoscopic and open hysterectomies performed on human cadavers, with a custom display approach designed to visualize the ureter and uterine artery. The proximity of the surgical tool to the ureter was calculated and conveyed by tracking the surgical tool in photoacoustic images and mapping distance to auditory signals. The dual-wavelength display showed up to 10 dB contrast differences between the ureter and uterine artery at three separation distances (i.e., 4 mm, 5 mm, and 6 mm) during the open hysterectomy. During the laparoscopic hysterectomy, the ureter and uterine artery were visualized in the dual-wavelength image with up to 24 dB contrast differences. Distances between the ureter and the surgical tool ranged from 2.47 to 7.31 mm. These results are promising for the introduction of dual-wavelength photoacoustic imaging to differentiate the ureter from the uterine artery, estimate the position of the ureter relative to a surgical tool tip, map photoacoustic-based distance measurements to auditory signals, and ultimately guide hysterectomy procedures to reduce the risk of accidental ureteral injuries.
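The audiovisual feedback described above maps a tool-to-ureter distance to an auditory signal. A generic sketch of one such distance-to-pitch mapping, where a closer tool produces a higher pitch (all numeric ranges below are hypothetical illustrations, not the authors' parameters):

```python
import numpy as np

def distance_to_pitch(distance_mm, d_min=2.0, d_max=8.0, f_low=220.0, f_high=880.0):
    """Map a tool-to-target distance (mm) to an audio pitch (Hz).
    Closer distances map linearly to higher pitches; distances outside
    [d_min, d_max] are clipped. All defaults are hypothetical."""
    d = np.clip(distance_mm, d_min, d_max)
    frac = (d_max - d) / (d_max - d_min)  # 1.0 at closest, 0.0 at farthest
    return f_low + frac * (f_high - f_low)

print(distance_to_pitch(2.47))  # near the closest reported distance: high pitch
print(distance_to_pitch(7.31))  # near the farthest reported distance: lower pitch
```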
10. Prakash J, Kalva SK, Pramanik M, Yalavarthy PK. Binary photoacoustic tomography for improved vasculature imaging. J Biomed Opt 2021;26:086004. PMID: 34405599. PMCID: PMC8370884. DOI: 10.1117/1.jbo.26.8.086004. Received 04/19/2021; accepted 06/29/2021.
Abstract
SIGNIFICANCE: The proposed binary tomography approach was able to recover vasculature structures accurately, which could enable its use in scenarios such as therapy monitoring and hemorrhage detection in different organs.
AIM: Photoacoustic tomography (PAT) involves reconstruction of vascular networks, with direct implications for cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most are two-step in nature (image reconstruction followed by image segmentation). We propose a binary PAT approach wherein direct reconstruction of the vascular network from the acquired photoacoustic sinogram data is plausible.
APPROACH: The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel takes a binary outcome (i.e., either background or absorber). The approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes.
RESULTS: Numerical simulations, a physical phantom experiment, and in-vivo rat brain vasculature data were used to compare the performance of the different algorithms. The results indicate that the binary tomography approach improved vasculature recovery on in-silico data by 10% in terms of the Dice similarity coefficient relative to the other reconstruction methods.
CONCLUSION: The proposed algorithm demonstrates superior vasculature recovery with limited data, both visually and in terms of quantitative image metrics.
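Among the baseline schemes compared above, Tikhonov regularization has a particularly compact closed form. A generic sketch on a toy linear system (the matrix `A` and data here are hypothetical stand-ins, not a photoacoustic forward model):

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam*||x||^2 via the normal equations:
    x = (A^T A + lam*I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy overdetermined system with additive noise
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, -1.0])
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = tikhonov_reconstruct(A, b, lam=0.1)
print(np.round(x_hat, 2))
```

The regularization weight `lam` trades fidelity to the data against solution norm; binary tomography instead constrains each pixel to one of two values, which is why it needs a dual-optimization formulation rather than a single linear solve.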
Affiliation(s)
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bangalore, Karnataka, India
- Address all correspondence to Jaya Prakash
- Sandeep Kumar Kalva
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Phaneendra K. Yalavarthy
- Indian Institute of Science, Department of Computational and Data Sciences, Bangalore, Karnataka, India
11. Annular Fiber Probe for Interstitial Illumination in Photoacoustic Guidance of Radiofrequency Ablation. Sensors 2021;21:4458. PMID: 34209996. PMCID: PMC8271966. DOI: 10.3390/s21134458. Received 05/22/2021; revised 06/19/2021; accepted 06/24/2021.
Abstract
Unresectable liver tumors are commonly treated with percutaneous radiofrequency ablation (RFA). However, this technique is associated with high recurrence rates due to incomplete tumor ablation. Accurate image guidance of the RFA procedure contributes to successful ablation, but currently used imaging modalities have shortcomings in device guidance and treatment monitoring. We explore the potential of using photoacoustic (PA) imaging combined with conventional ultrasound (US) imaging for real-time RFA guidance. To overcome the low penetration depth of light in tissue, we have developed an annular fiber probe (AFP), which can be inserted into tissue to enable interstitial illumination. The AFP is a cannula with 72 optical fibers that allows an RFA device to slide through its lumen, thereby enabling PA imaging for RFA device guidance and ablation monitoring. We show that the PA signal from interstitial illumination is not affected by absorber-to-surface depth, in contrast to extracorporeal illumination. We also demonstrate successful imaging of the RFA electrodes, a blood vessel mimic, a tumor-mimicking phantom, and ablated liver tissue boundaries in ex vivo chicken and bovine liver samples. PA-assisted needle guidance revealed clear needle tip visualization, a notable improvement over current US needle guidance. Our probe shows potential for RFA device guidance and ablation detection, which may aid real-time monitoring.
12. Regensburger AP, Brown E, Krönke G, Waldner MJ, Knieling F. Optoacoustic Imaging in Inflammation. Biomedicines 2021;9:483. PMID: 33924983. PMCID: PMC8145174. DOI: 10.3390/biomedicines9050483. Received 03/19/2021; revised 04/20/2021; accepted 04/21/2021.
Abstract
Optoacoustic or photoacoustic imaging (OAI/PAI) is a technology which enables non-invasive visualization of laser-illuminated tissue by the detection of acoustic signals. The combination of "light in" and "sound out" offers unprecedented scalability with a high penetration depth and resolution. The wide range of biomedical applications makes this technology a versatile tool for preclinical and clinical research. Particularly when imaging inflammation, the technology offers advantages over current clinical methods to diagnose, stage, and monitor physiological and pathophysiological processes. This review discusses the clinical perspective of using OAI in the context of imaging inflammation as well as in current and emerging translational applications.
Affiliation(s)
- Adrian P. Regensburger
- Department of Pediatrics and Adolescent Medicine, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Loschgestr. 15, D-91054 Erlangen, Germany
- Emma Brown
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, UK
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, UK
- Gerhard Krönke
- Department of Medicine 3, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Ulmenweg 18, D-91054 Erlangen, Germany
- Maximilian J. Waldner
- Department of Medicine 1, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Ulmenweg 18, D-91054 Erlangen, Germany
- Ferdinand Knieling
- Department of Pediatrics and Adolescent Medicine, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Loschgestr. 15, D-91054 Erlangen, Germany
13. Wiacek A, Lediju Bell MA. Photoacoustic-guided surgery from head to toe [Invited]. Biomed Opt Express 2021;12:2079-2117. PMID: 33996218. PMCID: PMC8086464. DOI: 10.1364/boe.417984. Received 12/18/2020; revised 02/17/2021; accepted 02/18/2021.
Abstract
Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.
Affiliation(s)
- Alycen Wiacek: Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell: Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA; Department of Biomedical Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
|
14
|
Huang J, Wiacek A, Kempski KM, Palmer T, Izzi J, Beck S, Lediju Bell MA. Empirical assessment of laser safety for photoacoustic-guided liver surgeries. BIOMEDICAL OPTICS EXPRESS 2021; 12:1205-1216. [PMID: 33796347 PMCID: PMC7984790 DOI: 10.1364/boe.415054] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 01/19/2021] [Accepted: 01/19/2021] [Indexed: 05/03/2023]
Abstract
Photoacoustic imaging is a promising technique to provide guidance during multiple surgeries and procedures. One challenge with this technique is that major blood vessels in the liver are difficult to differentiate from surrounding tissue within current safety limits, which only exist for human skin and eyes. In this paper, we investigate the safety of raising this limit for liver tissue excited with a 750 nm laser wavelength and approximately 30 mJ laser energy (corresponding to approximately 150 mJ/cm2 fluence). Laparotomies were performed on six swine to empirically investigate potential laser-related liver damage. Laser energy was applied for temporal durations of 1 minute, 10 minutes, and 20 minutes. Lasered liver lobes were excised either immediately after laser application (3 swine) or six weeks after surgery (3 swine). Cell damage was assessed using liver damage blood biomarkers and histopathology analyses of 41 tissue samples total. The biomarkers were generally normal over a 6 week post-surgical in vivo study period. Histopathology revealed no cell death, although additional pathology was present (i.e., hemorrhage, inflammation, fibrosis) due to handling, sample resection, and fibrous adhesions as a result of the laparotomy. These results support a new protocol for studying laser-related liver damage, indicating the potential to raise the safety limit for liver photoacoustic imaging to approximately 150 mJ/cm2 with a laser wavelength of 750 nm and for imaging durations up to 10 minutes without causing cell death. This investigation and protocol may be applied to other tissues and extended to additional wavelengths and energies, which is overall promising for introducing new tissue-specific laser safety limits for photoacoustic-guided surgery.
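The arithmetic behind this abstract's numbers can be checked directly: 30 mJ per pulse at roughly 150 mJ/cm2 fluence implies an illuminated area of about 0.2 cm2, roughly six times the ANSI Z136.1 skin maximum permissible exposure (MPE) at 750 nm for nanosecond pulses. The sketch below illustrates this comparison; the helper names are hypothetical, and the MPE formula reflects my reading of the 700-1050 nm, short-pulse band of the standard, not a value taken from the paper.

```python
# Sketch (not from the paper): relating pulse energy, beam area, and fluence,
# and comparing against the ANSI Z136.1-style skin MPE for nanosecond pulses.
def fluence_mJ_per_cm2(pulse_energy_mJ, beam_area_cm2):
    """Fluence is simply energy per unit illuminated area."""
    return pulse_energy_mJ / beam_area_cm2

def ansi_skin_mpe_mJ_per_cm2(wavelength_nm):
    """Skin MPE for a 1-100 ns pulse in the 700-1050 nm band: 20 * C_A mJ/cm^2,
    with C_A = 10 ** (0.002 * (wavelength_nm - 700))."""
    assert 700 <= wavelength_nm <= 1050
    c_a = 10 ** (0.002 * (wavelength_nm - 700))
    return 20.0 * c_a

# 30 mJ pulses at ~150 mJ/cm^2 imply a ~0.2 cm^2 illuminated area:
area = 30.0 / 150.0                    # 0.2 cm^2
mpe = ansi_skin_mpe_mJ_per_cm2(750)    # ~25.2 mJ/cm^2
ratio = fluence_mJ_per_cm2(30.0, area) / mpe   # ~6x the skin limit
print(f"area = {area:.2f} cm^2, skin MPE = {mpe:.1f} mJ/cm^2, ratio = {ratio:.1f}x")
```

This makes concrete why the study matters: the liver exposure tested is several times the existing skin limit, which is why tissue-specific empirical safety data were needed.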
Affiliation(s)
- Jiaqi Huang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alycen Wiacek: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Kelley M. Kempski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Theron Palmer: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Jessica Izzi: Department of Molecular and Comparative Pathobiology, Johns Hopkins University, Baltimore, MD 21218, USA
- Sarah Beck: Department of Molecular and Comparative Pathobiology, Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
|
15
|
Graham MT, Bell MAL. Photoacoustic Spatial Coherence Theory and Applications to Coherence-Based Image Contrast and Resolution. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2069-2084. [PMID: 32746173 PMCID: PMC8221408 DOI: 10.1109/tuffc.2020.2999343] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
The photoacoustic effect relies on optical absorption, which causes thermal expansion and generates acoustic signals. Coherence-based photoacoustic signal processing is often preferred over more traditional signal processing methods due to improved signal-to-noise ratios, imaging depth, and resolution in applications such as cell tracking, blood flow estimation, and imaging. However, these applications lack a theoretical spatial coherence model to support their implementation. In this article, the photoacoustic spatial coherence theory is derived to generate theoretical spatial coherence functions. These theoretical spatial coherence functions are compared with k-Wave simulated data and experimental data from point and circular targets (0.1-12 mm in diameter) with generally good agreement, particularly in the shorter spatial lag region. The derived theory was used to hypothesize and test previously unexplored principles for optimizing photoacoustic short-lag spatial coherence (SLSC) images, including the influence of the incident light profile on photoacoustic spatial coherence functions and associated SLSC image contrast and resolution. Results also confirm previous trends from experimental observations, including changes in SLSC image resolution and contrast as a function of the first M lags summed to create SLSC images. For example, small targets (e.g., <1-4 mm in diameter) can be imaged with larger M values to boost target contrast and resolution, and contrast can be further improved by reducing the illuminating beam to a size that is smaller than the target size. Overall, the presented theory provides a promising foundation to support a variety of coherence-based photoacoustic signal processing methods, and the associated theory-based simulation methods are more straightforward than the existing k-Wave simulation methods for SLSC images.
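The SLSC images discussed above sum the normalized spatial correlation of delayed channel data over the first M lags. The following is a minimal illustrative sketch of that idea; the function name, axial kernel length, and per-lag averaging details are my assumptions, not the authors' implementation or their theoretical coherence functions.

```python
# Hypothetical SLSC sketch: channel_data is a (num_channels, num_samples)
# array of delayed RF data for one image line. Each output sample sums the
# normalized inter-channel correlation over lags m = 1..M, computed within
# a short axial kernel centered on that sample.
import numpy as np

def slsc_line(channel_data, M, kernel=8):
    n_ch, n_samp = channel_data.shape
    out = np.zeros(n_samp)
    for n in range(n_samp):
        lo, hi = max(0, n - kernel // 2), min(n_samp, n + kernel // 2 + 1)
        x = channel_data[:, lo:hi]
        total = 0.0
        for m in range(1, M + 1):
            a, b = x[:-m], x[m:]                # channel pairs separated by lag m
            num = np.sum(a * b, axis=1)
            den = np.sqrt(np.sum(a * a, axis=1) * np.sum(b * b, axis=1))
            valid = den > 0
            if np.any(valid):
                total += np.mean(num[valid] / den[valid])  # mean correlation at lag m
        out[n] = total
    return out
```

For perfectly coherent channel data every lag contributes a correlation of 1, so the output saturates at M, which is consistent with the abstract's observation that the choice of M trades off contrast and resolution.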
|
16
|
Najafzadeh E, Farnia P, Lavasani SN, Basij M, Yan Y, Ghadiri H, Ahmadian A, Mehrmohammadi M. Photoacoustic image improvement based on a combination of sparse coding and filtering. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200164RR. [PMID: 33029991 PMCID: PMC7540346 DOI: 10.1117/1.jbo.25.10.106001] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 09/16/2020] [Indexed: 05/07/2023]
Abstract
SIGNIFICANCE Photoacoustic imaging (PAI) has developed rapidly across a broad range of diagnostic applications. The efficiency of light-to-sound conversion in PAI is limited by the ubiquitous noise arising from the tissue background, leading to a low signal-to-noise ratio (SNR), and thus a poor quality of images. Frame averaging has been widely used to reduce the noise; however, it compromises the temporal resolution of PAI. AIM We propose an approach for photoacoustic (PA) signal denoising based on a combination of low-pass filtering and sparse coding (LPFSC). APPROACH The LPFSC method is based on the fact that the PA signal can be modeled as the sum of low-frequency and sparse components, which allows for the reduction of noise levels using a hybrid alternating direction method of multipliers in an optimization process. RESULTS The LPFSC method was evaluated using in-silico and experimental phantoms. The results show a 26% improvement in the peak SNR of the PA signal compared to the averaging method for in-silico data. On average, the LPFSC method offers a 63% improvement in the image contrast-to-noise ratio and a 33% improvement in the structural similarity index compared to the averaging method for objects located at three different depths, ranging from 10 to 20 mm, in a porcine tissue phantom. CONCLUSIONS The proposed method is an effective tool for PA signal denoising, and it ultimately improves the quality of reconstructed images, especially at higher depths, without limiting the image acquisition speed.
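The low-frequency-plus-sparse signal model behind LPFSC can be illustrated with a toy alternating scheme: estimate the low-frequency component with a low-pass filter, then soft-threshold the residual to estimate the sparse component. This is a didactic sketch only, not the authors' hybrid ADMM solver; all names, the moving-average filter, and the parameters are assumptions.

```python
# Toy sketch of the LPFSC signal model: y ≈ l + s + noise, where l is
# low-frequency (moving-average low-pass estimate) and s is sparse
# (soft-thresholded residual). Alternating the two steps denoises y.
import numpy as np

def soft_threshold(x, t):
    """Shrink values toward zero; the standard sparsity-promoting operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lpfsc_toy(y, window=15, thresh=0.5, iters=20):
    s = np.zeros_like(y)
    kernel = np.ones(window) / window
    for _ in range(iters):
        l = np.convolve(y - s, kernel, mode="same")  # low-pass step
        s = soft_threshold(y - l, thresh)            # sparsity step
    return l, s   # denoised signal is l + s
```

On a slow sinusoid with one added spike and small noise, the spike ends up in s and the sinusoid in l, while the thresholding suppresses the background noise, which mirrors the decomposition the abstract describes.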
Affiliation(s)
- Ebrahim Najafzadeh: Tehran University of Medical Sciences, Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran, Iran; Tehran University of Medical Sciences, Research Centre of Biomedical Technology and Robotics, Imam Khomeini Hospital Complex, Tehran, Iran
- Parastoo Farnia: Tehran University of Medical Sciences, Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran, Iran; Tehran University of Medical Sciences, Research Centre of Biomedical Technology and Robotics, Imam Khomeini Hospital Complex, Tehran, Iran
- Saeedeh N. Lavasani: Tehran University of Medical Sciences, Research Centre of Biomedical Technology and Robotics, Imam Khomeini Hospital Complex, Tehran, Iran; Shahid Beheshti University of Medical Sciences, Department of Biomedical Engineering and Medical Physics, Faculty of Medicine, Tehran, Iran
- Maryam Basij: Wayne State University, Department of Biomedical Engineering, Detroit, Michigan, United States
- Yan Yan: Wayne State University, Department of Biomedical Engineering, Detroit, Michigan, United States
- Hossein Ghadiri: Tehran University of Medical Sciences, Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran, Iran; Tehran University of Medical Sciences, Research Center for Molecular and Cellular Imaging, Tehran, Iran
- Alireza Ahmadian: Tehran University of Medical Sciences, Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran, Iran; Tehran University of Medical Sciences, Research Centre of Biomedical Technology and Robotics, Imam Khomeini Hospital Complex, Tehran, Iran
- Mohammad Mehrmohammadi: Wayne State University, Department of Biomedical Engineering, Detroit, Michigan, United States; Wayne State University, Department of Electrical and Computer Engineering, Detroit, Michigan, United States
|
17
|
Gonzalez EA, Bell MAL. GPU implementation of photoacoustic short-lag spatial coherence imaging for improved image-guided interventions. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:1-19. [PMID: 32713168 PMCID: PMC7381831 DOI: 10.1117/1.jbo.25.7.077002] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 06/29/2020] [Indexed: 05/04/2023]
Abstract
SIGNIFICANCE Photoacoustic-based visual servoing is a promising technique for surgical tool tip tracking and automated visualization of photoacoustic targets during interventional procedures. However, one outstanding challenge has been the reliability of obtaining segmentations using low-energy light sources that operate within existing laser safety limits. AIM We developed the first known graphical processing unit (GPU)-based real-time implementation of short-lag spatial coherence (SLSC) beamforming for photoacoustic imaging and applied this real-time algorithm to improve signal segmentation during photoacoustic-based visual servoing with low-energy lasers. APPROACH A 1-mm-core-diameter optical fiber was inserted into ex vivo bovine tissue. Photoacoustic-based visual servoing was implemented as the fiber was manually displaced by a translation stage, which provided ground truth measurements of the fiber displacement. GPU-SLSC results were compared with a central processing unit (CPU)-SLSC approach and an amplitude-based delay-and-sum (DAS) beamforming approach. Performance was additionally evaluated with in vivo cardiac data. RESULTS The GPU-SLSC implementation achieved frame rates up to 41.2 Hz, representing a factor of 348 speedup when compared with offline CPU-SLSC. In addition, GPU-SLSC successfully recovered low-energy signals (i.e., ≤268 μJ) with mean ± standard deviation of signal-to-noise ratios of 11.2 ± 2.4 (compared with 3.5 ± 0.8 with conventional DAS beamforming). When energies were lower than the safety limit for skin (i.e., 394.6 μJ for 900-nm wavelength laser light), the median and interquartile range (IQR) of visual servoing tracking errors obtained with GPU-SLSC were 0.64 and 0.52 mm, respectively (which were lower than the median and IQR obtained with DAS by 1.39 and 8.45 mm, respectively). GPU-SLSC additionally reduced the percentage of failed segmentations when applied to in vivo cardiac data. CONCLUSIONS Results are promising for the use of low-energy, miniaturized lasers to perform GPU-SLSC photoacoustic-based visual servoing in the operating room with laser pulse repetition frequencies as high as 41.2 Hz.
Affiliation(s)
- Eduardo A. Gonzalez: Johns Hopkins University, School of Medicine, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Muyinatu A. Lediju Bell: Johns Hopkins University, School of Medicine, Department of Biomedical Engineering, Baltimore, Maryland, United States; Johns Hopkins University, Whiting School of Engineering, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States; Johns Hopkins University, Whiting School of Engineering, Department of Computer Science, Baltimore, Maryland, United States
|
18
|
Kempski KM, Graham MT, Gubbi MR, Palmer T, Lediju Bell MA. Application of the generalized contrast-to-noise ratio to assess photoacoustic image quality. BIOMEDICAL OPTICS EXPRESS 2020; 11:3684-3698. [PMID: 33014560 PMCID: PMC7510924 DOI: 10.1364/boe.391026] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2020] [Revised: 05/01/2020] [Accepted: 05/25/2020] [Indexed: 05/10/2023]
Abstract
The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images. Although gCNR was initially demonstrated with ultrasound images, the metric is theoretically applicable to multiple types of medical images. In this paper, the applicability of gCNR to photoacoustic images is investigated. The gCNR was computed for both simulated and experimental photoacoustic images generated by amplitude-based (i.e., delay-and-sum) and coherence-based (i.e., short-lag spatial coherence) beamformers. These gCNR measurements were compared to three more traditional image quality metrics (i.e., contrast, contrast-to-noise ratio, and signal-to-noise ratio) applied to the same datasets. An increase in qualitative target visibility generally corresponded with increased gCNR. In addition, gCNR magnitude was more directly related to the separability of photoacoustic signals from their background, which degraded with the presence of limited bandwidth artifacts and increased levels of channel noise. At high gCNR values (i.e., 0.95-1), contrast, contrast-to-noise ratio, and signal-to-noise ratio varied by up to 23.7-56.2 dB, 2.0-3.4, and 26.5-7.6×10^20, respectively, for simulated, experimental phantom, and in vivo data. Therefore, these traditional metrics can experience large variations when a target is fully detectable, and additional increases in these values would have no impact on photoacoustic target detectability. In addition, gCNR is robust to changes in traditional metrics introduced by applying a minimum threshold to image amplitudes. In tandem with other photoacoustic image quality metrics and with a defined range of 0 to 1, gCNR has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques and when reporting quantitative performance without an opportunity to qualitatively assess corresponding images (e.g., in text-only abstracts).
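The gCNR discussed above is defined as one minus the overlap of the target and background pixel-amplitude distributions, which is what bounds it to [0, 1]. A minimal sketch of that computation follows; the function name and histogram binning choices are assumptions, not details from the paper.

```python
# Sketch of the gCNR metric: estimate the target and background amplitude
# distributions with histograms over a shared range, then compute
# gCNR = 1 - OVL, where OVL is the summed bin-wise overlap.
import numpy as np

def gcnr(target_pixels, background_pixels, bins=256):
    lo = min(target_pixels.min(), background_pixels.min())
    hi = max(target_pixels.max(), background_pixels.max())
    h_t, _ = np.histogram(target_pixels, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(background_pixels, bins=bins, range=(lo, hi))
    p_t = h_t / h_t.sum()   # normalize counts to probabilities
    p_b = h_b / h_b.sum()
    return 1.0 - np.sum(np.minimum(p_t, p_b))
```

For a fully detectable target the two histograms barely overlap and gCNR saturates near 1, which matches the abstract's point that further increases in contrast, CNR, or SNR beyond that regime carry no additional detectability information.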
Affiliation(s)
- Kelley M Kempski: Biomedical Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA
- Michelle T Graham: Electrical & Computer Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA
- Mardava R Gubbi: Electrical & Computer Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA
- Theron Palmer: Biomedical Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A Lediju Bell: Biomedical Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA; Electrical & Computer Engineering Department, Johns Hopkins University, Baltimore, MD 21218, USA; Computer Science Department, Johns Hopkins University, Baltimore, MD 21218, USA
|
19
|
Graham M, Assis F, Allman D, Wiacek A, Gonzalez E, Gubbi M, Dong J, Hou H, Beck S, Chrispin J, Bell MAL. In Vivo Demonstration of Photoacoustic Image Guidance and Robotic Visual Servoing for Cardiac Catheter-Based Interventions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1015-1029. [PMID: 31502964 DOI: 10.1109/tmi.2019.2939568] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Cardiac interventional procedures are often performed under fluoroscopic guidance, exposing both the patient and operators to ionizing radiation. To reduce this risk of radiation exposure, we are exploring the use of photoacoustic imaging paired with robotic visual servoing for cardiac catheter visualization and surgical guidance. A cardiac catheterization procedure was performed on two in vivo swine after inserting an optical fiber into the cardiac catheter to produce photoacoustic signals from the tip of the fiber-catheter pair. A combination of photoacoustic imaging and robotic visual servoing was employed to visualize and maintain constant sight of the catheter tip in order to guide the catheter through the femoral or jugular vein, toward the heart. Fluoroscopy provided initial ground truth estimates for 1D validation of the catheter tip positions, and these estimates were refined using a 3D electromagnetic-based cardiac mapping system as the ground truth. The 1D and 3D root mean square errors ranged 0.25-2.28 mm and 1.24-1.54 mm, respectively. The catheter tip was additionally visualized at three locations within the heart: (1) inside the right atrium, (2) in contact with the right ventricular outflow tract, and (3) inside the right ventricle. Lasered regions of cardiac tissue were resected for histopathological analysis, which revealed no laser-related tissue damage, despite the use of 2.98 mJ per pulse at the fiber tip (379.2 mJ/cm2 fluence). In addition, there was a 19 dB difference in photoacoustic signal contrast when visualizing the catheter tip pre- and post-endocardial tissue contact, which is promising for contact confirmation during cardiac interventional procedures (e.g., cardiac radiofrequency ablation). These results are additionally promising for the use of photoacoustic imaging to guide cardiac interventions by providing depth information and enhanced visualization of catheter tip locations within blood vessels and within the beating heart.
|
20
|
Piao D. Laparoscopic diffuse reflectance spectroscopy of an underlying tubular inclusion: a phantom study. APPLIED OPTICS 2019; 58:9689-9699. [PMID: 31873570 DOI: 10.1364/ao.58.009689] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Accepted: 11/04/2019] [Indexed: 06/10/2023]
Abstract
We demonstrate diffuse reflectance spectroscopy (DRS) of a subsurface tubular inclusion by using a fiber probe having a single source-detector pair attached to a laparoscopic bipolar device. A forward model was also developed for DRS sensing of an underlying long absorbing tubular inclusion set in parallel to the tissue surface, normal to the line of sight of the source-detector pair, and equidistant from the source and the detector. The model agreed with measurements performed at 500 nm and using a 10 mm source-detector separation (SDS) on an aqueous tissue phantom embedding a tubing of 2 or 4 mm inner diameter that contained 9.1% to 33.3% red dye at a depth of up to 11.5 mm. When tested on solid phantoms using the 10 mm SDS, a tubular inclusion of ≥3 mm inner diameter containing 0.05% red dye at a background absorption coefficient of 0.021 mm^-1 caused a ≥8% change of the signal at 500 nm versus the baseline when the inclusion was shallower than 5 mm. When assessed on avian muscle tissue having a 4 mm tubular inclusion embedded at an edge depth of 2 mm, DRS with the 10 mm SDS differentiated the following contents of the inclusion: 33.3% red dye (mimicking blood), 33.3% green dye, 33.3% yellow dye (mimicking bile), water (mimicking urine), and air.
|