1. Wang R, Tan G, Liu X. Robust tip localization under continuous spatial and temporal constraints during 2D ultrasound-guided needle puncture. Int J Comput Assist Radiol Surg 2023;18:2233-2242. [PMID: 37160581] [DOI: 10.1007/s11548-023-02894-2]
Abstract
PURPOSE During ultrasound-guided (US-guided) needle puncture for minimally invasive procedures, automated needle tip localization can help clinicians capture small tips in US images easily and precisely, providing an obvious tip indicator on the screen and increasing their confidence during the procedure. However, automated needle tip localization in US images is challenging due to severe interference from various echoes. METHODS We propose a method that localizes needle tips under continuous spatial and temporal constraints in the real-time US frame stream. A temporal constraint is first acquired by detecting translational tip motion in motion-enhanced US images with a deep learning-based (DL-based) detector. A spatial constraint and candidate tip locations are obtained by detecting needle shafts and tips in the raw grayscale B-mode images with another DL-based detector. To provide continuous constraints, the tip velocity estimated from the acquired temporal constraint is used to predict tip locations in frames where no temporal or spatial constraint is detected. Finally, tip coordinates are precisely localized among the candidate tips under the spatial and temporal constraints. RESULTS Experimental results on 1121 US images from porcine organ punctures and 895 images from human thyroid punctures demonstrate that the proposed method is effective and efficient, surpassing existing methods. On the porcine organ data, a 97.2% recall rate, a 91.9% precision rate on tip detection, and a 0.88 ± 0.70 mm root-mean-square error (RMSE) on tip localization were achieved. On the human thyroid data, which were not involved in training, 86.5% recall, 84.3% precision, and 0.92 ± 0.78 mm RMSE were achieved. A running speed of 14.5 frames per second was achieved using only a CPU.
CONCLUSION The proposed method provides a more reliable solution for automated needle tip localization during US-guided needle puncture and is more robust to interference. Its fast running speed makes it practical for real-time US streams.
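The velocity-based prediction step described in METHODS, which carries the tip location through frames where no constraint is detected, can be sketched as a constant-velocity model (an illustrative sketch, not the authors' code; the function name and single-frame time step are assumptions):

```python
import numpy as np

def predict_tip(history, dt=1.0):
    """Constant-velocity extrapolation of the needle tip from the last two
    confirmed detections; falls back to the last position if only one exists.
    history: sequence of (x, y) tip coordinates, oldest to newest."""
    p = np.asarray(history, dtype=float)
    if len(p) < 2:
        return p[-1]                      # not enough history: hold position
    velocity = (p[-1] - p[-2]) / dt       # pixels per frame
    return p[-1] + velocity * dt

# Tip moving 3 px right and 1 px down per frame
predicted = predict_tip([(10.0, 50.0), (13.0, 51.0)])
```

The predicted location can then be checked against the candidate tips produced by the spatial detector.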
Affiliation(s)
- Ruixin Wang, Guoping Tan: College of Computer and Information, Hohai University, Nanjing, 210098, China
- Xiaohui Liu: The First People's Hospital of Kunshan, Affiliated Kunshan Hospital of Jiangsu University, Kunshan, 215300, China
2. Arapi V, Hardt-Stremayr A, Weiss S, Steinbrener J. Bridging the simulation-to-real gap for AI-based needle and target detection in robot-assisted ultrasound-guided interventions. Eur Radiol Exp 2023;7:30. [PMID: 37332035] [DOI: 10.1186/s41747-023-00344-x]
Abstract
BACKGROUND Artificial intelligence (AI)-powered, robot-assisted, ultrasound (US)-guided interventional radiology has the potential to increase the efficacy and cost-efficiency of interventional procedures while improving postsurgical outcomes and reducing the burden on medical personnel. METHODS To overcome the lack of clinical data available for training state-of-the-art AI models, we propose a novel approach for generating synthetic ultrasound data from real, clinical, preoperative three-dimensional (3D) data of different imaging modalities. With the synthetic data, we trained a deep learning-based detection algorithm for localizing the needle tip and target anatomy in US images. We validated our models on real, in vitro US data. RESULTS The resulting models generalize well to unseen synthetic data and to experimental in vitro data, making the proposed approach a promising method for creating AI-based models for needle and target detection in minimally invasive US-guided procedures. Moreover, we show that with a one-time calibration of the US and robot coordinate frames, our tracking algorithm can be used to accurately fine-position the robot within reach of the target based on 2D US images alone. CONCLUSIONS The proposed data generation approach is sufficient to bridge the simulation-to-real gap and has the potential to overcome data paucity challenges in interventional radiology. The proposed AI-based detection algorithm shows very promising results in terms of accuracy and frame rate. RELEVANCE STATEMENT This approach can facilitate the development of next-generation AI algorithms for patient anatomy detection and needle tracking in US and their application to robotics. KEY POINTS
• AI-based methods show promise for needle and target detection in US-guided interventions.
• Publicly available, annotated datasets for training AI models are limited.
• Synthetic, clinical-like US data can be generated from magnetic resonance or computed tomography data.
• Models trained with synthetic US data generalize well to real in vitro US data.
• Target detection with an AI model can be used for fine positioning of the robot.
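One common ingredient when making rendered data look "clinical-like" is multiplicative speckle noise; the toy sketch below illustrates only that ingredient (the paper's actual pipeline, which renders US from MR/CT volumes, is far more involved, and every name and parameter here is an assumption):

```python
import numpy as np

def add_speckle(image, sigma=0.3, seed=0):
    """Apply multiplicative Gaussian speckle to a normalized image in [0, 1].
    A toy stand-in for one step of a full US simulation pipeline."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=1.0, scale=sigma, size=image.shape)
    return np.clip(image * noise, 0.0, 1.0)

clean = np.full((64, 64), 0.5)   # flat mid-gray "tissue" patch
speckled = add_speckle(clean)
```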
Affiliation(s)
- Visar Arapi, Alexander Hardt-Stremayr, Stephan Weiss, Jan Steinbrener: Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
3. Yang H, Shan C, Kolen AF, de With PHN. Medical instrument detection in ultrasound: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10287-1]
Abstract
Medical instrument detection is essential for computer-assisted interventions, since it helps clinicians find instruments efficiently and interpret them better, thereby improving clinical outcomes. This article reviews image-based medical instrument detection methods for ultrasound-guided (US-guided) operations. Literature was selected through an exhaustive search of different sources, including Google Scholar, PubMed, and Scopus. We first discuss the key clinical applications of medical instrument detection in US, including regional anesthesia delivery, biopsy taking, prostate brachytherapy, and catheterization. We then present a comprehensive review of instrument detection methodologies, covering both non-machine-learning and machine-learning methods; the conventional non-machine-learning methods were extensively studied before the era of machine learning. The principal issues and potential research directions for future studies are summarized for the computer-assisted intervention community. In conclusion, although promising results have been obtained by current (non-)machine-learning methods for different clinical applications, thorough clinical validations are still required.
4. Zhao Y, Lu Y, Lu X, Jin J, Tao L, Chen X. Biopsy needle segmentation using deep networks on inhomogeneous ultrasound images. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:553-556. [PMID: 36086307] [DOI: 10.1109/embc48229.2022.9871059]
Abstract
In minimally invasive interventional surgery, ultrasound imaging is usually used to provide real-time feedback for reaching the best diagnostic result or realizing a treatment plan, so accurately obtaining the position of the biopsy needle is a problem worth studying. We generated 2D simulated ultrasound images containing a biopsy needle, with backgrounds drawn from real breast ultrasound images. Using these images, we analyzed the effectiveness of different deep learning networks at needle localization, with the goal of recovering needle positions in non-uniform ultrasound images. The results show that attention U-Net performed best and can accurately reflect the real position of the biopsy needle: IoU and precision reach 90.19% and 96.25%, and the angular error is 0.40°. Clinical relevance: based on a deep network, for 2D ultrasound images containing a biopsy needle, localization precision can reach 96.25% with an angular error of 0.40°.
5. Approaching automated applicator digitization from a new angle: using sagittal images to improve deep learning accuracy and robustness in high-dose-rate prostate brachytherapy. Brachytherapy 2022;21:520-531. [DOI: 10.1016/j.brachy.2022.02.005]
6. Weakly-supervised learning for catheter segmentation in 3D frustum ultrasound. Comput Med Imaging Graph 2022;96:102037. [DOI: 10.1016/j.compmedimag.2022.102037]
7. Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process. Int J Comput Assist Radiol Surg 2021;17:295-303. [PMID: 34677747] [DOI: 10.1007/s11548-021-02519-6]
Abstract
PURPOSE Robot-assisted needle insertion guided by 2D ultrasound (US) can effectively improve the accuracy and success rate of clinical puncture. To this end, automatic and accurate needle-tracking methods are important for monitoring the puncture process, preventing the needle from deviating from the intended path, and reducing the risk of injury to surrounding tissues. This work aims to develop a framework for automatic and accurate detection of an inserted needle in 2D US images during the insertion process. METHODS We propose a novel convolutional neural network architecture comprising a two-channel encoder and a single-channel decoder for needle segmentation, using needle motion information extracted from two adjacent US image frames. On top of this network, we further propose an automatic needle detection framework: based on the prediction result of the previous frame, a region of interest around the needle is extracted from the US image and fed into the proposed network to achieve finer and faster continuous needle localization. RESULTS The performance of our method was evaluated on 1000 pairs of US images extracted from robot-assisted needle insertions into freshly excised bovine and porcine tissues. The needle segmentation network achieved 99.7% accuracy, 86.2% precision, 89.1% recall, and an F1-score of 0.87. The needle detection framework successfully localized the needle with a mean tip error of 0.45 ± 0.33 mm and a mean orientation error of 0.42° ± 0.34°, with a total processing time of 50 ms per image. CONCLUSION The proposed framework demonstrated the capability to realize robust, accurate, and real-time needle localization during robot-assisted needle insertion. It has promising applications in tracking the needle and ensuring the safety of robot-assisted automatic puncture during challenging US-guided minimally invasive procedures.
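The ROI step in the framework above, cropping around the previous frame's prediction to speed up and refine the next localization, can be sketched like this (window size, names, and the clamping policy are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def crop_roi(image, prev_tip, half_size=64):
    """Crop a square ROI around the tip predicted in the previous frame,
    clamped so the window stays fully inside the image."""
    h, w = image.shape[:2]
    x, y = int(prev_tip[0]), int(prev_tip[1])
    x0 = max(0, min(x - half_size, w - 2 * half_size))
    y0 = max(0, min(y - half_size, h - 2 * half_size))
    roi = image[y0:y0 + 2 * half_size, x0:x0 + 2 * half_size]
    return roi, (x0, y0)   # offset maps ROI coordinates back to the full frame

frame = np.zeros((480, 640), dtype=np.uint8)
roi, offset = crop_roi(frame, prev_tip=(320, 240), half_size=64)
```

The returned offset lets a detection made inside the ROI be mapped back to full-frame coordinates.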
8. Yang H, Shan C, Kolen AF, de With PHN. Efficient medical instrument detection in 3D volumetric ultrasound data. IEEE Trans Biomed Eng 2021;68:1034-1043. [PMID: 32746017] [DOI: 10.1109/tbme.2020.2999729]
Abstract
Ultrasound-guided procedures have been applied in many clinical therapies, such as cardiac catheterization and regional anesthesia. Medical instrument detection in 3D ultrasound (US) is highly desired, but existing approaches are far from real-time performance. Our objective is to investigate an efficient instrument detection method in 3D US for practical clinical use. We propose a novel Multi-dimensional Mixed Network for efficient instrument detection in 3D US, which extracts discriminating features at the 3D full-image level with a 3D encoder, then applies a specially designed dimension-reduction block to reduce the spatial complexity of the feature maps by projecting from 3D space into 2D space. A 2D decoder detects the instrument along the specified axes; by back-projecting the predicted 2D outputs, the instrument is detected and visualized in the 3D volume. Furthermore, to enable the network to better learn discriminative information, we propose a multi-level loss function that captures both pixel- and image-level differences. We carried out extensive experiments on two datasets for two tasks: (1) catheter detection for cardiac RF ablation and (2) needle detection for regional anesthesia. Our experiments show that the proposed method achieves a detection error of 2-3 voxels at about 0.12 s per 3D US volume, 3-8 times faster than the state-of-the-art methods, reaching real-time performance. The results show that our proposed method has significant clinical value for real-time 3D US-guided intervention.
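The core dimension-reduction idea, detecting in 2D views of the 3D volume and then fusing the 2D results back into 3D coordinates, can be illustrated with maximum-intensity projections and an argmax standing in for the learned encoder/decoder (purely illustrative, not the paper's network):

```python
import numpy as np

def locate_from_projections(volume):
    """Find a single bright voxel by detecting it in two orthogonal 2D
    projections, then fusing the 2D coordinates into a 3D position.
    volume is indexed (z, y, x); argmax replaces the learned 2D decoder."""
    proj_z = volume.max(axis=0)                        # (y, x) top view
    proj_y = volume.max(axis=1)                        # (z, x) side view
    y, x = np.unravel_index(proj_z.argmax(), proj_z.shape)
    z, _ = np.unravel_index(proj_y.argmax(), proj_y.shape)
    return int(z), int(y), int(x)

vol = np.zeros((16, 16, 16))
vol[5, 7, 11] = 1.0                                    # simulated instrument tip
found = locate_from_projections(vol)
```

Working on two 2D maps instead of the full volume is what buys the reported speedup over fully 3D processing.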
9. Zhang Y, Tian Z, Lei Y, Wang T, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Automatic multi-needle localization in ultrasound images using large margin mask RCNN for ultrasound-guided prostate brachytherapy. Phys Med Biol 2020;65:205003. [DOI: 10.1088/1361-6560/aba410]
10. Efficient and robust instrument segmentation in 3D ultrasound using patch-of-interest-FuseNet with hybrid loss. Med Image Anal 2020;67:101842. [PMID: 33075639] [DOI: 10.1016/j.media.2020.101842]
Abstract
Instrument segmentation plays a vital role in 3D ultrasound (US)-guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desired, since it can facilitate the operation, reduce operational complexity, and therefore improve the outcome. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical usage. Lately, fully convolutional neural networks (FCNs), including 2D and 3D FCNs, have been used in different volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in volumetric data, while a 3D FCN requires high computation cost and a large amount of training data. Moreover, with limited computation resources, 3D FCNs are commonly applied with a patch-based strategy, which is therefore not efficient for clinical applications. To address these issues, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector efficiently selects the regions containing the instrument, while FuseNet uses 2D and 3D FCN features to hierarchically exploit contextual information. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve segmentation performance. On a challenging ex-vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to state-of-the-art methods. In addition, starting from the model pre-trained on the ex-vivo dataset, our method can be adapted to an in-vivo guidewire dataset from a different cardiac operation, achieving a Dice score of 66.5%. More crucially, with the POI-based strategy, segmentation time is reduced to around 1.3 seconds per volume, showing that the proposed method is promising for clinical use.
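The class-balanced focal loss component of the hybrid loss down-weights easy, well-classified background voxels so the rare instrument voxels dominate training. A minimal numpy version (the paper combines it with a contextual loss inside POI-FuseNet; the parameter values below are common defaults, assumed rather than taken from the paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Class-balanced focal loss for per-pixel foreground probabilities p
    and binary labels y. Easy, well-classified pixels contribute little."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return float(-(w * (1 - pt) ** gamma * np.log(pt)).mean())

y = np.array([1, 0, 0, 0])                   # one instrument pixel, three background
confident = focal_loss(np.array([0.9, 0.1, 0.1, 0.1]), y)
wrong = focal_loss(np.array([0.1, 0.9, 0.9, 0.9]), y)
```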
11. Zhang Y, He X, Tian Z, Jeong JJ, Lei Y, Wang T, Zeng Q, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-needle detection in 3D ultrasound images using unsupervised order-graph regularized sparse dictionary learning. IEEE Trans Med Imaging 2020;39:2302-2315. [PMID: 31985414] [PMCID: PMC7370243] [DOI: 10.1109/tmi.2020.2968770]
Abstract
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step in treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, ignoring the massive database of US images without needles. In this paper, we propose a workflow for multi-needle detection that treats the images without needles as auxiliary data. Concretely, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning. Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to yield centers. From the obtained centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect needles with the random sample consensus (RANSAC) algorithm per ROI and locate the tips by finding the sharp intensity drop along the detected axis of each needle. Extensive experiments were conducted on a phantom dataset and a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of the proposed workflow. Specifically, our method correctly detects 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique provides accurate multi-needle detection for US-guided HDR prostate brachytherapy, facilitating the clinical workflow.
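The per-ROI RANSAC step named above can be sketched generically (2D here for brevity where the paper works per 3D ROI; the iteration count and inlier tolerance are assumed values, not the authors'):

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=1.0, rng_seed=0):
    """Fit a 2D line to candidate needle pixels with RANSAC.
    Returns (point_on_line, unit_direction, inlier_mask)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(rng_seed)
    best_p = best_d = best_mask = None
    best_count = -1
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0.0:
            continue
        d = d / norm
        rel = pts - pts[i]
        # perpendicular distance of every point to the candidate line
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        mask = dist < inlier_tol
        if mask.sum() > best_count:
            best_count = int(mask.sum())
            best_p, best_d, best_mask = pts[i], d, mask
    return best_p, best_d, best_mask

# Collinear candidate pixels along y = x, plus one echo-artifact outlier
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (10, 0)]
p, d, mask = ransac_line(pts)
```

The inlier mask isolates shaft pixels; the tip search along the fitted axis then uses only those.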
12. Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020;47:4115-4124. [PMID: 32484573] [DOI: 10.1002/mp.14307]
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or as a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning, and manually identifying the source path is labor-intensive and time-consuming. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS An attention-gated U-Net incorporating a total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images provided by experienced physicists as ground truth, paired with the original MRI images. After the network was trained, MR images of new prostate cancer patients receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessment of the proposed method was based on catheter shaft and tip errors relative to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tips, 87% were detected within an error of ± 2.0 mm, and more than 71% were localized within an absolute error of no more than 1.0 mm. For catheter shafts, 97% were detected with an error of less than 2.0 mm, and 63% within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI images for HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
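The TV regularization term that encodes catheter continuity penalizes differences between neighboring pixels of the prediction; a 2D numpy sketch of just that term (the paper applies it inside a 3D attention U-Net's training loss; this standalone version is illustrative):

```python
import numpy as np

def tv_loss(prob_map):
    """Anisotropic total-variation penalty: sum of absolute differences
    between neighboring pixels of a predicted probability map. Low for
    spatially continuous (catheter-like) shapes, high for scattered noise."""
    dy = np.abs(np.diff(prob_map, axis=0)).sum()
    dx = np.abs(np.diff(prob_map, axis=1)).sum()
    return float(dx + dy)

stripe = np.zeros((8, 8)); stripe[:, 3] = 1.0    # continuous vertical "catheter"
noisy = np.zeros((8, 8)); noisy[::2, ::2] = 1.0  # scattered false positives
```

Minimizing this term alongside the segmentation loss pushes the network toward connected, catheter-shaped predictions.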
Affiliation(s)
- Xianjin Dai, Yang Lei, Yupei Zhang, Richard L J Qiu, Tonghe Wang, Sean A Dresser, Walter J Curran, Pretesh Patel, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
13. Zhang Y, Lei Y, Qiu RLJ, Wang T, Wang H, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-needle localization with attention U-Net in US-guided HDR prostate brachytherapy. Med Phys 2020;47:2735-2745. [PMID: 32155666] [DOI: 10.1002/mp.14128]
Abstract
PURPOSE Ultrasound (US)-guided high-dose-rate (HDR) prostate brachytherapy requires clinicians to place HDR needles (catheters) into the prostate gland under transrectal US (TRUS) guidance in the operating room. The quality of the subsequent radiation treatment plan is largely dictated by the needle placements, which vary with the experience level of the clinicians and the procedure protocols. Real-time plan dose distribution, if available, could be a vital tool for a more objective assessment of the needle placements, hence potentially improving the radiation plan quality and the treatment outcome. However, due to the low signal-to-noise ratio (SNR) of US imaging, real-time multi-needle segmentation in 3D TRUS, the major obstacle to real-time dose mapping, has not been realized to date. In this study, we propose a deep learning-based method that enables accurate and real-time digitization of multiple needles in 3D TRUS images for HDR prostate brachytherapy. METHODS A deep learning model based on the U-Net architecture was developed to segment multiple needles in 3D TRUS images. Attention gates were included in our model to improve prediction on small needle points. Furthermore, the spatial continuity of needles was encoded into our model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, with binary needle annotation images provided as ground truth. The trained network was then used to localize and segment the HDR needles in a new patient's TRUS images. We evaluated the proposed method on needle shaft and tip errors against manually defined ground truth and compared it with other state-of-the-art methods (U-Net and deeply supervised attention U-Net). RESULTS Our method detected 96% of the 339 needles from 23 HDR prostate brachytherapy patients, with a shaft error of 0.290 ± 0.236 mm and a tip error of 0.442 ± 0.831 mm. For shaft localization, 96% of localizations had less than 0.8 mm error (the needle diameter is 1.67 mm); for tip localization, 75% of needles had 0 mm error and 21% had 2 mm error (the TRUS slice thickness is 2 mm). No significant difference was observed (P = 0.83) between our tip localizations and the ground truth. Compared with U-Net and deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft error and tip error (P < 0.05). CONCLUSIONS We proposed a new segmentation method to precisely localize the tips and shafts of multiple needles in 3D TRUS images for HDR prostate brachytherapy. The 3D rendering of the needles could help clinicians evaluate needle placements, and it paves the way for real-time plan dose assessment tools that can further elevate the quality and outcome of HDR prostate brachytherapy.
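The attention gates used to sharpen small needle points can be illustrated with the additive-attention arithmetic reduced to fixed per-channel weights (the actual 3D model uses learned 1x1 convolutions; every weight and shape below is made up purely for illustration):

```python
import numpy as np

def attention_gate(x, g, wx, wg, psi=1.0):
    """Additive attention gate: alpha = sigmoid(psi * relu(x.wx + g.wg)).
    x: skip-connection features (h, w, c); g: gating features (h, w, c);
    wx, wg: per-channel weights (c,). Returns x reweighted by alpha."""
    q = np.maximum(x @ wx + g @ wg, 0.0)        # (h, w) pre-activation
    alpha = 1.0 / (1.0 + np.exp(-psi * q))      # attention coefficients
    return x * alpha[..., None]                 # pass-through where alpha is high

x = np.ones((2, 2, 3))
wx = wg = np.ones(3)
attended = attention_gate(x, np.ones((2, 2, 3)), wx, wg, psi=10.0)
suppressed = attention_gate(x, -np.ones((2, 2, 3)), wx, wg, psi=10.0)
```

Where gating and skip features agree, alpha saturates near 1 and the features pass through; where they cancel, the response is damped.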
Affiliation(s)
- Yupei Zhang, Yang Lei, Richard L J Qiu, Tonghe Wang, Ashesh B Jani, Walter J Curran, Pretesh Patel, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang: Department of Radiation Oncology, New York University, New York, NY, USA
14. Lee JY, Islam M, Woh JR, Washeem TSM, Ngoh LYC, Wong WK, Ren H. Ultrasound needle segmentation and trajectory prediction using excitation network. Int J Comput Assist Radiol Surg 2020;15:437-443. [PMID: 31960247] [DOI: 10.1007/s11548-019-02113-x]
Abstract
PURPOSE Ultrasound (US)-guided percutaneous kidney biopsy is a challenge for interventionists, as US artefacts prevent accurate viewing of the biopsy needle tip. Automatic needle tracking and trajectory prediction can increase operator confidence in performing biopsies, reduce procedure time, minimize the risk of inadvertent biopsy bleeding, and enable future image-guided robotic procedures. METHODS In this paper, we propose a tracking-by-segmentation model with spatial and channel "Squeeze and Excitation" (scSE) for US needle detection and trajectory prediction. We adopt a light deep learning architecture (LinkNet) as our segmentation baseline network and integrate the scSE module to learn spatial information for better prediction. The proposed model is trained with US images from anonymized kidney biopsy clips of 8 patients. The needle contour is obtained using the border-following algorithm and its area calculated using Green's formula. Trajectory prediction is made by extrapolating from the smallest bounding box that can capture the contour. RESULTS We train and test our model on a total of 996 images extracted from 102 short videos at a rate of 3 frames per second per video: 794 images are used for training and 202 for testing. Our model achieved an IoU of 41.01%, a Dice accuracy of 56.65%, an F1-score of 36.61%, and a root-mean-square angle error of 13.3°. We are thus able to predict and extrapolate the trajectory of the biopsy needle with decent accuracy, helping interventionists better perform biopsies. CONCLUSION Our novel model combining LinkNet and scSE shows promising results for kidney biopsy, implying potential for other ultrasound-guided biopsies that require needle tracking and trajectory prediction.
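The trajectory-extrapolation step can be sketched by fitting the dominant direction of the segmented pixels and extending it beyond the farthest pixel (a PCA/least-squares stand-in for the paper's bounding-box extrapolation; names and the extension length are illustrative, and which of the two line ends counts as "forward" depends on a sign convention omitted here):

```python
import numpy as np

def extrapolate_trajectory(mask, extend=50.0):
    """Fit the dominant direction of the mask pixels (via SVD) and return
    a point `extend` pixels beyond the farthest pixel along that direction."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)   # (x, y) pixel coords
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)           # principal directions
    direction = vt[0]                                # unit vector along the needle
    proj = (pts - center) @ direction                # signed positions along the axis
    tip = center + proj.max() * direction            # farthest pixel along the axis
    return direction, tip + extend * direction

mask = np.zeros((100, 100), dtype=np.uint8)
for i in range(30):
    mask[20 + i, 10 + i] = 1                         # a 45-degree needle segment
direction, predicted = extrapolate_trajectory(mask, extend=10 * np.sqrt(2))
```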
Collapse
Affiliation(s)
- Jia Yi Lee: Faculty of Engineering, National University of Singapore, Singapore
- Mobarakol Islam: Faculty of Engineering, National University of Singapore; Department of Biomedical Engineering, National University of Singapore; NUS Graduate School for Integrative Sciences and Engineering (NGS), NUS, Singapore
- Jing Ru Woh: Faculty of Engineering, National University of Singapore, Singapore
- T S Mohamed Washeem: Faculty of Engineering, National University of Singapore; Department of Biomedical Engineering, National University of Singapore, Singapore
- Lee Ying Clara Ngoh: Division of Nephrology, National University Hospital, Singapore; Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Weng Kin Wong: Division of Nephrology, National University Hospital, Singapore
- Hongliang Ren: Faculty of Engineering, National University of Singapore; Department of Biomedical Engineering, National University of Singapore, Singapore
15
Arif M, Moelker A, van Walsum T. Automatic needle detection and real-time bi-planar needle visualization during 3D ultrasound scanning of the liver. Med Image Anal 2019; 53:104-110. [DOI: 10.1016/j.media.2019.02.002]
16
Learning needle tip localization from digital subtraction in 2D ultrasound. Int J Comput Assist Radiol Surg 2019; 14:1017-1026. [DOI: 10.1007/s11548-019-01951-z]
17
Yang H, Shan C, Pourtaherian A, Kolen AF, de With PHN. Catheter segmentation in three-dimensional ultrasound images by feature fusion and model fitting. J Med Imaging (Bellingham) 2019; 6:015001. [PMID: 30662926] [DOI: 10.1117/1.jmi.6.1.015001]
Abstract
Ultrasound (US) has been increasingly used during interventions, such as cardiac catheterization. To accurately identify the catheter inside US images, extra training for physicians and sonographers is needed. As a consequence, automated segmentation of the catheter in US images and an optimized presentation view for the physician can be beneficial for improving the efficiency, safety, and outcome of interventions. For cardiac catheterization, a three-dimensional (3-D) US image is potentially attractive because it involves no ionizing radiation and provides richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and the complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method in 3-D US data through image processing techniques. Our method first applies a voxel-based classification through newly designed multiscale and multidefinition features, which provide a robust catheter voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments, using different in-vitro, ex-vivo, and in-vivo datasets. The proposed method can segment the catheter with an average tip-point error that is smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians do not have to interpret US images and can focus on the procedure itself to improve the quality of cardiac intervention.
Affiliation(s)
- Hongxu Yang: Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Caifeng Shan: Philips Research, In-Body Systems, Eindhoven, The Netherlands
- Arash Pourtaherian: Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Peter H N de With: Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
18
Daoud MI, Alshalalfah AL, Ait Mohamed O, Alazrai R. A hybrid camera- and ultrasound-based approach for needle localization and tracking using a 3D motorized curvilinear ultrasound probe. Med Image Anal 2018; 50:145-166. [PMID: 30336383] [DOI: 10.1016/j.media.2018.09.006]
Abstract
Three-dimensional (3D) motorized curvilinear ultrasound probes provide an effective, low-cost tool to guide needle interventions, but localizing and tracking the needle in 3D ultrasound volumes is often challenging. In this study, a new method is introduced to localize and track the needle using 3D motorized curvilinear ultrasound probes. In particular, a low-cost camera mounted on the probe is employed to estimate the needle axis. The camera-estimated axis is used to identify a volume of interest (VOI) in the ultrasound volume that enables high needle visibility. This VOI is analyzed using local phase analysis and the random sample consensus algorithm to refine the camera-estimated needle axis. The needle tip is determined by searching the localized needle axis using a probabilistic approach. Dynamic needle tracking in a sequence of 3D ultrasound volumes is enabled by iteratively applying a Kalman filter to estimate the VOI that includes the needle in the successive ultrasound volume and limiting the localization analysis to this VOI. A series of ex vivo animal experiments are conducted to evaluate the accuracy of needle localization and tracking. The results show that the proposed method can localize the needle in individual ultrasound volumes with maximum error rates of 0.7 mm for the needle axis, 1.7° for the needle angle, and 1.2 mm for the needle tip. Moreover, the proposed method can track the needle in a sequence of ultrasound volumes with maximum error rates of 1.0 mm for the needle axis, 2.0° for the needle angle, and 1.7 mm for the needle tip. These results suggest the feasibility of applying the proposed method to localize and track the needle using 3D motorized curvilinear ultrasound probes.
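The dynamic tracking step described above, iteratively applying a Kalman filter so that the predicted tip position defines the volume of interest (VOI) searched in the next ultrasound volume, can be sketched with a generic constant-velocity filter. The state model, noise values, and class name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class TipKalman:
    """Constant-velocity Kalman filter over a 3D tip position (illustrative).

    State x = [px, py, pz, vx, vy, vz]; the predicted position gives the
    centre of the VOI to be searched in the next ultrasound volume.
    """
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3)          # position += velocity per volume
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)              # process noise (assumed value)
        self.R = r * np.eye(3)              # measurement noise (assumed value)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                   # predicted VOI centre

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

In use, each localized tip feeds `update`, and `predict` supplies the VOI centre for the next volume, limiting the localization analysis to that region.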
Affiliation(s)
- Mohammad I Daoud: Department of Computer Engineering, German Jordanian University, Amman, Jordan
- Otmane Ait Mohamed: Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada
- Rami Alazrai: Department of Computer Engineering, German Jordanian University, Amman, Jordan
19
Katayama M, Zarbatany D, Cha SS, Fatemi M, Belohlavek M. Acoustically active catheter for intracardiac navigation by color Doppler ultrasonography. Ultrasound Med Biol 2017; 43:1888-1896. [PMID: 28595853] [PMCID: PMC5515670] [DOI: 10.1016/j.ultrasmedbio.2017.04.014]
Abstract
Navigation of intracardiac catheters by echocardiography is challenging because of the fundamental limitations of B-mode ultrasonography. We describe a catheter fitted with a piezoelectric crystal, which vibrates and produces an instantaneous marker in color flow Doppler scans. The navigation learning curve was explored first in six pigs. Accuracy and precision of targeting with the navigation marker "off" (i.e., B-mode imaging) and "on" were assessed in another six pigs. Paired comparisons confirmed significantly (p = 0.04) shorter mean distances achieved in each pig with the color Doppler marker. Pooled (mean ± standard deviation) distance of the catheter tip from the target crystal was 5.27 ± 1.62 mm by B-mode guidance and 3.66 ± 1.45 mm by color Doppler marker navigation. Dye injection targeted into the ischemic border zone was successful in 8 of 10 pigs. Intracardiac catheter navigation with color Doppler ultrasonography is more accurate compared with conventional guidance by B-mode imaging.
Affiliation(s)
- Minako Katayama: Department of Cardiovascular Diseases, Mayo Clinic, Scottsdale, Arizona, USA
- David Zarbatany: Independent Engineering Consultant, Laguna Niguel, California, USA
- Stephen S Cha: Department of Biostatistics, Mayo Clinic, Scottsdale, Arizona, USA
- Mostafa Fatemi: Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, Minnesota, USA
- Marek Belohlavek: Department of Cardiovascular Diseases, Mayo Clinic, Scottsdale, Arizona, USA
20
Pourtaherian A, Scholten HJ, Kusters L, Zinger S, Mihajlovic N, Kolen AF, Zuo F, Ng GC, Korsten HHM, de With PHN. Medical instrument detection in 3-dimensional ultrasound data volumes. IEEE Trans Med Imaging 2017; 36:1664-1675. [PMID: 28410101] [DOI: 10.1109/tmi.2017.2692302]
Abstract
Ultrasound-guided medical interventions are broadly applied in diagnostics and therapy, e.g., regional anesthesia or ablation. A guided intervention using 2-D ultrasound is challenging due to the poor instrument visibility, limited field of view, and the multi-fold coordination of the medical instrument and ultrasound plane. Recent 3-D ultrasound transducers can improve the quality of the image-guided intervention if an automated detection of the needle is used. In this paper, we present a novel method for detecting medical instruments in 3-D ultrasound data that is solely based on image processing techniques and validated on various ex vivo and in vivo data sets. In the proposed procedure, the physician places the 3-D transducer at the desired position, and the image processing automatically detects the best instrument view, so that the physician can entirely focus on the intervention. Our method is based on the classification of instrument voxels using volumetric structure directions and robust approximation of the primary tool axis. A novel normalization method is proposed for the shape and intensity consistency of instruments to improve the detection. Moreover, a novel 3-D Gabor wavelet transformation is introduced and optimally designed for revealing the instrument voxels in the volume, while remaining generic to several medical instruments and transducer types. Experiments on diverse data sets, including in vivo data from patients, show that for a given transducer and instrument type, high detection accuracies are achieved, with position errors smaller than the instrument diameter, in the 0.5-1.5 mm range on average.
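To illustrate the kind of directional filter the abstract refers to, a generic 3-D Gabor kernel is a Gaussian envelope modulated by a plane wave along a chosen direction. The parameterization below is a common textbook form and our own assumption, not the paper's optimally designed wavelet:

```python
import numpy as np

def gabor_kernel_3d(size=9, sigma=2.0, freq=0.25, direction=(1.0, 0.0, 0.0)):
    """Generic 3-D Gabor kernel: Gaussian envelope times a cosine wave
    oscillating along `direction` (normalized internally). Responses are
    strongest for intensity ridges perpendicular to `direction`.
    """
    u = np.asarray(direction, float)
    u /= np.linalg.norm(u)
    r = np.arange(size) - size // 2
    z, y, x = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    phase = 2.0 * np.pi * freq * (x * u[0] + y * u[1] + z * u[2])
    kernel = envelope * np.cos(phase)
    return kernel - kernel.mean()           # zero mean, so flat regions give no response
```

A bank of such kernels over sampled directions, convolved with the volume, yields per-voxel orientation responses of the sort used for instrument voxel classification.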
21
Beigi P, Rohling R, Salcudean T, Lessoway VA, Ng GC. Detection of an invisible needle in ultrasound using a probabilistic SVM and time-domain features. Ultrasonics 2017; 78:18-22. [PMID: 28279882] [DOI: 10.1016/j.ultras.2017.02.010]
Abstract
We propose a novel learning-based approach to detect an imperceptible hand-held needle in ultrasound images using its natural tremor motion. However, the minute tremor induced on the needle is also transferred to the tissue in contact with it, making accurate needle detection a challenging task. The proposed learning-based framework is based on temporal analysis of the phase variations of pixels to classify them according to their motion characteristics. In addition to the classification, we also obtain a probability map of the segmented pixels by cross-validation. A Hough transform is then applied to the probability map to localize the needle using the segmented pixels and the posterior probability estimate. The two-step probability-weighted localization of the segmented needle in a learning framework is the key innovation, which improves localization and allows adaptation to specific clinical applications. The method was tested in vivo for a standard 17 gauge needle inserted at 50-80° insertion angles and 40-60 mm depths. The results showed average accuracies of (2.12°, 1.69 mm) and 81% ± 4% for localization and classification, respectively.
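The probability-weighted Hough step can be sketched as a standard line Hough transform whose accumulator votes are scaled by each pixel's posterior probability rather than a binary mask. The resolution and function name below are our assumptions, not the authors' code:

```python
import numpy as np

def weighted_hough_line(prob_map, n_theta=180):
    """Localize the dominant line in a per-pixel probability map with a
    Hough transform whose votes are weighted by the probabilities.

    Returns (rho, theta) of the strongest line in normal form
    x*cos(theta) + y*sin(theta) = rho.
    """
    h, w = prob_map.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_theta))
    ys, xs = np.nonzero(prob_map > 0)
    for x, y, p in zip(xs, ys, prob_map[ys, xs]):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += p   # probability-weighted vote
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, thetas[t_idx]
```

High-confidence pixels thus dominate the accumulator, so spurious low-probability segmentations contribute little to the fitted needle line.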
Affiliation(s)
- Parmida Beigi: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Robert Rohling: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada; Mechanical Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Tim Salcudean: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Gary C Ng: Philips Ultrasound, Bothell, WA, USA
22
Beigi P, Rohling R, Salcudean SE, Ng GC. CASPER: computer-aided segmentation of imperceptible motion, a learning-based tracking of an invisible needle in ultrasound. Int J Comput Assist Radiol Surg 2017. [PMID: 28647883] [DOI: 10.1007/s11548-017-1631-4]
Abstract
PURPOSE This paper presents a new micro-motion-based approach to track a needle in ultrasound images captured by a handheld transducer. METHODS We propose a novel learning-based framework to track a handheld needle by detecting microscale variations of motion dynamics over time. The current state of the art in using motion analysis for needle detection relies on absolute motion and hence works well only when the transducer is static. We have introduced and evaluated novel spatiotemporal and spectral features, obtained from the phase image, in a self-supervised tracking framework to improve the detection accuracy in subsequent frames using incremental training. Our proposed tracking method involves volumetric feature selection and differential flow analysis to incorporate the neighboring pixels and mitigate the effects of the subtle tremor motion of a handheld transducer. To evaluate the detection accuracy, the method is tested on porcine tissue in vivo, during needle insertion in the biceps femoris muscle. RESULTS Experimental results show mean, standard deviation and root-mean-square errors of [Formula: see text], [Formula: see text] and [Formula: see text] in the insertion angle, and 0.82, 1.21 and 1.47 mm in the needle tip, respectively. CONCLUSIONS Compared to appearance-based detection approaches, the proposed method is especially suitable for needles with ultrasonic characteristics that are imperceptible in the static image and to the naked eye.
Affiliation(s)
- Parmida Beigi: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Robert Rohling: Electrical and Computer Engineering Department and Mechanical Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Septimiu E Salcudean: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, BC, Canada
- Gary C Ng: Philips Ultrasound, Bothell, WA, USA
23
Ikhsan M, Tan KK, Putra AS. Assistive technology for ultrasound-guided central venous catheter placement. J Med Ultrason (2001) 2017; 45:41-57. [DOI: 10.1007/s10396-017-0789-2]
24
Zhao Y, Shen Y, Bernard A, Cachard C, Liebgott H. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound. Ultrasonics 2017; 73:206-220. [PMID: 27668998] [DOI: 10.1016/j.ultras.2016.09.006]
Abstract
This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: principal component analysis (PCA), random Hough transform (RHT), parallel integral projection (PIP) and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and background tissue, a line-filtering pre-processing step was implemented. To make the PCA, RHT and PIP algorithms comparable with the ROI-RK method, a region of interest (ROI) strategy was added. Simulated and ex-vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. As the simulation and experimental results show, the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, the ROI-RK was the most stable and time-saving method. Normally, accuracy comes at the expense of time. However, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
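The PCA variant compared in this study rests on a standard idea: the needle axis is the first principal component of the segmented voxel cloud. A minimal numpy sketch (not the authors' ROI pipeline; selecting the tip as the extreme projection along the axis is our simplification):

```python
import numpy as np

def needle_axis_pca(voxels):
    """Estimate a needle axis from segmented voxel coordinates by PCA.

    The first right singular vector of the centred (N, 3) point cloud is
    the direction of maximum variance, i.e. the needle axis; the tip is
    taken here as the voxel with the largest projection along that axis.
    Note the sign of `direction` is arbitrary, so the "tip" may be either
    end of the segmented shaft.
    """
    pts = np.asarray(voxels, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction = vt[0]                                    # dominant axis
    tip = pts[np.argmax((pts - centroid) @ direction)]   # extreme point along axis
    return centroid, direction, tip
```

In practice this is applied after line filtering and an ROI restriction, so the cloud is dominated by needle voxels rather than tissue echoes.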
Affiliation(s)
- Yue Zhao: Control Theory and Engineering, School of Astronautics, Harbin Institute of Technology, China
- Yi Shen: Control Theory and Engineering, School of Astronautics, Harbin Institute of Technology, China
- Adeline Bernard: CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
- Christian Cachard: CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
- Hervé Liebgott: CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
25
Magaraggia J, Wei W, Weiten M, Kleinszig G, Vetter S, Franke J, John A, Egli A, Barth K, Angelopoulou E, Hornegger J. Design and evaluation of a portable intra-operative unified-planning-and-guidance framework applied to distal radius fracture surgery. Int J Comput Assist Radiol Surg 2016; 12:77-90. [PMID: 27495998] [DOI: 10.1007/s11548-016-1432-1]
Abstract
PURPOSE During a standard fracture reduction and fixation procedure of the distal radius, only fluoroscopic images are available for planning of the screw placement and monitoring of the drill bit trajectory. Our prototype intra-operative framework integrates planning and drill guidance for a simplified and improved planning transfer. METHODS Guidance information is extracted using a video camera mounted onto a surgical drill. Real-time feedback of the drill bit position is provided using an augmented view of the planning X-rays. We evaluate the accuracy of the placed screws on plastic bones and on healthy and fractured forearm specimens. We also investigate the difference in accuracy between guided screw placement versus freehand. Moreover, the accuracy of the real-time position feedback of the drill bit is evaluated. RESULTS A total of 166 screws were placed. On 37 plastic bones, our obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text] in tip position and orientation (azimuth and elevation), respectively. On the three healthy forearm specimens, our obtained accuracy was [Formula: see text] mm, [Formula: see text] and [Formula: see text]. On the two fractured specimens, we attained: [Formula: see text] mm, [Formula: see text] and [Formula: see text]. When screw plans were applied freehand (without our guidance system), the achieved accuracy was [Formula: see text] mm, [Formula: see text], while when they were transferred under guidance, we obtained [Formula: see text] mm, [Formula: see text]. CONCLUSIONS Our results show that our framework is expected to increase the accuracy in screw positioning and to improve robustness w.r.t. freehand placement.
Affiliation(s)
- Jessica Magaraggia: Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany; Graduiertenkolleg 1773 "Heterogene Bildsysteme", Cauerstr. 11, 91058, Erlangen, Germany
- Wei Wei: Siemens Healthcare GmbH, Roethelheimpark Alle 2, 91052, Erlangen, Germany
- Markus Weiten: Siemens Healthcare GmbH, Roethelheimpark Alle 2, 91052, Erlangen, Germany
- Gerhard Kleinszig: Siemens Healthcare GmbH, Roethelheimpark Alle 2, 91052, Erlangen, Germany
- Sven Vetter: Klinik für Unfallchirurgie und Orthopädie, BG Klinik Ludwigshafen, Ludwig-Guttmann-Straße 13, 67071, Ludwigshafen, Germany
- Jochen Franke: Klinik für Unfallchirurgie und Orthopädie, BG Klinik Ludwigshafen, Ludwig-Guttmann-Straße 13, 67071, Ludwigshafen, Germany
- Adrian John: Siemens AG, Healthcare Sector, Erlangen, Germany
- Adrian Egli: Siemens AG, Healthcare Sector, Erlangen, Germany
- Karl Barth: Siemens Healthcare GmbH, Roethelheimpark Alle 2, 91052, Erlangen, Germany
- Elli Angelopoulou: Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
- Joachim Hornegger: Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
26
Xia W, Ginsberg Y, West SJ, Nikitichev DI, Ourselin S, David AL, Desjardins AE. Coded excitation ultrasonic needle tracking: an in vivo study. Med Phys 2016; 43:4065. [PMID: 27370125] [PMCID: PMC5207306] [DOI: 10.1118/1.4953205]
Abstract
PURPOSE Accurate and efficient guidance of medical devices to procedural targets lies at the heart of interventional procedures. Ultrasound imaging is commonly used for device guidance, but determining the location of the device tip can be challenging. Various methods have been proposed to track medical devices during ultrasound-guided procedures, but widespread clinical adoption has remained elusive. With ultrasonic tracking, the location of a medical device is determined by ultrasonic communication between the ultrasound imaging probe and a transducer integrated into the medical device. The signal-to-noise ratio (SNR) of the transducer data is an important determinant of the depth in tissue at which tracking can be performed. In this paper, the authors present a new generation of ultrasonic tracking in which coded excitation is used to improve the SNR without spatial averaging. METHODS A fiber optic hydrophone was integrated into the cannula of a 20 gauge insertion needle. This transducer received transmissions from the ultrasound imaging probe, and the data were processed to obtain a tracking image of the needle tip. Excitation using Barker or Golay codes was performed to improve the SNR, and conventional bipolar excitation was performed for comparison. The performance of the coded excitation ultrasonic tracking system was evaluated in an in vivo ovine model with insertions to the brachial plexus and the uterine cavity. RESULTS Coded excitation significantly increased the SNRs of the tracking images, as compared with bipolar excitation. During an insertion to the brachial plexus, the SNR was increased by factors of 3.5 for Barker coding and 7.1 for Golay coding. During insertions into the uterine cavity, these factors ranged from 2.9 to 4.2 for Barker coding and 5.4 to 8.5 for Golay coding. The maximum SNR was 670, which was obtained with Golay coding during needle withdrawal from the brachial plexus. Range sidelobe artifacts were observed in tracking images obtained with Barker coded excitation, and they were visually absent with Golay coded excitation. The spatial tracking accuracy was unaffected by coded excitation. CONCLUSIONS Coded excitation is a viable method for improving the SNR in ultrasonic tracking without compromising spatial accuracy. This method provided SNR increases that are consistent with theoretical expectations, even in the presence of physiological motion. With the ultrasonic tracking system in this study, the SNR increases will have direct clinical implications in a broad range of interventional procedures by improving visibility of medical devices at large depths.
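The sidelobe cancellation attributed to Golay coding follows from the complementary-pair property: the autocorrelations of the two codes sum to an ideal delta, so range sidelobes cancel when echoes from the paired transmissions are combined. A short sketch of the standard doubling construction (a generic illustration, not the study's transmit sequences):

```python
import numpy as np

def golay_pair(n_bits):
    """Generate a complementary Golay pair of length 2**n_bits by the
    standard doubling construction: (a, b) -> (a|b, a|-b), where | is
    concatenation. For any such pair, autocorr(a) + autocorr(b) equals
    2N at zero lag and 0 everywhere else.
    """
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b
```

Barker codes, by contrast, are single sequences whose autocorrelation sidelobes are bounded but nonzero, which is consistent with the residual range sidelobe artifacts reported above.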
Affiliation(s)
- Wenfeng Xia: Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, United Kingdom
- Yuval Ginsberg: Institute for Women’s Health, University College London, 86-96 Chenies Mews, London WC1E 6HX, United Kingdom
- Simeon J. West: Department of Anaesthesia, University College Hospital, Main Theaters, Maple Bridge Link Corridor, Podium 3, 235 Euston Road, London NW1 2BU, United Kingdom
- Daniil I. Nikitichev: Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, United Kingdom
- Sebastien Ourselin: Center for Medical Imaging Computing, University College London, Gower Street, London WC1E 6BT, United Kingdom
- Anna L. David: Institute for Women’s Health, University College London, 86-96 Chenies Mews, London WC1E 6HX, United Kingdom
- Adrien E. Desjardins: Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, United Kingdom
27
Liu J, Lin W, Alsaadi F, Hayat T. Nonlinear observer design for PEM fuel cell power systems via second order sliding mode technique. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.06.004]