1. Yang H, Shan C, Kolen AF, de With PHN. Medical instrument detection in ultrasound: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10287-1]
Abstract
Medical instrument detection is essential for computer-assisted interventions, since it helps clinicians find instruments efficiently and interpret them better, thereby improving clinical outcomes. This article reviews image-based medical instrument detection methods for ultrasound-guided (US-guided) operations. Literature was selected based on an exhaustive search of several sources, including Google Scholar, PubMed, and Scopus. We first discuss the key clinical applications of medical instrument detection in US, including regional anesthesia delivery, biopsy taking, prostate brachytherapy, and catheterization. Then, we present a comprehensive review of instrument detection methodologies, covering both non-machine-learning and machine-learning methods; the conventional non-machine-learning methods were studied extensively before the era of machine learning. The principal issues and potential research directions for future studies are summarized for the computer-assisted intervention community. In conclusion, although promising results have been obtained by current (non-)machine-learning methods for different clinical applications, thorough clinical validation is still required.
2. Shi M, Zhao T, West SJ, Desjardins AE, Vercauteren T, Xia W. Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets. Photoacoustics 2022; 26:100351. [PMID: 35495095] [PMCID: PMC9048160] [DOI: 10.1016/j.pacs.2022.100351]
Abstract
Photoacoustic imaging has shown great potential for guiding minimally invasive procedures by accurately identifying critical tissue targets and invasive medical devices (such as metallic needles). The use of light-emitting diodes (LEDs) as the excitation light sources accelerates its clinical translation owing to their affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised primarily by the low optical fluence. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based photoacoustic and ultrasound imaging system. To address the complexity of capturing ground truth for real data and the poor realism of purely simulated data, the framework includes the generation of semi-synthetic training datasets that combine simulated data, representing features of the needles, with in vivo measurements for the tissue background. The trained neural network was evaluated with needle insertions into blood-vessel-mimicking phantoms and pork joint tissue ex vivo, and with measurements on human volunteers. This deep-learning-based framework substantially improved needle visibility in photoacoustic imaging in vivo compared with conventional reconstruction by suppressing background noise and image artefacts, achieving 5.8- and 4.5-fold improvements in signal-to-noise ratio and modified Hausdorff distance, respectively. The proposed framework could therefore help reduce complications during percutaneous needle insertions by accurately identifying clinical needles in photoacoustic imaging.
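The semi-synthetic data generation described above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: it assumes 2D images and a straight-needle model, and the function name and parameters are hypothetical.

```python
import numpy as np

def make_semisynthetic_pair(background, needle_start, needle_end, amplitude=1.0, rng=None):
    """Overlay a simulated straight-needle signal onto a real background image.

    Returns (image, mask): the composite training input and its needle mask.
    `background` is a 2D array standing in for an in vivo photoacoustic background.
    """
    rng = np.random.default_rng(rng)
    img = background.astype(float).copy()
    mask = np.zeros_like(img, dtype=bool)
    # Rasterise the needle as a straight segment between the two endpoints.
    n = int(np.hypot(needle_end[0] - needle_start[0], needle_end[1] - needle_start[1])) + 1
    rows = np.linspace(needle_start[0], needle_end[0], n).round().astype(int)
    cols = np.linspace(needle_start[1], needle_end[1], n).round().astype(int)
    keep = (rows >= 0) & (rows < img.shape[0]) & (cols >= 0) & (cols < img.shape[1])
    rows, cols = rows[keep], cols[keep]
    mask[rows, cols] = True
    # Add the needle signal with small per-pixel variation to mimic speckle.
    img[rows, cols] += amplitude * (1.0 + 0.1 * rng.standard_normal(rows.size))
    return img, mask
```

The (image, mask) pairs would then serve as network input and segmentation target.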
Affiliation(s)
- Mengjie Shi
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Tianrui Zhao
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Simeon J. West
- Department of Anaesthesia, University College Hospital, London NW1 2BU, United Kingdom
- Adrien E. Desjardins
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
3. Weakly-supervised learning for catheter segmentation in 3D frustum ultrasound. Comput Med Imaging Graph 2022; 96:102037. [DOI: 10.1016/j.compmedimag.2022.102037]
4. Yang H, Shan C, Kolen AF, de With PHN. Efficient Medical Instrument Detection in 3D Volumetric Ultrasound Data. IEEE Trans Biomed Eng 2021; 68:1034-1043. [PMID: 32746017] [DOI: 10.1109/tbme.2020.2999729]
Abstract
Ultrasound-guided procedures are applied in many clinical therapies, such as cardiac catheterization and regional anesthesia. Medical instrument detection in 3D ultrasound (US) is highly desired, but existing approaches are far from real-time performance. Our objective is to investigate an efficient instrument detection method in 3D US for practical clinical use. We propose a novel Multi-dimensional Mixed Network for efficient instrument detection in 3D US, which extracts discriminating features at the full-image level with a 3D encoder and then applies a specially designed dimension-reduction block that lowers the spatial complexity of the feature maps by projecting them from 3D space into 2D space. A 2D decoder detects the instrument along the specified axes, and by projecting the predicted 2D outputs, the instrument is detected and visualized in the 3D volume. Furthermore, to enable the network to better learn discriminative information, we propose a multi-level loss function that captures both pixel- and image-level differences. We carried out extensive experiments on two datasets for two tasks: (1) catheter detection for cardiac RF ablation and (2) needle detection for regional anesthesia. Our experiments show that the proposed method achieves a detection error of 2-3 voxels at about 0.12 s per 3D US volume, 3-8 times faster than state-of-the-art methods and therefore real-time. The results show that the proposed method has significant clinical value for real-time 3D US-guided intervention.
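The core idea of the dimension-reduction block (collapsing a 3D feature volume into 2D maps that a cheap 2D decoder can process) can be illustrated with a simple axis-wise projection. This NumPy sketch mimics the idea only in spirit; the paper's learned reduction block is not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def reduce_3d_to_2d(volume):
    """Project a 3D feature volume onto the three orthogonal planes by
    max-intensity projection, one 2D map per axis.  A 2D decoder can then
    localize the instrument per axis, and the per-axis predictions can be
    back-projected to recover its 3D position."""
    return {
        "axial":    volume.max(axis=0),  # collapse depth  -> (H, W)
        "coronal":  volume.max(axis=1),  # collapse height -> (D, W)
        "sagittal": volume.max(axis=2),  # collapse width  -> (D, H)
    }
```

A bright instrument voxel survives all three projections, so intersecting the 2D detections recovers its 3D coordinates.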
5. Synthesize and Segment: Towards Improved Catheter Segmentation via Adversarial Augmentation. Appl Sci (Basel) 2021. [DOI: 10.3390/app11041638]
Abstract
Automatic catheter and guidewire segmentation plays an important role in robot-assisted interventions guided by fluoroscopy. Existing learning-based methods addressing segmentation or tracking are often limited by the scarcity of annotated samples and the difficulty of data collection; for deep-learning-based methods, the demand for large amounts of labeled data further impedes successful application. To address this, we propose a synthesize-and-segment approach with plug-in possibilities for the segmentation stage. We show that an adversarially learned image-to-image translation network can synthesize catheters in X-ray fluoroscopy, enabling data augmentation that alleviates the low-data regime. To make the synthesized images realistic, we train the translation network with a perceptual loss coupled with similarity constraints. Existing segmentation networks then learn accurate localization of catheters in a semi-supervised setting using the generated images. Empirical results on collected medical datasets show the value of our approach, with significant improvements over existing translation baselines.
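The perceptual loss mentioned above compares images in the feature space of a fixed network rather than on raw pixels. The following is only a toy NumPy sketch of that concept: `toy_features` is a hypothetical stand-in for a pretrained network's intermediate layers, not the extractor used in the paper.

```python
import numpy as np

def perceptual_loss(img_a, img_b, feature_maps):
    """Perceptual loss: mean squared error in a feature space.

    `feature_maps` is any callable mapping an image to a feature array,
    standing in for a pretrained network's intermediate activations.
    """
    fa, fb = feature_maps(img_a), feature_maps(img_b)
    return float(np.mean((fa - fb) ** 2))

def toy_features(img):
    """Stand-in 'network': vertical/horizontal gradients as two channels."""
    gy, gx = np.gradient(img.astype(float))
    return np.stack([gy, gx])
```

Minimizing this loss pushes the translation network to match edge structure, not just pixel intensities.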
6. Efficient and Robust Instrument Segmentation in 3D Ultrasound Using Patch-of-Interest-FuseNet with Hybrid Loss. Med Image Anal 2020; 67:101842. [PMID: 33075639] [DOI: 10.1016/j.media.2020.101842]
Abstract
Instrument segmentation plays a vital role in 3D ultrasound (US)-guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desired, since it can facilitate the procedure, reduce operational complexity, and thereby improve the outcome. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical use. Lately, fully convolutional networks (FCNs), both 2D and 3D, have been used in various volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in volumetric data, while a 3D FCN requires high computation cost and a large amount of training data; moreover, with limited computational resources, a 3D FCN is commonly applied with a patch-based strategy, which is inefficient for clinical applications. To address this, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector efficiently selects the regions containing the instrument, while FuseNet hierarchically exploits contextual information by fusing 2D and 3D FCN features. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve segmentation performance. On a challenging ex vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to state-of-the-art methods. In addition, starting from the model pre-trained on the ex vivo dataset, our method adapts to an in vivo guidewire dataset from a different cardiac operation, achieving a Dice score of 66.5%. More crucially, with the POI-based strategy, segmentation time is reduced to around 1.3 seconds per volume, showing that the proposed method is promising for clinical use.
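The class-balanced focal loss in the hybrid objective above addresses the extreme foreground/background imbalance of instrument voxels. A minimal NumPy sketch of the standard binary focal loss follows; the hyperparameter values are conventional defaults, not necessarily the paper's.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Class-balanced focal loss for binary voxel labels.

    p: predicted foreground probabilities in (0, 1); y: 0/1 labels.
    alpha re-balances the classes; gamma down-weights easy examples so
    rare instrument voxels dominate the gradient.
    """
    p = np.clip(p, eps, 1 - eps)
    pos = -alpha * (1 - p) ** gamma * np.log(p)          # loss on foreground
    neg = -(1 - alpha) * p ** gamma * np.log(1 - p)      # loss on background
    return float(np.mean(np.where(y == 1, pos, neg)))
```

A confidently wrong prediction is penalized far more heavily than a confidently correct one, which is the intended behaviour on imbalanced data.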
7. Rodgers JR, Hrinivich WT, Surry K, Velker V, D'Souza D, Fenster A. A semiautomatic segmentation method for interstitial needles in intraoperative 3D transvaginal ultrasound images for high-dose-rate gynecologic brachytherapy of vaginal tumors. Brachytherapy 2020; 19:659-668. [PMID: 32631651] [DOI: 10.1016/j.brachy.2020.05.006]
Abstract
PURPOSE The purpose of this study was to evaluate a semiautomatic algorithm for simultaneously segmenting multiple high-dose-rate (HDR) gynecologic interstitial brachytherapy (ISBT) needles in three-dimensional (3D) transvaginal ultrasound (TVUS) images, with the aim of providing a clinically useful tool for intraoperative implant assessment. METHODS AND MATERIALS A needle segmentation algorithm previously developed for HDR prostate brachytherapy was adapted and extended to 3D TVUS images from gynecologic ISBT patients with vaginal tumors. Two patients were used for refining and validating the modified algorithm, and five patients (8-12 needles/patient) were reserved as an unseen test data set. The images were filtered to enhance needle edges, intensity peaks were used to generate feature points, and the randomized 3D Hough transform identified candidate needle trajectories. Algorithmic segmentations were compared against manual segmentations, and the calculated dwell positions were evaluated. RESULTS All 50 test needles were successfully segmented: 96% of algorithmically segmented needles had angular differences <3° compared with manually segmented needles, and the maximum Euclidean distance was <2.1 mm. The median distance between corresponding dwell positions was 0.77 mm, with 86% of needles having maximum differences <3 mm. The mean segmentation time was <30 s/patient. CONCLUSIONS We successfully segmented multiple needles simultaneously in intraoperative 3D TVUS images from gynecologic HDR-ISBT patients with vaginal tumors and demonstrated the robustness of the algorithmic approach to image artifacts. The method provided accurate segmentations within a clinically efficient timeframe and has the potential to be translated into intraoperative clinical use for implant assessment.
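The randomized 3D Hough transform used above votes for line trajectories from sampled feature points. A minimal NumPy sketch of the idea, with direction quantisation and an accumulator, is shown below; the binning scheme and parameters are illustrative choices, not the paper's.

```python
import numpy as np

def dominant_needle_direction(points, n_samples=500, bin_deg=5.0, rng=0):
    """Randomized-Hough-style estimate of the dominant line direction in a
    3D point cloud: repeatedly sample point pairs, quantise the unit
    direction between them, and vote in an accumulator."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, float)
    votes = {}
    for _ in range(n_samples):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        if d[np.argmax(np.abs(d))] < 0:  # fold antipodal directions together
            d = -d
        key = tuple(np.round(d / np.radians(bin_deg)).astype(int))
        votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)
    d = np.array(best, float) * np.radians(bin_deg)
    return d / np.linalg.norm(d)
```

For multiple needles, the winning bin's inliers would be removed and the voting repeated, which is roughly how several trajectories can be found in one image.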
Affiliation(s)
- Jessica Robin Rodgers
- School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
- William Thomas Hrinivich
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD
- Kathleen Surry
- Department of Medical Physics, London Regional Cancer Program, London, Ontario, Canada
- Vikram Velker
- Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- David D'Souza
- Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- Aaron Fenster
- School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada
- Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
8. Gillies DJ, Rodgers JR, Gyacskov I, Roy P, Kakani N, Cool DW, Fenster A. Deep learning segmentation of general interventional tools in two-dimensional ultrasound images. Med Phys 2020; 47:4956-4970. [DOI: 10.1002/mp.14427]
Affiliation(s)
- Derek J. Gillies
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Jessica R. Rodgers
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Igor Gyacskov
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Priyanka Roy
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Nirmal Kakani
- Department of Radiology, Manchester Royal Infirmary, Manchester M13 9WL, UK
- Derek W. Cool
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
9. Zhang Y, He X, Tian Z, Jeong JJ, Lei Y, Wang T, Zeng Q, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-Needle Detection in 3D Ultrasound Images Using Unsupervised Order-Graph Regularized Sparse Dictionary Learning. IEEE Trans Med Imaging 2020; 39:2302-2315. [PMID: 31985414] [PMCID: PMC7370243] [DOI: 10.1109/tmi.2020.2968770]
Abstract
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection, using only a small number of images containing a needle and ignoring the massive database of US images without needles. In this paper, we propose a workflow for multi-needle detection that treats the images without needles as auxiliary data. Concretely, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning. Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to yield centers. From these centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect needles with the random sample consensus (RANSAC) algorithm per ROI and locate the tips by finding the sharp intensity drops along the detected axis of every needle. Extensive experiments were conducted on a phantom dataset and a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of the proposed workflow; specifically, our method correctly detects 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique provides accurate multi-needle detection for US-guided HDR prostate brachytherapy, facilitating the clinical workflow.
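The per-ROI RANSAC step in this workflow fits a straight line to candidate needle voxels while rejecting outliers. A self-contained NumPy sketch of RANSAC line fitting follows; the tolerance and iteration count are illustrative, not the paper's settings.

```python
import numpy as np

def ransac_line(points, n_iter=300, tol=1.0, rng=0):
    """Fit a 3D line to candidate needle voxels with RANSAC: sample two
    points, count inliers within `tol` of the implied line, keep the best
    model, then refine it by PCA on the inliers.

    Returns (centroid, unit_direction, inlier_mask)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, float)
    best_mask = None
    for _ in range(n_iter):
        a, b = pts[rng.choice(len(pts), size=2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-9:
            continue
        d /= np.linalg.norm(d)
        # Point-to-line distance: || (p - a) - ((p - a)·d) d ||
        r = pts - a
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        mask = dist < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    inliers = pts[best_mask]
    c = inliers.mean(axis=0)
    # Principal axis of the inliers is the refined direction.
    _, _, vt = np.linalg.svd(inliers - c)
    return c, vt[0], best_mask
```

Running this independently inside each cylindrical ROI yields one line per needle, after which the tip can be sought along the fitted axis.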
10. Yang H, Shan C, Kolen AF, de With PHN. Catheter localization in 3D ultrasound using voxel-of-interest-based ConvNets for cardiac intervention. Int J Comput Assist Radiol Surg 2019; 14:1069-1077. [PMID: 30968351] [PMCID: PMC6544608] [DOI: 10.1007/s11548-019-01960-y]
Abstract
PURPOSE Efficient image-based catheter localization in 3D US during cardiac interventions is highly desired, since it facilitates the operating procedure, reduces patient risk, and improves the outcome. Current image-based catheter localization methods are not efficient or accurate enough for real clinical use. METHODS We propose a catheter localization method for 3D cardiac ultrasound (US). Catheter candidate voxels are first pre-selected by the Frangi vesselness filter with adaptive thresholding, after which a triplanar-based ConvNet classifies the remaining voxels as catheter or not. We propose a Share-ConvNet for 3D US, which reduces computational complexity by sharing a single ConvNet across all orthogonal slices. To boost performance, we also employ two-stage training with a weighted cross-entropy loss. Using the classified voxels, the catheter is localized by a model-fitting algorithm. RESULTS To validate our method, we collected challenging ex vivo datasets. Extensive experiments show that the proposed method outperforms state-of-the-art methods and can localize the catheter with an average error of 2.1 mm in around 10 s per volume. CONCLUSION Our method automatically localizes the cardiac catheter in challenging 3D cardiac US images. Its efficiency and localization accuracy are promising for catheter detection and localization during clinical interventions.
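The adaptive-thresholding pre-selection step above exists so that the expensive ConvNet only classifies a small candidate set. A minimal NumPy sketch of one simple way to do this (keep the top fraction of a vesselness response) is shown below; the quantile-based rule and its parameter are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def preselect_voxels(response, target_fraction=0.01):
    """Adaptive thresholding of a (e.g. Frangi vesselness) response map:
    keep roughly the top `target_fraction` of voxels so that a downstream
    voxel classifier only runs on a small candidate set."""
    thr = np.quantile(response, 1.0 - target_fraction)
    candidates = np.argwhere(response > thr)  # (N, 3) voxel coordinates
    return candidates, thr
```

On a typical volume this reduces the classifier's workload by roughly two orders of magnitude, which is where the ~10 s/volume efficiency comes from.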
Affiliation(s)
- Hongxu Yang
- Eindhoven University of Technology, Eindhoven, The Netherlands.
11. Arif M, Moelker A, van Walsum T. Automatic needle detection and real-time bi-planar needle visualization during 3D ultrasound scanning of the liver. Med Image Anal 2019; 53:104-110. [DOI: 10.1016/j.media.2019.02.002]
12. Yang H, Shan C, Pourtaherian A, Kolen AF, de With PHN. Catheter segmentation in three-dimensional ultrasound images by feature fusion and model fitting. J Med Imaging (Bellingham) 2019; 6:015001. [PMID: 30662926] [DOI: 10.1117/1.jmi.6.1.015001]
Abstract
Ultrasound (US) is increasingly used during interventions such as cardiac catheterization. Accurately identifying the catheter in US images requires extra training for physicians and sonographers; consequently, automated segmentation of the catheter and optimized viewing for the physician can improve the efficiency, safety, and outcome of interventions. For cardiac catheterization, three-dimensional (3-D) US is potentially attractive because it is a radiation-free modality with richer spatial information. However, due to the limited spatial resolution of 3-D cardiac US and the complex anatomical structures inside the heart, image-based catheter segmentation is challenging. We propose a cardiac catheter segmentation method for 3-D US data based on image processing techniques. Our method first applies voxel-based classification using newly designed multiscale and multidefinition features, which provide robust catheter voxel segmentation in 3-D US. Second, a modified catheter model fitting is applied to segment the curved catheter in 3-D US images. The proposed method is validated with extensive experiments on different in vitro, ex vivo, and in vivo datasets, and segments the catheter with an average tip-point error smaller than the catheter diameter (1.9 mm) in the volumetric images. Based on automated catheter segmentation combined with optimal viewing, physicians do not have to interpret US images and can focus on the procedure itself, improving the quality of cardiac intervention.
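One common way to fit a curved catheter model to classified voxels, as the second stage above does, is to parametrise the voxels along the principal axis and fit a low-order polynomial per coordinate. The NumPy sketch below illustrates that generic idea; it is not the paper's modified fitting algorithm, and the function name and degree are hypothetical.

```python
import numpy as np

def fit_catheter_model(voxels, degree=2):
    """Fit a (possibly curved) catheter model: parametrise the classified
    voxels by their projection onto the principal axis, then fit a
    low-order polynomial per coordinate.  Returns a callable curve(t) and
    the parameter range covered by the voxels."""
    pts = np.asarray(voxels, float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    t = (pts - c) @ vt[0]  # arc-like parameter for each voxel
    coeffs = [np.polyfit(t, pts[:, k], degree) for k in range(3)]

    def curve(s):
        s = np.atleast_1d(s)
        return np.stack([np.polyval(ck, s) for ck in coeffs], axis=1)

    return curve, (t.min(), t.max())
```

Evaluating the curve at the ends of the parameter range gives the two catheter endpoints, from which the tip can be reported.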
Affiliation(s)
- Hongxu Yang
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Caifeng Shan
- Philips Research, In-Body Systems, Eindhoven, The Netherlands
- Arash Pourtaherian
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
- Peter H N de With
- Eindhoven University of Technology, VCA Research Group, Eindhoven, The Netherlands
13. Pourtaherian A, Ghazvinian Zanjani F, Zinger S, Mihajlovic N, Ng GC, Korsten HHM, de With PHN. Robust and semantic needle detection in 3D ultrasound using orthogonal-plane convolutional neural networks. Int J Comput Assist Radiol Surg 2018; 13:1321-1333. [PMID: 29855770] [PMCID: PMC6132402] [DOI: 10.1007/s11548-018-1798-3]
Abstract
PURPOSE During needle interventions, successful automated detection of the needle immediately after insertion is necessary to allow the physician to identify and correct any misalignment between the needle and the target at an early stage, which reduces needle passes and improves health outcomes. METHODS We present a novel approach to localize partially inserted needles in 3D ultrasound volumes with high precision using convolutional neural networks. We propose two methods based on patch classification and semantic segmentation of the needle from orthogonal 2D cross-sections extracted from the volume. For patch classification, each voxel is classified from locally extracted raw data of three orthogonal planes centered on it, and we propose a bootstrap resampling approach to enhance training on our highly imbalanced data. For semantic segmentation, parts of a needle are detected in cross-sections perpendicular to the lateral and elevational axes, and we propose a novel thick-slice processing approach that exploits the structural information in the data for efficient modeling of the context. RESULTS Our methods successfully detect 17G and 22G needles with a single trained network, showing a robust, generalized approach. Extensive ex vivo evaluations on chicken breast and porcine leg datasets show 80% and 84% F1-scores, respectively. Furthermore, very short needles are detected with tip localization errors of less than 0.7 mm for lengths of only 5 and 10 mm at 0.2 and 0.36 mm voxel sizes, respectively. CONCLUSION Our method accurately detects even very short needles, ensuring that the needle and its tip are maximally visible in the visualized plane during the entire intervention and eliminating the need for advanced bi-manual coordination of the needle and transducer.
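The patch-classification variant above feeds each classifier three orthogonal 2D patches centered on the voxel being classified. Extracting such tri-planar input can be sketched in a few lines of NumPy; the padding strategy and patch size here are illustrative assumptions.

```python
import numpy as np

def triplanar_patches(volume, center, half=8):
    """Extract the three orthogonal 2D patches (axial/coronal/sagittal)
    centred on a voxel -- the input a tri-planar voxel classifier sees.
    The volume is zero-padded so patches near the border stay full-size."""
    pad = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)  # shift indices into padded space
    return (
        pad[z, y - half:y + half + 1, x - half:x + half + 1],  # axial (fix z)
        pad[z - half:z + half + 1, y, x - half:x + half + 1],  # coronal (fix y)
        pad[z - half:z + half + 1, y - half:y + half + 1, x],  # sagittal (fix x)
    )
```

Three small 2D patches capture most of the local 3D context at a fraction of the cost of a full 3D patch, which is the motivation for orthogonal-plane networks.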
Affiliation(s)
- Arash Pourtaherian
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands
- Svitlana Zinger
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands
- Gary C Ng
- Philips Healthcare, Bothell, WA 98021, USA
- Peter H N de With
- Eindhoven University of Technology, 5612 AJ, Eindhoven, The Netherlands
14.

15. Pourtaherian A, Scholten HJ, Kusters L, Zinger S, Mihajlovic N, Kolen AF, Zuo F, Ng GC, Korsten HHM, de With PHN. Medical Instrument Detection in 3-Dimensional Ultrasound Data Volumes. IEEE Trans Med Imaging 2017; 36:1664-1675. [PMID: 28410101] [DOI: 10.1109/tmi.2017.2692302]
Abstract
Ultrasound-guided medical interventions are broadly applied in diagnostics and therapy, e.g., regional anesthesia or ablation. A guided intervention using 2-D ultrasound is challenging due to poor instrument visibility, a limited field of view, and the multi-fold coordination of the medical instrument and the ultrasound plane. Recent 3-D ultrasound transducers can improve the quality of image-guided intervention if automated instrument detection is used. In this paper, we present a novel method for detecting medical instruments in 3-D ultrasound data that is based solely on image processing techniques and is validated on various ex vivo and in vivo data sets. In the proposed procedure, the physician places the 3-D transducer at the desired position, and the image processing automatically detects the best instrument view, so that the physician can focus entirely on the intervention. Our method is based on the classification of instrument voxels using volumetric structure directions and robust approximation of the primary tool axis. A novel normalization method is proposed for the shape and intensity consistency of instruments to improve detection. Moreover, a novel 3-D Gabor wavelet transformation is introduced and optimally designed to reveal instrument voxels in the volume, while remaining generic to several medical instruments and transducer types. Experiments on diverse data sets, including in vivo data from patients, show that for a given transducer and instrument type, high detection accuracies are achieved, with position errors smaller than the instrument diameter, in the 0.5-1.5 mm range on average.
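A 3-D Gabor filter of the kind referenced above is a Gaussian envelope modulated by a plane wave; a bank of such filters at several orientations responds strongly to elongated, tube-like structures. The NumPy sketch below constructs the real part of one such kernel; the parametrisation is a textbook form, not the paper's optimally designed transform.

```python
import numpy as np

def gabor_kernel_3d(size=9, sigma=2.0, freq=0.25, direction=(0, 0, 1)):
    """Real part of a 3D Gabor kernel: a Gaussian envelope modulated by a
    plane wave along `direction`.  Convolving a volume with a bank of such
    kernels at several orientations highlights elongated structures such
    as needles and catheters."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    r = np.arange(size) - size // 2
    z, y, x = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(z**2 + y**2 + x**2) / (2 * sigma**2))
    phase = 2 * np.pi * freq * (z * d[0] + y * d[1] + x * d[2])
    kernel = envelope * np.cos(phase)
    return kernel - kernel.mean()  # zero-mean, so flat regions respond ~0
```

The per-voxel maximum response over the orientation bank then serves as an instrument-likelihood feature for classification.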
16. Hrinivich WT, Hoover DA, Surry K, Edirisinghe C, Montreuil J, D'Souza D, Fenster A, Wong E. Simultaneous automatic segmentation of multiple needles using 3D ultrasound for high-dose-rate prostate brachytherapy. Med Phys 2017; 44:1234-1245. [PMID: 28160517] [DOI: 10.1002/mp.12148]
Abstract
PURPOSE Sagittally reconstructed 3D (SR3D) ultrasound imaging shows promise for improved needle localization for high-dose-rate prostate brachytherapy (HDR-BT); however, needles must be manually segmented intraoperatively while the patient is anesthetized to create a treatment plan. The purpose of this article was to describe and validate an automatic needle segmentation algorithm designed for HDR-BT, specifically capable of simultaneously segmenting all needles in an HDR-BT implant using a single SR3D image with ~5 mm interneedle spacing. MATERIALS AND METHODS The segmentation algorithm involves regularized feature point classification and line trajectory identification based on the randomized 3D Hough transform modified to handle multiple straight needles in a single image simultaneously. Needle tips are identified based on peaks in the derivative of the signal intensity profile along the needle trajectory. For algorithm validation, 12 prostate cancer patients underwent HDR-BT during which SR3D images were acquired with all needles in place. Needles present in each of the 12 images were segmented manually, providing a gold standard for comparison, and using the algorithm. Tip errors were assessed in terms of the 3D Euclidean distance between needle tips, and trajectory error was assessed in terms of 2D distance in the axial plane and angular deviation between trajectories. RESULTS In total, 190 needles were investigated. Mean execution time of the algorithm was 11.0 s per patient, or 0.7 s per needle. The algorithm identified 82% and 85% of needle tips with 3D errors ≤3 mm and ≤5 mm, respectively, 91% of needle trajectories with 2D errors in the axial plane ≤3 mm, and 83% of needle trajectories with angular errors ≤3°. The largest tip error component was in the needle insertion direction. 
CONCLUSIONS Previous work has indicated HDR-BT needles may be manually segmented using SR3D images with insertion depth errors ≤3 mm and ≤5 mm for 83% and 92% of needles, respectively. The algorithm shows promise for reducing the time required for the segmentation of straight HDR-BT needles, and future work involves improving needle tip localization performance through improved image quality and modeling curvilinear trajectories.
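The tip-identification step above finds the needle tip from peaks in the derivative of the intensity profile along the trajectory. A minimal NumPy sketch of that idea follows; the box-filter smoothing is an illustrative assumption, not the paper's exact processing.

```python
import numpy as np

def locate_tip(profile, smooth=3):
    """Find the needle tip along a sampled intensity profile as the
    position of the sharpest intensity drop (most negative derivative),
    after light box-filter smoothing to suppress speckle noise."""
    p = np.convolve(profile, np.ones(smooth) / smooth, mode="same")
    dp = np.diff(p)
    return int(np.argmin(dp))  # index just before the steepest drop
```

Because the largest error component is along the insertion direction, this 1D search along the fitted trajectory is exactly where sub-voxel refinement would pay off.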
Affiliation(s)
- William Thomas Hrinivich
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, N6A 5C1, Canada; Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario, N6A 5K8, Canada
- Douglas A Hoover
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, N6A 5C1, Canada; Department of Oncology, University of Western Ontario, London, Ontario, N6A 4L6, Canada; London Regional Cancer Program, London, Ontario, N6A 5W9, Canada
- Kathleen Surry
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, N6A 5C1, Canada; Department of Oncology, University of Western Ontario, London, Ontario, N6A 4L6, Canada; London Regional Cancer Program, London, Ontario, N6A 5W9, Canada
- Chandima Edirisinghe
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario, N6A 5K8, Canada
- Jacques Montreuil
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario, N6A 5K8, Canada
- David D'Souza
- Department of Oncology, University of Western Ontario, London, Ontario, N6A 4L6, Canada; London Regional Cancer Program, London, Ontario, N6A 5W9, Canada
- Aaron Fenster
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, N6A 5C1, Canada; Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario, N6A 5K8, Canada; Department of Oncology, University of Western Ontario, London, Ontario, N6A 4L6, Canada; Department of Physics and Astronomy, University of Western Ontario, London, Ontario, N6A 3K7, Canada
- Eugene Wong
- Department of Medical Biophysics, University of Western Ontario, London, Ontario, N6A 5C1, Canada; Department of Oncology, University of Western Ontario, London, Ontario, N6A 4L6, Canada; London Regional Cancer Program, London, Ontario, N6A 5W9, Canada; Department of Physics and Astronomy, University of Western Ontario, London, Ontario, N6A 3K7, Canada
17
Zhao Y, Shen Y, Bernard A, Cachard C, Liebgott H. Evaluation and comparison of current biopsy needle localization and tracking methods using 3D ultrasound. ULTRASONICS 2017; 73:206-220. [PMID: 27668998 DOI: 10.1016/j.ultras.2016.09.006] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Revised: 08/21/2016] [Accepted: 09/07/2016] [Indexed: 06/06/2023]
Abstract
This article compares four different biopsy needle localization algorithms in both 3D and 4D situations to evaluate their accuracy and execution time. The localization algorithms were: principal component analysis (PCA), randomized Hough transform (RHT), parallel integral projection (PIP), and ROI-RK (ROI-based RANSAC and Kalman filter). To enhance the contrast between the biopsy needle and background tissue, a line-filtering pre-processing step was implemented. To make the PCA, RHT, and PIP algorithms comparable with the ROI-RK method, a region-of-interest (ROI) strategy was added. Simulated and ex-vivo data were used to evaluate the performance of the different biopsy needle localization algorithms. The resolutions of the sectorial and cylindrical volumes were 0.3 mm × 0.4 mm × 0.6 mm and 0.1 mm × 0.1 mm × 0.2 mm (axial × lateral × azimuthal), respectively. The simulation and experimental results show that the ROI-RK method successfully located and tracked the biopsy needle in both 3D and 4D situations. The tip localization error was within 1.5 mm and the axis accuracy was within 1.6 mm. To the best of our knowledge, considering both localization accuracy and execution time, ROI-RK was the most stable and time-efficient method. Normally, accuracy comes at the expense of time; however, the ROI-RK method was able to locate the biopsy needle with high accuracy in real time, which makes it a promising method for clinical applications.
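The RANSAC stage at the core of the ROI-RK approach can be sketched generically as follows. This is a minimal illustration under stated assumptions — it shows only the robust 3D line fit on candidate needle voxels; the paper's method additionally restricts the search to a region of interest and smooths the estimate over time with a Kalman filter, and the function name and parameter values here are hypothetical:

```python
import numpy as np

def ransac_line_3d(points, n_iters=200, inlier_tol=1.0, rng=None):
    """Fit a 3D line to candidate needle voxels with RANSAC.

    points : (N, 3) array of candidate voxel coordinates, e.g. the
             brightest voxels surviving a line-filtering step.
    Returns (point_on_line, unit_direction) of the model with the
    most inliers.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    best_inliers = -1
    best_model = None
    for _ in range(n_iters):
        # Hypothesize a line through two randomly chosen points.
        a, b = points[rng.choice(len(points), size=2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue  # degenerate sample (coincident points)
        d /= norm
        # Point-to-line distance: ||(p - a) - ((p - a)·d) d||
        rel = points - a
        proj = rel @ d
        dist = np.linalg.norm(rel - proj[:, None] * d, axis=1)
        inliers = int((dist < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, d)
    return best_model
```

Given 50 collinear points plus a couple of gross outliers, the recovered direction aligns with the true line axis despite the contamination.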
Affiliation(s)
- Yue Zhao
- Control Theory and Engineering, School of Astronautics, Harbin Institute of Technology, China
- Yi Shen
- Control Theory and Engineering, School of Astronautics, Harbin Institute of Technology, China
- Adeline Bernard
- CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
- Christian Cachard
- CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
- Hervé Liebgott
- CREATIS, Université de Lyon, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Claude Bernard Lyon 1, France
18
Menguy PY, Péry E, Ouchchane L, Guttmann A, Trésorier R, Combaret N, Motreff P, Sarry L. Preliminary results for the supervised detection of lumen and stent from OCT pullbacks. Ing Rech Biomed 2016. [DOI: 10.1016/j.irbm.2015.12.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]