1.
Grube S, Latus S, Behrendt F, Riabova O, Neidhardt M, Schlaefer A. Needle tracking in low-resolution ultrasound volumes using deep learning. Int J Comput Assist Radiol Surg 2024;19:1975-1981. PMID: 39002100. PMCID: PMC11442564. DOI: 10.1007/s11548-024-03234-8.
Abstract
PURPOSE: Clinical needle insertion into tissue, commonly assisted by 2D ultrasound (US) imaging for real-time navigation, faces the challenge of precisely aligning needle and probe to reduce out-of-plane movement. Recent studies investigate 3D ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. However, high resolution also demands considerable time for image acquisition and processing, which limits real-time capability. We therefore aim to maximize the US volume rate at the cost of low image resolution, and propose a deep learning approach to directly extract the 3D needle tip position from sparsely sampled US volumes.
METHODS: We design an experimental setup with a robot inserting a needle into water and chicken liver tissue. Instead of manual annotation, we derive the needle tip position from the known robot pose. During insertion, we acquire a large dataset of low-resolution volumes using a 16 × 16 element matrix transducer at a volume rate of 4 Hz. We compare the performance of our deep learning approach with conventional needle segmentation.
RESULTS: Our experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy, with mean position errors of 0.54 mm in water and 1.54 mm in liver.
CONCLUSION: Our study underlines the strength of deep learning for predicting 3D needle positions from low-resolution ultrasound volumes. This is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling 3D motion analysis.
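The abstract does not specify the network, so the following is only a minimal sketch (PyTorch) of the direct-regression idea it describes: a small 3D CNN maps a sparsely sampled volume straight to tip coordinates, with the robot-derived tip position as the training target. All layer sizes and the input volume shape are assumptions.

```python
# Minimal sketch of direct 3D tip-coordinate regression from a sparse US volume.
# The architecture and the volume size are assumptions, not the paper's design.
import torch
import torch.nn as nn

class TipRegressor3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (B, 64, 1, 1, 1)
        )
        self.head = nn.Linear(64, 3)          # (x, y, z) in volume coordinates

    def forward(self, volume):                # volume: (B, 1, D, H, W)
        f = self.features(volume).flatten(1)
        return self.head(f)

# Training target: the tip position derived from the known robot pose,
# so no manual labels are needed.
model = TipRegressor3D()
vol = torch.randn(2, 1, 16, 16, 64)           # hypothetical sparse volume shape
print(model(vol).shape)                       # torch.Size([2, 3])
```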
Affiliation(s)
- Sarah Grube
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany.
- Sarah Latus
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Oleksandra Riabova
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Maximilian Neidhardt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
2.
Hui X, Rajendran P, Ling T, Dai X, Xing L, Pramanik M. Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth. Photoacoustics 2023;34:100575. PMID: 38174105. PMCID: PMC10761306. DOI: 10.1016/j.pacs.2023.100575.
Abstract
Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often encounters challenges in consistency and precisely visualizing the needle, necessitating the development of reliable methods to track the needle. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can lead to potential bias or difficulties in generalizing to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, has shown remarkable precision in localizing needles within US images. The evaluation of needle segmentation performance extends across previously unseen ex vivo data and in vivo human data (collected from an open-source data repository). Specifically, for human data, the Modified Hausdorff Distance (MHD) value stands at approximately 3.73, and the targeting error value is around 2.03, indicating the strong similarity and small needle orientation deviation between the predicted needle and actual needle location. A key advantage of our method is its applicability beyond US images captured from specific imaging systems, extending to images from other US imaging systems.
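For reference, the Modified Hausdorff Distance quoted above is the symmetric mean nearest-neighbour distance between two point sets (Dubuisson and Jain's variant). A small self-contained computation, with illustrative point sets rather than the paper's data:

```python
# Modified Hausdorff Distance between predicted and reference needle pixels.
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a, b):
    """a, b: (N, 2) and (M, 2) arrays of needle pixel coordinates."""
    d = cdist(a, b)                  # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()      # mean nearest-neighbour distance A -> B
    d_ba = d.min(axis=0).mean()      # mean nearest-neighbour distance B -> A
    return max(d_ab, d_ba)

pred = np.array([[10, 10], [20, 21], [30, 32]], dtype=float)
ref  = np.array([[10, 12], [20, 22], [30, 30]], dtype=float)
print(modified_hausdorff(pred, ref))
```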
Affiliation(s)
- Xie Hui
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- Praveenbalaji Rajendran
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Tong Ling
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore 637459, Singapore
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Xianjin Dai
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Lei Xing
- Stanford University, Department of Radiation Oncology, Stanford, California 94305, United States
- Manojit Pramanik
- Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, United States
3.
Amiri Tehrani Zade A, Jalili Aziz M, Majedi H, Mirbagheri A, Ahmadian A. Spatiotemporal analysis of speckle dynamics to track invisible needle in ultrasound sequences using convolutional neural networks: a phantom study. Int J Comput Assist Radiol Surg 2023;18:1373-1382. PMID: 36745339. DOI: 10.1007/s11548-022-02812-y.
Abstract
PURPOSE: Accurate needle placement at the target point is critical for ultrasound interventions like biopsies and epidural injections. However, aligning the needle to the thin imaging plane of the transducer is challenging, as out-of-plane motion degrades the needle's visibility to the naked eye. We therefore developed a CNN-based framework to track the needle using the spatiotemporal features of speckle dynamics.
METHODS: Three key techniques optimize the network for our application. First, we use Gunnar-Farneback (GF) optical flow, a traditional motion-field estimation technique, to augment the model input with spatiotemporal features extracted from a stack of consecutive frames. Second, we design an efficient network based on the state-of-the-art Yolo framework (nYolo). Lastly, an Assisted Excitation (AE) module is added at the neck of the network to handle the class-imbalance problem.
RESULTS: Fourteen freehand ultrasound sequences were collected by steeply inserting an injection needle into the Ultrasound Compatible Lumbar Epidural Simulator and Femoral Vascular Access Ezono test phantoms. We divided the dataset into two sub-categories. In the second, more challenging category, where the needle is totally invisible, the angle and tip localization errors were 2.43 ± 1.14° and 2.3 ± 1.76 mm using Yolov3+GF+AE, and 2.08 ± 1.18° and 2.12 ± 1.43 mm using nYolo+GF+AE.
CONCLUSION: The proposed method can track the needle more reliably than other state-of-the-art methods and can accurately localize it in 2D B-mode US images in real time, allowing it to be used in current ultrasound-guided intervention procedures.
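The Farneback step above is standard dense optical flow; a plausible sketch of the input-augmentation idea with OpenCV, where the flow field between consecutive B-mode frames is stacked with the current frame as extra input channels (the parameter values are generic assumptions, not the paper's settings):

```python
# Augment the network input with a Gunnar-Farneback motion field.
import cv2
import numpy as np

def flow_augmented_input(prev_frame, curr_frame):
    """prev_frame, curr_frame: uint8 grayscale B-mode images of equal size."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, curr_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Stack intensity + (dx, dy) motion field -> (H, W, 3) network input.
    return np.dstack([curr_frame.astype(np.float32), flow])

prev = np.random.randint(0, 255, (256, 256), np.uint8)
curr = np.random.randint(0, 255, (256, 256), np.uint8)
print(flow_augmented_input(prev, curr).shape)   # (256, 256, 3)
```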
Affiliation(s)
- Amin Amiri Tehrani Zade
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Maryam Jalili Aziz
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Hossein Majedi
- Pain Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
- Department of Anesthesiology, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Mirbagheri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Robotic Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
4.
Arapi V, Hardt-Stremayr A, Weiss S, Steinbrener J. Bridging the simulation-to-real gap for AI-based needle and target detection in robot-assisted ultrasound-guided interventions. Eur Radiol Exp 2023;7:30. PMID: 37332035. DOI: 10.1186/s41747-023-00344-x.
Abstract
BACKGROUND: Artificial intelligence (AI)-powered, robot-assisted, ultrasound (US)-guided interventional radiology has the potential to increase the efficacy and cost-efficiency of interventional procedures while improving postsurgical outcomes and reducing the burden on medical personnel.
METHODS: To overcome the lack of clinical data needed to train state-of-the-art AI models, we propose a novel approach for generating synthetic ultrasound data from real, clinical preoperative three-dimensional (3D) data of different imaging modalities. With the synthetic data, we trained a deep learning-based detection algorithm for localizing the needle tip and target anatomy in US images, and validated our models on real, in vitro US data.
RESULTS: The resulting models generalize well to unseen synthetic data and experimental in vitro data, making the proposed approach a promising method for creating AI-based models for needle and target detection in minimally invasive US-guided procedures. Moreover, we show that after a one-time calibration of the US and robot coordinate frames, our tracking algorithm can accurately fine-position the robot within reach of the target based on 2D US images alone.
CONCLUSIONS: The proposed data generation approach is sufficient to bridge the simulation-to-real gap and has the potential to overcome data paucity challenges in interventional radiology. The proposed AI-based detection algorithm shows very promising results in terms of accuracy and frame rate.
RELEVANCE STATEMENT: This approach can facilitate the development of next-generation AI algorithms for patient anatomy detection and needle tracking in US and their application to robotics.
KEY POINTS:
- AI-based methods show promise for needle and target detection in US-guided interventions.
- Publicly available, annotated datasets for training AI models are limited.
- Synthetic, clinical-like US data can be generated from magnetic resonance or computed tomography data.
- Models trained with synthetic US data generalize well to real in vitro US data.
- Target detection with an AI model can be used for fine positioning of the robot.
Affiliation(s)
- Visar Arapi
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria.
- Alexander Hardt-Stremayr
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
- Stephan Weiss
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
- Jan Steinbrener
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
5.
Zhao Y, Lu Y, Lu X, Jin J, Tao L, Chen X. Biopsy Needle Segmentation using Deep Networks on inhomogeneous Ultrasound Images. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:553-556. PMID: 36086307. DOI: 10.1109/embc48229.2022.9871059.
Abstract
In minimally invasive interventional surgery, ultrasound imaging is usually used to provide real-time feedback for diagnosis or treatment, so accurately obtaining the position of the medical biopsy needle is a problem worth studying. We generate simulated 2D ultrasound images containing a biopsy needle, with image backgrounds taken from real breast ultrasound images. Using these images, we analyze the effectiveness of different deep learning networks at localizing the needle, that is, returning needle positions in inhomogeneous ultrasound images. The results show that attention U-Net performs best and can accurately recover the real position of the biopsy needle, reaching an IoU of 90.19%, a precision of 96.25%, and an angular error of 0.40°.
Clinical Relevance: For 2D ultrasound images containing a medical biopsy needle, the deep network achieves a localization precision of 96.25% with an angular error of 0.40°.
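For reference, the IoU (Jaccard index) quoted above measures the overlap between predicted and ground-truth needle masks; a short worked example with illustrative masks:

```python
# Intersection-over-union between two binary needle masks.
import numpy as np

def iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt = np.zeros((64, 64), np.uint8); gt[30:34, 10:50] = 1     # needle band
pred = np.zeros_like(gt);          pred[30:34, 12:52] = 1   # slightly shifted
print(f"IoU = {iou(pred, gt):.3f}")                         # ~0.905
```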
6.
Shi M, Zhao T, West SJ, Desjardins AE, Vercauteren T, Xia W. Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets. Photoacoustics 2022;26:100351. PMID: 35495095. PMCID: PMC9048160. DOI: 10.1016/j.pacs.2022.100351.
Abstract
Photoacoustic imaging has shown great potential for guiding minimally invasive procedures by accurately identifying critical tissue targets and invasive medical devices (such as metallic needles). The use of light-emitting diodes (LEDs) as the excitation light sources accelerates its clinical translation owing to their high affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised primarily by the low optical fluence. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based photoacoustic and ultrasound imaging system. To address the complexity of capturing ground truth for real data and the poor realism of purely simulated data, the framework includes the generation of semi-synthetic training datasets combining simulated data, representing the needle features, with in vivo measurements for the tissue background. The trained neural network was evaluated with needle insertions into blood-vessel-mimicking phantoms, pork joint tissue ex vivo, and measurements on human volunteers. This deep learning-based framework substantially improved needle visibility in photoacoustic imaging in vivo compared to conventional reconstruction by suppressing background noise and image artefacts, achieving 5.8- and 4.5-fold improvements in signal-to-noise ratio and modified Hausdorff distance, respectively. The proposed framework could thus help reduce complications during percutaneous needle insertions by accurately identifying clinical needles in photoacoustic imaging.
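A minimal sketch of the semi-synthetic construction described above, under the assumption that a simulated needle-only frame can simply be blended into a real background frame; the paper's simulation pipeline is more involved, and the shapes and blending weight here are illustrative:

```python
# Semi-synthetic training pair: simulated needle + in vivo background.
import numpy as np

def make_semi_synthetic(background, needle_signal, weight=0.7):
    """background: in vivo PA frame; needle_signal: simulated needle-only frame."""
    img = background + weight * needle_signal          # composite training image
    label = (needle_signal > 0).astype(np.uint8)       # ground truth from simulation
    return img, label

bg = np.random.rand(128, 128).astype(np.float32)       # stand-in for in vivo data
needle = np.zeros_like(bg); needle[60:64, 20:100] = 1  # stand-in simulated needle
x, y = make_semi_synthetic(bg, needle)
print(x.shape, y.sum())
```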
Affiliation(s)
- Mengjie Shi
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Tianrui Zhao
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Simeon J. West
- Department of Anaesthesia, University College Hospital, London NW1 2BU, United Kingdom
- Adrien E. Desjardins
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
7.
Weakly-supervised learning for catheter segmentation in 3D frustum ultrasound. Comput Med Imaging Graph 2022;96:102037. DOI: 10.1016/j.compmedimag.2022.102037.
8.
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE Trans Ultrason Ferroelectr Freq Control 2022;69:543-552. PMID: 34748488. DOI: 10.1109/tuffc.2021.3126530.
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle; the detected transmissions are then reconstructed to form the tracking image. Two challenges arise with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, so the effective B-mode frame rate is reduced. Second, accurately localizing the needle tip is difficult when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) to maintain spatial resolution with fewer tracking transmissions and to enhance signal quality. A major component of the framework was the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and to experimental in vivo tracking data, and needle localization was investigated when reconstruction was performed with up to eightfold fewer tracking transmissions. CNN-based processing of conventional reconstructions showed that axial and lateral spatial resolution could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
9.
Automatic and accurate needle detection in 2D ultrasound during robot-assisted needle insertion process. Int J Comput Assist Radiol Surg 2021;17:295-303. PMID: 34677747. DOI: 10.1007/s11548-021-02519-6.
Abstract
PURPOSE: Robot-assisted needle insertion guided by 2D ultrasound (US) can effectively improve the accuracy and success rate of clinical puncture. Automatic and accurate needle-tracking methods are therefore important for monitoring the puncture process, avoiding deviation of the needle from the intended path, and reducing the risk of injury to surrounding tissues. This work aims to develop a framework for automatic and accurate detection of an inserted needle in 2D US images during the insertion process.
METHODS: We propose a novel convolutional neural network architecture comprising a two-channel encoder and a single-channel decoder for needle segmentation, using needle motion information extracted from two adjacent US image frames. Based on this network, we further propose an automatic needle detection framework: according to the prediction for the previous frame, a region of interest around the needle is extracted from the US image and fed into the network, achieving finer and faster continuous needle localization.
RESULTS: The performance of our method was evaluated on 1000 pairs of US images extracted from robot-assisted needle insertions into freshly excised bovine and porcine tissues. The needle segmentation network achieved 99.7% accuracy, 86.2% precision, 89.1% recall, and an F1-score of 0.87. The needle detection framework localized the needle with a mean tip error of 0.45 ± 0.33 mm and a mean orientation error of 0.42° ± 0.34°, with a total processing time of 50 ms per image.
CONCLUSION: The proposed framework demonstrated robust, accurate, and real-time needle localization during robot-assisted needle insertion. It has promising applications in tracking the needle and ensuring the safety of robot-assisted automatic puncture during challenging US-guided minimally invasive procedures.
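A minimal sketch of the ROI-tracking loop described above: the previous prediction centers a crop in which a (stand-in) two-frame network runs, and the local result is mapped back to image coordinates. The ROI size, the crop policy, and segment_pair are hypothetical placeholders, not the paper's components:

```python
import numpy as np

ROI = 96  # hypothetical square ROI size (pixels)

def crop_roi(image, center, size=ROI):
    h, w = image.shape
    y0 = int(np.clip(center[0] - size // 2, 0, h - size))
    x0 = int(np.clip(center[1] - size // 2, 0, w - size))
    return image[y0:y0 + size, x0:x0 + size], (y0, x0)

def track(frames, segment_pair, init_tip):
    """frames: list of 2D US frames; segment_pair(prev, curr) -> (row, col) tip in the ROI."""
    tip = init_tip
    for prev, curr in zip(frames, frames[1:]):
        prev_roi, _ = crop_roi(prev, tip)
        curr_roi, (y0, x0) = crop_roi(curr, tip)
        r, c = segment_pair(prev_roi, curr_roi)  # network sees motion between the two crops
        tip = (y0 + r, x0 + c)                   # map local prediction back to image coords
    return tip

# Dummy stand-in for the two-channel network: brightest pixel of the difference image.
def segment_pair(prev_roi, curr_roi):
    return np.unravel_index(np.argmax(np.abs(curr_roi - prev_roi)), curr_roi.shape)

frames = [np.random.rand(256, 256) for _ in range(5)]
print(track(frames, segment_pair, init_tip=(128, 128)))
```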
10.
Wijata A, Andrzejewski J, Pyciński B. An Automatic Biopsy Needle Detection and Segmentation on Ultrasound Images Using a Convolutional Neural Network. Ultrasonic Imaging 2021;43:262-272. PMID: 34180737. DOI: 10.1177/01617346211025267.
Abstract
Needle visualization in the ultrasound image is essential to successfully perform ultrasound-guided core needle biopsy. Automatic needle detection can significantly reduce the procedure time and false-negative rate and greatly improve diagnosis. In this paper, we present a CNN-based, fully automatic method for detecting the core needle in 2D ultrasound images. The network is trained with the adaptive moment estimation (Adam) optimizer, and the Radon transform is applied to locate the needle. The model was trained and tested on a total of 619 2D images from 91 cases of breast cancer, achieving an average weighted intersection over union (weighted Jaccard index) of 0.986, an F1 score of 0.768, and an angle RMSE of 3.73°. These results exceed other solutions by at least 0.27 in F1 score and 7° in angle RMSE. Finally, the needle is detected in a single frame in 21.6 ms on average on a modern PC.
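The Radon step above reduces needle localization to finding the maximum of a sinogram; a plausible sketch with scikit-image, using a synthetic line mask in place of the CNN output (the peak's angle bin gives the dominant line orientation, up to the Radon angle convention):

```python
# Locate a line's orientation by taking the sinogram maximum of a binary mask.
import numpy as np
from skimage.transform import radon
from skimage.draw import line

mask = np.zeros((128, 128), dtype=float)
rr, cc = line(20, 10, 100, 110)                 # synthetic "needle" segment
mask[rr, cc] = 1.0

theta = np.arange(0.0, 180.0, 0.5)
sino = radon(mask, theta=theta, circle=False)   # projections: (radial bins, angles)
r_idx, t_idx = np.unravel_index(np.argmax(sino), sino.shape)
print(f"needle angle bin ~ {theta[t_idx]:.1f} deg, radial bin {r_idx}")
```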
Affiliation(s)
- Agata Wijata
- Faculty of Biomedical Engineering, Silesian University of Technology, Zabrze, Poland
- Jacek Andrzejewski
- Faculty of Biomedical Engineering, Silesian University of Technology, Zabrze, Poland
- Bartłomiej Pyciński
- Faculty of Biomedical Engineering, Silesian University of Technology, Zabrze, Poland
11.
Time-aware deep neural networks for needle tip localization in 2D ultrasound. Int J Comput Assist Radiol Surg 2021;16:819-827. PMID: 33840037. DOI: 10.1007/s11548-021-02361-w.
Abstract
PURPOSE: Accurate placement of the needle is critical in interventions like biopsies and regional anesthesia, during which incorrect needle insertion can lead to procedure failure and complications. Therefore, ultrasound guidance is widely used to improve needle placement accuracy. However, at steep and deep insertions, the visibility of the needle is lost. Computational methods for automatic needle tip localization could improve the clinical success rate in these scenarios.
METHODS: We propose a novel algorithm for needle tip localization during challenging ultrasound-guided insertions when the shaft may be invisible and the tip has a low intensity. There are two key steps in our approach. First, we enhance the needle tip features in consecutive ultrasound frames using a detection scheme which recognizes subtle intensity variations caused by needle tip movement. We then employ a hybrid deep neural network comprising a convolutional neural network and long short-term memory recurrent units. The input to the network is a consecutive plurality of fused enhanced frames and the corresponding original B-mode frames, and this spatiotemporal information is used to predict the needle tip location.
RESULTS: We evaluate our approach on an ex vivo dataset collected with in-plane and out-of-plane insertion of 17G and 22G needles in bovine, porcine, and chicken tissue, acquired using two different ultrasound systems. We train the model with 5000 frames from 42 video sequences. Evaluation on 600 frames from 30 sequences yields a tip localization error of [Formula: see text] mm and an overall inference time of 0.064 s (15 fps). Comparison against prior art on challenging datasets reveals a 30% improvement in tip localization accuracy.
CONCLUSION: The proposed method automatically models temporal dynamics associated with needle tip motion and is more accurate than state-of-the-art methods. Therefore, it has the potential for improving needle tip localization in challenging ultrasound-guided interventions.
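A minimal sketch (PyTorch) of the CNN + LSTM hybrid described above: a small CNN encodes each two-channel frame (enhanced + B-mode) of a short clip, an LSTM aggregates the sequence, and a linear head regresses the tip coordinates. All sizes are assumptions; the paper's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn

class CnnLstmTipLocalizer(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # enhanced + B-mode channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, feat, batch_first=True)
        self.head = nn.Linear(feat, 2)

    def forward(self, clip):                   # clip: (B, T, 2, H, W)
        b, t = clip.shape[:2]
        f = self.cnn(clip.flatten(0, 1))       # (B*T, 32) per-frame features
        f, _ = self.lstm(f.view(b, t, -1))     # (B, T, feat) temporal aggregation
        return self.head(f[:, -1])             # tip (x, y) from the last time step

model = CnnLstmTipLocalizer()
print(model(torch.randn(2, 5, 2, 64, 64)).shape)   # torch.Size([2, 2])
```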
12.
Yang H, Shan C, Kolen AF, de With PHN. Efficient Medical Instrument Detection in 3D Volumetric Ultrasound Data. IEEE Trans Biomed Eng 2021;68:1034-1043. PMID: 32746017. DOI: 10.1109/tbme.2020.2999729.
Abstract
Ultrasound-guided procedures have been applied in many clinical therapies, such as cardiac catheterization and regional anesthesia. Medical instrument detection in 3D Ultrasound (US) is highly desired, but the existing approaches are far from real-time performance. Our objective is to investigate an efficient instrument detection method in 3D US for practical clinical use. We propose a novel Multi-dimensional Mixed Network for efficient instrument detection in 3D US, which extracts the discriminating features at 3D full-image level by a 3D encoder, and then applies a specially designed dimension reduction block to reduce the spatial complexity of the feature maps by projecting from 3D space into 2D space. A 2D decoder is adopted to detect the instrument along the specified axes. By projecting the predicted 2D outputs, the instrument is detected or visualized in the 3D volume. Furthermore, to enable the network to better learn the discriminative information, we propose a multi-level loss function to capture both pixel- and image-level differences. We carried out extensive experiments on two datasets for two tasks: (1) catheter detection for cardiac RF-ablation and (2) needle detection for regional anesthesia. Our experiments show that our proposed method achieves a detection error of 2-3 voxels with an efficiency of about 0.12 sec per 3D US volume. The proposed method is 3-8 times faster than the state-of-the-art methods, leading to real-time performance. The results show that our proposed method has significant clinical value for real-time 3D US-guided intervention.
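A simplified stand-in for the dimension reduction idea described above: after a 3D encoder, feature maps are collapsed along one spatial axis (here by max-projection) so a cheaper 2D decoder can operate on them. The paper's block is a learned projection; all sizes here are assumptions.

```python
import torch
import torch.nn as nn

class ProjectTo2D(nn.Module):
    """Collapse a (B, C, D, H, W) feature map to (B, C, H, W) by max-projection."""
    def __init__(self, axis=2):                 # axis 2 = depth
        super().__init__()
        self.axis = axis

    def forward(self, feat3d):
        return feat3d.max(dim=self.axis).values

enc = nn.Conv3d(1, 8, 3, padding=1)              # stand-in 3D encoder
dec2d = nn.Conv2d(8, 1, 3, padding=1)            # 2D decoder on projected features
vol = torch.randn(1, 1, 32, 64, 64)
out = dec2d(ProjectTo2D()(enc(vol)))
print(out.shape)                                 # torch.Size([1, 1, 64, 64])
```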
13.
Andersén C, Rydén T, Thunberg P, Lagerlöf JH. Deep learning-based digitization of prostate brachytherapy needles in ultrasound images. Med Phys 2020;47:6414-6420. PMID: 33012023. PMCID: PMC7821271. DOI: 10.1002/mp.14508.
Abstract
PURPOSE: To develop, and evaluate the performance of, a deep learning-based three-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) algorithm aimed at finding needles in ultrasound images used in prostate brachytherapy.
METHODS: Transrectal ultrasound (TRUS) image volumes from 1102 treatments were used to create a clinical ground truth (CGT) including 24,422 individual needles that had been manually digitized by medical physicists during brachytherapy procedures. A 3D CNN U-net with 128 × 128 × 128 TRUS image volumes as input was trained using 17,215 needle examples. Predictions of voxels constituting a needle were combined to yield a 3D linear function describing the localization of each needle in a TRUS volume. Manual and AI digitizations were compared in terms of the root-mean-square distance (RMSD) along each needle, expressed as median and interquartile range (IQR). The method was evaluated on a data set including 7,207 needle examples. A subgroup of the evaluation data set (n = 188) was created, where the needles were digitized once more by a medical physicist (G1) trained in brachytherapy. The digitization procedure was timed.
RESULTS: The RMSD between the AI and CGT was 0.55 (IQR: 0.35-0.86) mm. In the smaller subset, the RMSD between AI and CGT was similar (0.52 [IQR: 0.33-0.79] mm) but significantly smaller (P < 0.001) than the difference of 0.75 (IQR: 0.49-1.20) mm between AI and G1. The difference between CGT and G1 was 0.80 (IQR: 0.48-1.18) mm, implying that the AI performed as well as the CGT in relation to G1. The mean time needed for human digitization was 10 min 11 sec, while the time needed for the AI was negligible.
CONCLUSIONS: A 3D CNN can be trained to identify needles in TRUS images. The performance of the network was similar to that of a medical physicist trained in brachytherapy. Incorporating a CNN for needle identification can shorten brachytherapy treatment procedures substantially.
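The "3D linear function" above can be obtained by a least-squares line fit to the voxels predicted for one needle; a sketch using the principal direction from an SVD (the voxel set is illustrative):

```python
# Fit a 3D line to predicted needle voxels: centroid + principal direction.
import numpy as np

def fit_line_3d(points):
    """points: (N, 3) voxel coordinates predicted to belong to one needle."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]                      # first right-singular vector
    return centroid, direction             # line: centroid + t * direction

pts = np.array([[0, 0, 0], [1, 1, 2], [2, 2, 4], [3, 3.1, 6.1]])
c, d = fit_line_3d(pts)
print(c, d / np.linalg.norm(d))
```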
Affiliation(s)
- Christoffer Andersén
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Tobias Rydén
- Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg, Sweden
- Per Thunberg
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Jakob H. Lagerlöf
- Department of Medical Physics, Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Department of Medical Physics, Karlstad Central Hospital, Karlstad, Sweden
14.
Zhang Y, Tian Z, Lei Y, Wang T, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Automatic multi-needle localization in ultrasound images using large margin mask RCNN for ultrasound-guided prostate brachytherapy. Phys Med Biol 2020;65:205003. PMID: 32640435. DOI: 10.1088/1361-6560/aba410.
Abstract
Multi-needle localization in ultrasound (US) images is a crucial step of treatment planning for US-guided prostate brachytherapy. However, current computer-aided technologies mostly focus on single-needle digitization, while manual digitization is labor intensive and time consuming. In this paper, we propose a deep learning-based workflow for fast automatic multi-needle digitization, comprising needle shaft detection and needle tip detection. The workflow has two main components: a large margin mask R-CNN model (LMMask R-CNN), which adopts a larger margin loss to reformulate Mask R-CNN for needle shaft localization, and a needle-based density-based spatial clustering of applications with noise (DBSCAN) algorithm, which integrates priors to model each needle iteratively for shaft refinement and tip detection. In addition, we use skip connections in the network architecture to improve supervision in the hidden layers. The workflow was evaluated on 23 patients who underwent US-guided high-dose-rate (HDR) prostate brachytherapy, with 339 needles tested in total. Our method detected 98% of the needles with a shaft error of 0.091 ± 0.043 mm and a tip error of 0.330 ± 0.363 mm. Compared with using Mask R-CNN or LMMask R-CNN alone, the proposed method gains a significant improvement in both shaft and tip error. It automatically digitizes the needles of a patient within a second, streamlining the workflow of transrectal ultrasound-guided HDR prostate brachytherapy and paving the way for a real-time treatment planning system that is expected to further elevate the quality and outcome of HDR prostate brachytherapy.
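The DBSCAN stage above groups candidate shaft points into individual needles; a minimal sketch with scikit-learn, where eps/min_samples and the point sets are illustrative rather than the paper's settings (which also fold in needle-shape priors):

```python
# Cluster candidate needle-shaft pixels into individual needles with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.vstack([
    np.column_stack([np.arange(40), np.full(40, 10.0)]),   # needle 1 (row, col)
    np.column_stack([np.arange(40), np.full(40, 30.0)]),   # needle 2
])
labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(points)
for k in set(labels) - {-1}:                                # label -1 = noise
    print(f"needle {k}: {np.sum(labels == k)} points")
```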
Affiliation(s)
- Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
15.
Efficient and Robust Instrument Segmentation in 3D Ultrasound Using Patch-of-Interest-FuseNet with Hybrid Loss. Med Image Anal 2020;67:101842. PMID: 33075639. DOI: 10.1016/j.media.2020.101842.
Abstract
Instrument segmentation plays a vital role in 3D ultrasound (US)-guided cardiac intervention. Efficient and accurate segmentation during the operation is highly desired, since it can facilitate the operation, reduce operational complexity, and therefore improve the outcome. Nevertheless, current image-based instrument segmentation methods are neither efficient nor accurate enough for clinical use. Lately, fully convolutional neural networks (FCNs), both 2D and 3D, have been used in different volumetric segmentation tasks. However, a 2D FCN cannot exploit the 3D contextual information in volumetric data, while a 3D FCN incurs a high computational cost and requires a large amount of training data. Moreover, with limited computational resources, a 3D FCN is commonly applied with a patch-based strategy, which is therefore not efficient for clinical applications. To address these issues, we propose POI-FuseNet, which consists of a patch-of-interest (POI) selector and a FuseNet. The POI selector efficiently selects the regions containing the instrument, while FuseNet hierarchically exploits contextual information using both 2D and 3D FCN features. Furthermore, we propose a hybrid loss function, consisting of a contextual loss and a class-balanced focal loss, to improve the segmentation performance of the network. On a challenging ex vivo dataset of RF-ablation catheters, our method achieved a Dice score of 70.5%, superior to state-of-the-art methods. In addition, starting from the model pre-trained on the ex vivo dataset, our method adapts to an in vivo guidewire dataset from a different cardiac operation, achieving a Dice score of 66.5%. More crucially, with the POI-based strategy, segmentation time is reduced to around 1.3 seconds per volume, which shows the proposed method is promising for clinical use.
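A minimal sketch of a binary class-balanced focal loss of the kind named above: alpha reweights the rare instrument class and gamma down-weights easy voxels. Alpha and gamma here are common defaults, not the paper's values, and the contextual loss term is omitted:

```python
import torch

def focal_loss(logits, target, alpha=0.75, gamma=2.0):
    """logits, target: same-shape tensors; target in {0, 1}."""
    p = torch.sigmoid(logits)
    pt = p * target + (1 - p) * (1 - target)          # prob. of the true class
    w = alpha * target + (1 - alpha) * (1 - target)   # class-balancing weight
    return (-w * (1 - pt).pow(gamma) * pt.clamp_min(1e-6).log()).mean()

logits = torch.randn(2, 1, 8, 8, 8)                   # e.g. 3D segmentation output
target = (torch.rand_like(logits) > 0.95).float()     # sparse instrument voxels
print(focal_loss(logits, target).item())
```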
16.
Rodgers JR, Hrinivich WT, Surry K, Velker V, D'Souza D, Fenster A. A semiautomatic segmentation method for interstitial needles in intraoperative 3D transvaginal ultrasound images for high-dose-rate gynecologic brachytherapy of vaginal tumors. Brachytherapy 2020;19:659-668. PMID: 32631651. DOI: 10.1016/j.brachy.2020.05.006.
Abstract
PURPOSE: The purpose of this study was to evaluate the use of a semiautomatic algorithm to simultaneously segment multiple high-dose-rate (HDR) gynecologic interstitial brachytherapy (ISBT) needles in three-dimensional (3D) transvaginal ultrasound (TVUS) images, with the aim of providing a clinically useful tool for intraoperative implant assessment.
METHODS AND MATERIALS: A needle segmentation algorithm previously developed for HDR prostate brachytherapy was adapted and extended to 3D TVUS images from gynecologic ISBT patients with vaginal tumors. Two patients were used for refining/validating the modified algorithm and five patients (8-12 needles/patient) were reserved as an unseen test data set. The images were filtered to enhance needle edges, using intensity peaks to generate feature points, and leveraged the randomized 3D Hough transform to identify candidate needle trajectories. Algorithmic segmentations were compared against manual segmentations and calculated dwell positions were evaluated.
RESULTS: All 50 test data set needles were successfully segmented, with 96% of algorithmically segmented needles having angular differences <3° compared with manually segmented needles, and a maximum Euclidean distance <2.1 mm. The median distance between corresponding dwell positions was 0.77 mm, with 86% of needles having maximum differences <3 mm. The mean segmentation time using the algorithm was <30 s/patient.
CONCLUSIONS: We successfully segmented multiple needles simultaneously in intraoperative 3D TVUS images from gynecologic HDR-ISBT patients with vaginal tumors and demonstrated the robustness of the algorithmic approach to image artifacts. This method provided accurate segmentations within a clinically efficient timeframe, providing the potential to be translated into intraoperative clinical use for implant assessment.
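A plausible sketch of the randomized Hough idea above: pairs of feature points are drawn at random, each pair proposes a candidate 3D trajectory, and candidates are scored by the number of nearby feature points. The iteration count, tolerance, and synthetic data are illustrative:

```python
# Randomized line detection in 3D: sample point pairs, score by inlier count.
import numpy as np

rng = np.random.default_rng(0)

def point_line_dist(pts, p0, d):
    v = pts - p0
    return np.linalg.norm(v - np.outer(v @ d, d), axis=1)  # perpendicular distance

def randomized_hough_line(pts, iters=200, tol=1.5):
    best = (0, None)
    for _ in range(iters):
        a, b = pts[rng.choice(len(pts), 2, replace=False)]
        if np.allclose(a, b):
            continue
        d = (b - a) / np.linalg.norm(b - a)                # candidate direction
        inliers = np.sum(point_line_dist(pts, a, d) < tol)
        if inliers > best[0]:
            best = (inliers, (a, d))
    return best

t = np.linspace(0, 50, 60)[:, None]
needle = np.hstack([t, 0.5 * t, 0.2 * t]) + rng.normal(0, 0.3, (60, 3))
print(randomized_hough_line(needle)[0], "inliers of 60")
```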
Affiliation(s)
- Jessica Robin Rodgers
- School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada.
- William Thomas Hrinivich
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD
- Kathleen Surry
- Department of Medical Physics, London Regional Cancer Program, London, Ontario, Canada
- Vikram Velker
- Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- David D'Souza
- Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, Canada
- Aaron Fenster
- School of Biomedical Engineering, The University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, The University of Western Ontario, London, Ontario, Canada
17.
Gillies DJ, Rodgers JR, Gyacskov I, Roy P, Kakani N, Cool DW, Fenster A. Deep learning segmentation of general interventional tools in two-dimensional ultrasound images. Med Phys 2020;47:4956-4970. DOI: 10.1002/mp.14427.
Affiliation(s)
- Derek J. Gillies
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Jessica R. Rodgers
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Igor Gyacskov
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Priyanka Roy
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Nirmal Kakani
- Department of Radiology, Manchester Royal Infirmary, Manchester M13 9WL, UK
- Derek W. Cool
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
18.
Zhang Y, He X, Tian Z, Jeong JJ, Lei Y, Wang T, Zeng Q, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-Needle Detection in 3D Ultrasound Images Using Unsupervised Order-Graph Regularized Sparse Dictionary Learning. IEEE Trans Med Imaging 2020;39:2302-2315. PMID: 31985414. PMCID: PMC7370243. DOI: 10.1109/tmi.2020.2968770.
Abstract
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, ignoring the massive database of US images without needles. In this paper, we propose a workflow for multi-needle detection that treats the needle-free images as auxiliary data. Concretely, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, for which we develop an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning. Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to yield centers. With the obtained centers, regions of interest (ROIs) are constructed by seeking cylinders. Finally, we detect needles with the random sample consensus (RANSAC) algorithm per ROI and locate the tips by finding the sharp intensity drop along the detected axis of every needle. Extensive experiments were conducted on a phantom dataset and a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of our proposed workflow. Specifically, our method can correctly detect 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique provides accurate multi-needle detection for US-guided HDR prostate brachytherapy, facilitating the clinical workflow.
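A minimal sketch of the residual idea above: learn a dictionary on needle-free patch vectors, sparsely reconstruct target patches with it, and flag patches with large reconstruction residuals. This omits the paper's order-graph regularizer and position-specific dictionaries; the data are stand-ins:

```python
# Residual-based needle highlighting via sparse dictionary coding.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
background = rng.normal(0, 1, (500, 64))       # stand-in needle-free patch vectors

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dico.fit(background)

target = rng.normal(0, 1, (10, 64))
target[0] += 5.0                               # one patch with a bright "needle"
codes = dico.transform(target)                 # sparse codes
residual = np.linalg.norm(target - codes @ dico.components_, axis=1)
print(residual.round(1))                       # the needle patch should stand out
```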
19.
Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020;47:4115-4124. PMID: 32484573. DOI: 10.1002/mp.14307.
Abstract
PURPOSE: High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning, and manually identifying the source path is labor intensive and time inefficient. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning.
METHODS: An attention-gated U-Net incorporating a total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using binary catheter annotation images offered by experienced physicists as ground truth, paired with the original MRI images. After training, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessment was based on catheter shaft and tip errors compared to the ground truth.
RESULTS: Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For catheter tips, 87% were detected within an error of ± 2.0 mm, and more than 71% were localized within an absolute error of no more than 1.0 mm. For catheter shafts, 97% were detected with an error of <2.0 mm, and 63% within 1.0 mm.
CONCLUSIONS: In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI of HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
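A minimal sketch of a total-variation term of the kind described above: finite differences of the predicted probability map penalize spatial discontinuities so catheter predictions stay piecewise-continuous. Shown in 2D for brevity; the weight is an assumption:

```python
import torch

def tv_loss(prob, weight=1e-3):
    """prob: (B, 1, H, W) sigmoid output of the segmentation network."""
    dh = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean()  # vertical differences
    dw = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean()  # horizontal differences
    return weight * (dh + dw)

# Added to the segmentation loss, e.g. total = bce(prob, gt) + tv_loss(prob).
prob = torch.rand(2, 1, 64, 64)
print(tv_loss(prob).item())
```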
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Sean A Dresser
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
20.
Zhang Y, Lei Y, Qiu RLJ, Wang T, Wang H, Jani AB, Curran WJ, Patel P, Liu T, Yang X. Multi-needle Localization with Attention U-Net in US-guided HDR Prostate Brachytherapy. Med Phys 2020;47:2735-2745. PMID: 32155666. DOI: 10.1002/mp.14128.
Abstract
PURPOSE: Ultrasound (US)-guided high-dose-rate (HDR) prostate brachytherapy requires clinicians to place HDR needles (catheters) into the prostate gland under transrectal US (TRUS) guidance in the operating room. The quality of the subsequent radiation treatment plan is largely dictated by the needle placements, which vary with the experience level of the clinicians and the procedure protocols. A real-time plan dose distribution, if available, could be a vital tool for a more objective assessment of the needle placements, potentially improving the radiation plan quality and the treatment outcome. However, due to the low signal-to-noise ratio (SNR) in US imaging, real-time multi-needle segmentation in 3D TRUS, the major obstacle to real-time dose mapping, has not been realized to date. In this study, we propose a deep learning-based method that enables accurate, real-time digitization of multiple needles in the 3D TRUS images of HDR prostate brachytherapy.
METHODS: A deep learning model based on the U-Net architecture was developed to segment multiple needles in the 3D TRUS images. Attention gates were included in our model to improve the prediction of the small needle points, and the spatial continuity of needles was encoded into the model with total variation (TV) regularization. The combined network was trained on 3D TRUS patches with a deep supervision strategy, with binary needle annotation images provided as ground truth. The trained network was then used to localize and segment the HDR needles in a new patient's TRUS images. We evaluated our method based on needle shaft and tip errors against manually defined ground truth and compared it with other state-of-the-art methods (U-Net and deeply supervised attention U-Net).
RESULTS: Our method detected 96% of the 339 needles from 23 HDR prostate brachytherapy patients with a shaft error of 0.290 ± 0.236 mm and a tip error of 0.442 ± 0.831 mm. For shaft localization, our method yielded 96% of localizations with less than 0.8 mm error (the needle diameter is 1.67 mm); for tip localization, 75% of needles had 0 mm error and 21% had 2 mm error (the TRUS image slice thickness is 2 mm). No significant difference was observed (P = 0.83) between our tip localizations and the ground truth. Compared with U-Net and deeply supervised attention U-Net, the proposed method delivers a significant improvement in both shaft and tip error (P < 0.05).
CONCLUSIONS: We proposed a new segmentation method to precisely localize the tips and shafts of multiple needles in 3D TRUS images of HDR prostate brachytherapy. The 3D rendering of the needles can help clinicians evaluate needle placements, and the method paves the way for real-time plan dose assessment tools that can further elevate the quality and outcome of HDR prostate brachytherapy.
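A minimal sketch (PyTorch) of an additive attention gate of the kind used above, after Oktay et al.'s Attention U-Net: decoder gating features reweight encoder skip features so small needle points are emphasized. Channel sizes are assumptions, and the gating signal is assumed already resampled to the skip's resolution:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_x, ch_g, ch_mid):
        super().__init__()
        self.wx = nn.Conv3d(ch_x, ch_mid, 1)
        self.wg = nn.Conv3d(ch_g, ch_mid, 1)
        self.psi = nn.Conv3d(ch_mid, 1, 1)

    def forward(self, x, g):
        # x: encoder skip (B, ch_x, D, H, W); g: decoder gate at the same resolution
        a = torch.sigmoid(self.psi(torch.relu(self.wx(x) + self.wg(g))))
        return x * a                              # attended skip connection

gate = AttentionGate(ch_x=32, ch_g=64, ch_mid=16)
x = torch.randn(1, 32, 8, 16, 16)
g = torch.randn(1, 64, 8, 16, 16)
print(gate(x, g).shape)                           # torch.Size([1, 32, 8, 16, 16])
```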
Affiliation(s)
- Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang
- Department of Radiation Oncology, New York University, New York, NY, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
21.
Simultaneous reconstruction of multiple stiff wires from a single X-ray projection for endovascular aortic repair. Int J Comput Assist Radiol Surg 2019;14:1891-1899. PMID: 31440962. DOI: 10.1007/s11548-019-02052-7.
Abstract
PURPOSE: Endovascular repair of aortic aneurysms (EVAR) can be supported by fusing pre- and intraoperative data to allow for improved navigation and to reduce the amount of contrast agent needed during the intervention. However, stiff wires and delivery devices can deform the vasculature severely, which reduces the accuracy of the fusion. Knowledge about the 3D position of the inserted instruments can help to transfer these deformations to the preoperative information.
METHOD: We propose a method to simultaneously reconstruct the stiff wires in both iliac arteries based on only a single monoplane acquisition, thereby avoiding interference with the clinical workflow. In the available X-ray projection, the 2D course of the wire is extracted. Then, a virtual second view of each wire orthogonal to the real projection is estimated using the preoperative vessel anatomy from a computed tomography angiography as prior information. Based on the real and virtual 2D wire courses, the wires can then be reconstructed in 3D using epipolar geometry.
RESULTS: We achieve a mean modified Hausdorff distance of 4.2 mm between the estimated 3D position and the true wire course for the contralateral side and 4.5 mm for the ipsilateral side.
CONCLUSION: The accuracy and speed of the proposed method allow for use in an intraoperative setting of deformation correction for EVAR.
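A toy sketch of the epipolar step above: given the real projection and an estimated orthogonal virtual view, corresponding 2D wire points can be triangulated back to 3D. The projection matrices here are idealized orthogonal views, not a calibrated C-arm model:

```python
# Triangulate 3D wire points from one real and one virtual orthogonal view.
import numpy as np
import cv2

# Real view projects onto the x-y plane; virtual view onto the z-y plane.
P_real = np.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.],
                   [0., 0., 0., 1.]])
P_virt = np.array([[0., 0., 1., 0.],
                   [0., 1., 0., 0.],
                   [0., 0., 0., 1.]])

wire_3d = np.array([[10., 5., 2.], [11., 6., 2.5], [12., 7., 3.1]]).T  # (3, N)
pts_real = wire_3d[[0, 1]]            # simulated 2D wire course in the real view
pts_virt = wire_3d[[2, 1]]            # simulated 2D wire course in the virtual view

X = cv2.triangulatePoints(P_real, P_virt, pts_real, pts_virt)  # homogeneous (4, N)
print((X[:3] / X[3]).T)               # recovered 3D wire points
```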
22.
Learning needle tip localization from digital subtraction in 2D ultrasound. Int J Comput Assist Radiol Surg 2019;14:1017-1026. DOI: 10.1007/s11548-019-01951-z.