1
He W, Zhao B, Zhou Y, Wu R, Wu G, Li Y, Lu M, Zhu L, Gao Y. Freehand 3D Ultrasound Imaging Based on Probe-mounted Vision and IMU System. Ultrasound Med Biol 2024;50:1143-1154. [PMID: 38702284] [DOI: 10.1016/j.ultrasmedbio.2024.03.021]
Abstract
OBJECTIVES Freehand three-dimensional (3D) ultrasound (US) is of great significance for clinical diagnosis and treatment; it is often achieved with the aid of external devices (optical and/or electromagnetic) that monitor the location and orientation of the US probe. However, this external monitoring is often impacted by the imaging environment, for example by optical occlusions and/or electromagnetic (EM) interference. METHODS To address these issues, we integrated a binocular camera and an inertial measurement unit (IMU) on a US probe. We then built a tight coupling model using an unscented Kalman filter based on Lie groups (UKF-LG), combining visual and inertial information to infer the probe's movement, from which the position and orientation of each US image frame are calculated. Finally, the volume data were reconstructed with a voxel-based hole-filling method. RESULTS Experiments were conducted, including calibration, tracking-performance evaluation, phantom scans, and real-scenario scans. The results show that the proposed system achieved an accumulated frame position error of 3.78 mm and an orientation error of 0.36°, and reconstructed high-quality 3D US images in both phantom and real scenarios. CONCLUSIONS The proposed method was demonstrated to enhance the robustness and effectiveness of freehand 3D US. Follow-up research will focus on improving the accuracy and stability of multi-sensor fusion to make the system more practical in clinical environments.
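The authors' tight coupling model is an unscented Kalman filter on Lie groups, which is beyond the scope of a short sketch. As an illustration of the general predict-with-IMU / update-with-camera pattern only, here is a minimal one-axis linear Kalman filter; the function name, state layout, and noise parameters are all hypothetical, not taken from the paper:

```python
import numpy as np

def fuse_vision_imu(accels, cam_positions, dt, accel_var=0.05, cam_var=0.01):
    """Fuse IMU accelerations (prediction step) with camera position fixes
    (update step) along one axis using a constant-velocity Kalman filter.
    Returns the filtered position track."""
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    F = np.array([[1, dt], [0, 1]])         # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # control input (acceleration)
    Q = accel_var * np.outer(B, B)          # process noise from accel noise
    H = np.array([[1.0, 0.0]])              # camera observes position only
    R = np.array([[cam_var]])
    track = []
    for a, z in zip(accels, cam_positions):
        # predict with the IMU acceleration sample
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # correct with the camera position fix
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```

The real system replaces this scalar state with a pose on a Lie group and propagates sigma points instead of a linear model, but the prediction/correction structure is the same.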
Affiliation(s)
- Weizhen He
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Bingshuai Zhao
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Yongjin Zhou
- Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University, Shenzhen, China
- Guangyao Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University, Shenzhen, China
- Ye Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Minhua Lu
- Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Yi Gao
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
2
Seitel A, Groener D, Eisenmann M, Aguilera Saiz L, Pekdemir B, Sridharan P, Nguyen CT, Häfele S, Feldmann C, Everitt B, Happel C, Herrmann E, Sabet A, Grünwald F, Franz AM, Maier-Hein L. Miniaturized electromagnetic tracking enables efficient ultrasound-navigated needle insertions. Sci Rep 2024;14:14161. [PMID: 38898086] [PMCID: PMC11187124] [DOI: 10.1038/s41598-024-64530-6]
Abstract
Ultrasound (US) has gained popularity as a guidance modality for percutaneous needle insertions because it is widely available and non-ionizing. However, coordinating scanning and needle insertion still requires significant experience. Current assistance solutions use optical or electromagnetic tracking (EMT) technology directly integrated into the US device or probe. This results in specialized devices or introduces additional hardware, limiting the ergonomics of both the scanning and insertion processes. We developed the first US navigation solution designed to be used as a non-permanent accessory for existing US devices while maintaining ergonomics during scanning. A miniaturized EMT source is reversibly attached to the US probe, temporarily creating a combined modality that provides simultaneous real-time anatomical imaging and instrument tracking. Studies performed with 11 clinical operators show that the proposed navigation solution can guide needle insertions with a targeting accuracy of about 5 mm, comparable to existing approaches and unaffected by repeated attachment and detachment of the miniaturized tracking solution. The assistance proved particularly helpful for non-expert users and for needle insertions performed outside of the US plane. The small size and reversible attachability of the proposed navigation solution promise streamlined integration into the clinical workflow and widespread access to US-navigated punctures.
Affiliation(s)
- Alexander Seitel
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), a partnership between DKFZ and Heidelberg University Hospital, 69120, Heidelberg, Germany
- Daniel Groener
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Matthias Eisenmann
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Laura Aguilera Saiz
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bünyamin Pekdemir
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Patmaa Sridharan
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Cam Tu Nguyen
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Sebastian Häfele
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Carolin Feldmann
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Brittaney Everitt
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Christian Happel
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Eva Herrmann
- Department of Medicine, Institute for Biostatistics, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Amir Sabet
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Frank Grünwald
- Department of Nuclear Medicine, Clinic for Radiology and Nuclear Medicine, University Hospital, Goethe University Frankfurt, 60596, Frankfurt, Germany
- Alfred Michael Franz
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Institute for Computer Science, Ulm University of Applied Sciences, 89075, Ulm, Germany
- Lena Maier-Hein
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), a partnership between DKFZ and Heidelberg University Hospital, 69120, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, 69120, Heidelberg, Germany
- Medical Faculty, Heidelberg University, 69120, Heidelberg, Germany
- Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
3
Li Q, Shen Z, Li Q, Barratt DC, Dowrick T, Clarkson MJ, Vercauteren T, Hu Y. Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker. IEEE Trans Biomed Eng 2023;PP:1033-1042. [PMID: 37856260] [DOI: 10.1109/tbme.2023.3325551]
Abstract
OBJECTIVE Reconstructing freehand ultrasound in 3D without any external tracker has been a long-standing challenge in ultrasound-assisted procedures. We aim to define new ways of parameterising long-term dependencies and to evaluate their performance. METHODS First, long-term dependency is encoded by transformation positions within a frame sequence. This is achieved by combining a sequence model with a multi-transformation prediction. Second, two dependency factors are proposed as contributing towards accurate reconstruction: anatomical image content and scanning protocol. Each factor is quantified experimentally by reducing the respective training variance. RESULTS 1) The added long-term dependency, up to 400 frames at 20 frames per second (fps), indeed improved reconstruction, with an up to 82.4% lower accumulated error compared with the baseline. The improvement was found to depend on sequence length, transformation interval and scanning protocol and, unexpectedly, not on the use of recurrent networks with long short-term memory modules. 2) Decreasing either anatomical or protocol variance in training led to poorer reconstruction accuracy. Interestingly, greater performance was gained from representative protocol patterns than from representative anatomical features. CONCLUSION The proposed algorithm uses hyperparameter tuning to effectively utilise long-term dependency. The proposed dependency factors are of practical significance in collecting diverse training data, regulating scanning protocols and developing efficient networks. SIGNIFICANCE The proposed methodology, with publicly available volunteer data and code for parameterising long-term dependency, experimentally identifies valid sources of performance improvement, which could lead to better model development and practical optimisation of the reconstruction application.
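Why accumulated error is the natural metric here: without an external tracker, per-frame relative transforms are chained into absolute poses, so every small prediction error compounds along the sequence. A minimal sketch of that composition (homogeneous 4x4 matrices; illustrative only, not the paper's network):

```python
import numpy as np

def accumulate_poses(rel_transforms):
    """Compose predicted frame-to-frame 4x4 rigid transforms into absolute
    poses. Any per-step error compounds along the sequence (drift), which
    is why longer-range transform predictions can help."""
    T = np.eye(4)
    poses = [T.copy()]
    for dT in rel_transforms:
        T = T @ dT          # chain the next inter-frame transform
        poses.append(T.copy())
    return poses
```

Predicting transforms over multiple intervals (frame i to frames i+1, i+2, ..., as in the multi-transformation prediction described above) gives the model longer chains to constrain, rather than only consecutive pairs.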
4
Lu W, Chen J, Wang Y, Chang W, Wang Y, Chen C, Dong L, Liang P, Kong D. Coplanarity Constrained Ultrasound Probe Calibration Based on N-Wire Phantom. Ultrasound Med Biol 2023;49:2316-2324. [PMID: 37541788] [DOI: 10.1016/j.ultrasmedbio.2023.05.015]
Abstract
OBJECTIVE N-wire phantom-based ultrasound probe calibration has been widely used in many freehand tracked ultrasound imaging systems. The calibration matrix is obtained by registering the coplanar point cloud in ultrasound space to the non-coplanar point cloud in tracking-sensor space using the least-squares method. This method is sensitive to outliers and loses the coplanarity information of the fiducial points. In this article, we describe a coplanarity-constrained calibration algorithm that addresses these issues. METHODS We verified that the out-of-plane error along the oblique wire in the N-wire phantom follows a normal distribution and used it to remove experimental outliers and fit the plane with the Levenberg-Marquardt algorithm. We then projected the points onto the plane along the oblique wire. Coplanarity-constrained point cloud registration was used to calculate the transformation matrix. RESULTS Compared with two other commonly used methods, our method had the best calibration precision, improving mean calibration accuracy by 25% and 36% over the closed-form solution and the in-plane error method, respectively, at a depth of 16 cm. Experiments at different depths revealed that our algorithm performed better in our setup. CONCLUSION Our proposed coplanarity-constrained calibration algorithm achieved significant improvements in both precision and accuracy compared with existing algorithms using the same N-wire phantom. Calibration accuracy is expected to improve when the algorithm is applied to other N-wire phantom-based calibration procedures.
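The least-squares baseline the authors improve on, registering two corresponding fiducial point clouds, can be sketched with the standard SVD-based (Kabsch) closed-form solution. This is only the baseline step; the coplanarity constraint and the Levenberg-Marquardt plane fit are not reproduced here:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch): find R, t minimising
    sum ||R @ src_i + t - dst_i||^2 over matched fiducials.
    src, dst: (N, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # force a proper rotation (det = +1), not a reflection
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because this solution weights all residuals equally, a single outlier fiducial can skew R and t, which is exactly the sensitivity the coplanarity-constrained formulation above is designed to reduce.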
Affiliation(s)
- Wenliang Lu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Jiye Chen
- Fifth Medical Center, Chinese PLA General Hospital, Beijing, China; Chinese PLA Medical School, Beijing, China
- Yuan Wang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Wanru Chang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Yun Wang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
- Linan Dong
- Department of Interventional Ultrasound, First Medical Center, Chinese PLA General Hospital, Beijing, China
- Ping Liang
- Fifth Medical Center, Chinese PLA General Hospital, Beijing, China; Chinese PLA Medical School, Beijing, China
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
5
Luo M, Yang X, Wang H, Dou H, Hu X, Huang Y, Ravikumar N, Xu S, Zhang Y, Xiong Y, Xue W, Frangi AF, Ni D, Sun L. RecON: Online learning for sensorless freehand 3D ultrasound reconstruction. Med Image Anal 2023;87:102810. [PMID: 37054648] [DOI: 10.1016/j.media.2023.102810]
Abstract
Sensorless freehand 3D ultrasound (US) reconstruction based on deep networks shows promising advantages, such as a large field of view, relatively high resolution, low cost, and ease of use. However, existing methods mainly consider vanilla scan strategies with limited inter-frame variation, and thus degrade on the complex but routine scan sequences encountered in clinics. In this context, we propose a novel online learning framework for freehand 3D US reconstruction under complex scan strategies with diverse scanning velocities and poses. First, we devise a motion-weighted training loss to regularize the frame-by-frame scan variation and better mitigate the negative effects of uneven inter-frame velocity. Second, we drive online learning with local-to-global pseudo-supervision: it mines both frame-level contextual consistency and a path-level similarity constraint to improve inter-frame transformation estimation. We further explore a global adversarial shape prior to transfer latent anatomical priors as supervision. Third, we build a feasible differentiable reconstruction approximation to enable end-to-end optimization of our online learning. Experimental results illustrate that our freehand 3D US reconstruction framework outperformed current methods on two large simulated datasets and one real dataset. In addition, we applied the proposed framework to clinical scan videos to further validate its effectiveness and generalizability.
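The idea behind motion weighting, penalising a frame's transform error in proportion to how much motion it actually contains so that fast inter-frame motion is not under-penalised, can be illustrated with a toy numpy sketch. The function, pose parameterisation, and weighting scheme below are simplified stand-ins, not the paper's actual loss:

```python
import numpy as np

def motion_weighted_loss(pred, target, eps=1e-6):
    """Toy motion-weighted loss: per-frame L1 error on predicted pose
    parameters, weighted by each frame's ground-truth motion magnitude.
    pred, target: (N, 6) arrays of inter-frame pose parameters."""
    err = np.abs(pred - target).sum(axis=1)     # per-frame L1 error
    w = np.linalg.norm(target, axis=1)          # motion magnitude per frame
    w = w / (w.sum() + eps)                     # normalise weights
    return float((w * err).sum())
```

With uniform weighting, slow, near-static frames dominate the average; weighting by motion magnitude shifts emphasis to the frames where accurate transform estimation matters most.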
6
Wu C, Fu T, Chen X, Xiao J, Ai D, Fan J, Lin Y, Song H, Yang J. Automatic spatial calibration of freehand ultrasound probe with a multilayer N-wire phantom. Ultrasonics 2023;128:106862. [PMID: 36240539] [DOI: 10.1016/j.ultras.2022.106862]
Abstract
The classic N-wire phantom has been widely used in the calibration of freehand ultrasound probes. One of its main challenges is accurately identifying N-fiducials in ultrasound images, especially with multiple N-wire structures. In this study, a method using a multilayer N-wire phantom for the automatic spatial calibration of ultrasound images is proposed. All dots in the ultrasound image are segmented, scored, and classified according to the unique geometric features of the multilayer N-wire phantom. A recognition method for identifying N-fiducials from the dots is proposed for calibrating the spatial transformation of the ultrasound probe. At depths of 9, 11, 13, and 15 cm, the reconstruction errors over 50 points were 1.24 ± 0.16, 1.09 ± 0.06, 0.95 ± 0.08, and 1.02 ± 0.05 mm, respectively. The reconstruction mockup test shows a distance accuracy of 1.11 ± 0.82 mm at a depth of 15 cm.
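Once the three collinear dots of one "N" have been identified, the classic N-wire relation recovers the 3-D phantom-space position of the middle fiducial from an in-image distance ratio. This is the standard textbook relation, sketched here as a hypothetical helper; the authors' multilayer scoring and classification steps are not shown:

```python
import numpy as np

def n_fiducial_position(p_left, p_mid, p_right, a, b):
    """Classic N-wire geometry: the middle dot lies on the diagonal wire,
    and the in-image ratio |p_mid - p_left| / |p_right - p_left| equals
    its fractional position along the diagonal from phantom corner a to
    corner b. Returns the 3-D phantom-space point of the middle fiducial."""
    r = np.linalg.norm(p_mid - p_left) / np.linalg.norm(p_right - p_left)
    return a + r * (b - a)
```

With three or more such 3-D points per image (one per "N"), the image-to-probe transform can then be solved by rigid registration.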
Affiliation(s)
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Xinyu Chen
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
7
Marcadent S, Heches J, Favre J, Desseauve D, Thiran JP. 3-D Freehand Ultrasound Calibration Using a Tissue-Mimicking Phantom With Parallel Wires. Ultrasound Med Biol 2023;49:165-177. [PMID: 36257837] [DOI: 10.1016/j.ultrasmedbio.2022.08.011]
Abstract
This article describes a method for calibrating 3-D freehand ultrasound systems based on phantoms with parallel wires forming two perpendicular planes, such as the usual general-purpose commercial phantoms. In our algorithm, the phantom pose is co-optimized with the calibration to avoid the need to precisely track the phantom. We provide a geometrical analysis to explain the proposed acquisition protocol. Finally, we estimate the system's accuracy and precision from measurements acquired on an independent test phantom. On average, we obtained error norms of 1.6 mm up to 6 cm of depth and 3.5 mm between 6 and 14 cm of depth. In conclusion, it is possible to calibrate ultrasound tracked-probe systems with reasonable accuracy using a general-purpose phantom. Contrary to most calibration methods, which require building a custom phantom, the present algorithm is based on a standard phantom geometry that is commercially available.
Affiliation(s)
- Sandra Marcadent
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory 5 (LTS5), Lausanne, Switzerland
- Johann Heches
- Lausanne University Hospital (CHUV), Swiss BioMotion Lab, Lausanne, Switzerland; University of Lausanne (UNIL), Lausanne, Switzerland
- Julien Favre
- Lausanne University Hospital (CHUV), Swiss BioMotion Lab, Lausanne, Switzerland; University of Lausanne (UNIL), Lausanne, Switzerland
- David Desseauve
- Lausanne University Hospital (CHUV), Department of Woman-Mother-Child, Lausanne, Switzerland
- Jean-Philippe Thiran
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory 5 (LTS5), Lausanne, Switzerland; University of Lausanne (UNIL), Lausanne, Switzerland; Lausanne University Hospital (CHUV), Department of Radiology, Lausanne, Switzerland
8
Umehara J, Fukuda N, Konda S, Hirashima M. Validity of Freehand 3-D Ultrasound System in Measurement of the 3-D Surface Shape of Shoulder Muscles. Ultrasound Med Biol 2022;48:1966-1976. [PMID: 35831210] [DOI: 10.1016/j.ultrasmedbio.2022.06.001]
Abstract
The freehand 3-D ultrasound (3DUS) system is a promising technique for accurately assessing muscle morphology. However, its accuracy has been validated mainly in terms of volume, by examining lower limb muscles. This study aimed to validate 3DUS measurements of 3-D surface shape and volume by comparing them with magnetic resonance imaging (MRI) measurements, focusing on the shoulder muscles while ensuring reproducibility of participant posture. The supraspinatus, infraspinatus and posterior deltoid muscles of 10 healthy men were scanned using 3DUS and MRI while secured by an immobilization support customized for each participant. A 3-D surface model of each muscle was created from the 3DUS and MRI data, and the agreement between them was assessed. For muscle volume, the mean difference between the two models was within -0.51 cm³. For the 3-D surface shape, the distances between the closest points of the two models and the Dice similarity coefficient were calculated. The median surface distance was less than 1.12 mm and the Dice similarity coefficient was larger than 0.85. These results suggest that, provided this level of error is acceptable, 3DUS can be used as an alternative to MRI for measuring volume and surface shape, even for the shoulder muscles.
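The two agreement metrics used above are standard and easy to sketch: the Dice similarity coefficient on overlapping voxel masks, and the median of closest-point distances between the two surfaces. The helper names below are hypothetical; the paper's mesh processing is not reproduced:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean voxel masks:
    2*|A intersect B| / (|A| + |B|), 1.0 for identical masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def median_surface_distance(pts_a, pts_b):
    """Median closest-point distance from surface A to surface B.
    Brute-force all-pairs distances; adequate for small point clouds."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(np.median(d.min(axis=1)))
```

Note the two metrics are complementary: Dice reflects volumetric overlap, while the surface distance exposes local shape deviations that overlap measures can hide.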
Affiliation(s)
- Jun Umehara
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan; Japan Society for the Promotion of Science, Chiyoda-ku, Tokyo, Japan; Human Health Sciences, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Norio Fukuda
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan
- Shoji Konda
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan; Department of Health and Sport Sciences, Graduate School of Medicine, Osaka University, Toyonaka, Osaka, Japan
- Masaya Hirashima
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
9
Foiret J, Cai X, Bendjador H, Park EY, Kamaya A, Ferrara KW. Improving plane wave ultrasound imaging through real-time beamformation across multiple arrays. Sci Rep 2022;12:13386. [PMID: 35927389] [PMCID: PMC9352764] [DOI: 10.1038/s41598-022-16961-2]
Abstract
Ultrasound imaging is a widely used diagnostic tool but has limitations in imaging deep lesions or obese patients, where the large depth-to-aperture-size ratio (f-number) reduces image quality. Reducing the f-number can improve image quality, and in this work we combined three commercial arrays to create a large imaging aperture of 100 mm and 384 elements. To maintain the frame rate given the large number of elements, plane wave imaging was implemented with all three arrays transmitting a coherent wavefront. On wire targets at a depth of 100 mm, the lateral resolution is significantly improved: it was 1.27 mm with one array (1/3 of the aperture) and 0.37 mm with the full aperture. After creating virtual receiving elements to fill the inter-array gaps, an autoregressive filter reduced the grating lobes originating from those gaps by 5.2 dB. On a calibrated commercial phantom, the extended field of view and improved spatial resolution were verified. The large aperture also facilitates aberration correction using a singular value decomposition-based beamformer. Finally, after approval by the Stanford Institutional Review Board, the three-array configuration was used to image the liver of a volunteer, validating the potential for enhanced resolution.
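The f-number argument can be made concrete with the standard diffraction-limit rule of thumb: lateral resolution scales as wavelength times f-number, so tripling the aperture at fixed depth roughly triples the resolution. The sketch below uses this textbook approximation with hypothetical parameters (the transmit frequency is not taken from the paper):

```python
def lateral_resolution_mm(freq_mhz, depth_mm, aperture_mm, c_mps=1540.0):
    """Diffraction-limited lateral resolution ~ wavelength * f-number,
    where f-number = depth / aperture and wavelength = c / f.
    Assumes the nominal soft-tissue sound speed c = 1540 m/s."""
    wavelength_mm = c_mps / (freq_mhz * 1e6) * 1e3   # metres -> mm
    f_number = depth_mm / aperture_mm
    return wavelength_mm * f_number
```

For example, at an assumed 5 MHz and 100 mm depth, the rule predicts about 0.31 mm with the full 100 mm aperture versus roughly three times worse with one-third of it, consistent in spirit with the measured 1.27 mm to 0.37 mm improvement reported above.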
Affiliation(s)
- Xiran Cai
- Stanford University, Palo Alto, CA, USA
10
A novel ultrasound probe calibration method for multimodal image guidance of needle placement in cervical cancer brachytherapy. Phys Med 2022;100:81-89. [PMID: 35759943] [DOI: 10.1016/j.ejmp.2022.06.009]
Abstract
PURPOSE Interstitial needle placement is a critical component of combined intracavitary/interstitial (IC/IS) brachytherapy (BT). To ensure precise placement of interstitial needles, we propose a novel ultrasound (US) probe calibration method to accurately register the US image to the magnetic resonance imaging (MRI) image and provide multimodal image guidance for needle placement. METHODS A wire-based calibration phantom combined with a stylus was developed for calibrating the US probe. The calibration phantom helps to quickly align the imaging plane of the US probe with the fiducial points to obtain US images of those points. The coordinates of the fiducial points in US images were located automatically by feature-extraction algorithms and further corrected by the proposed correction method. Structures on both sides of the calibration phantom were designed to accurately obtain the coordinates of the fiducial points relative to the tracking device. Marker validation and a pelvic phantom study were performed to evaluate the accuracy of the proposed calibration method. RESULTS In the marker validation, the US probe calibration method with the corrected transformation achieved a registration accuracy of 0.694 ± 0.014 mm, versus 0.746 ± 0.018 mm without correction. In the pelvic phantom study, the needle-tip difference was 1.096 ± 0.225 mm and the trajectory difference was 1.416 ± 0.284 degrees. CONCLUSION The proposed US probe calibration method helps achieve more accurate multimodality image guidance for needle placement.
11
Chen S, Li Z, Lin Y, Wang F, Cao Q. Automatic ultrasound scanning robotic system with optical waveguide-based force measurement. Int J Comput Assist Radiol Surg 2021;16:1015-1025. [PMID: 33939078] [DOI: 10.1007/s11548-021-02385-2]
Abstract
PURPOSE Three-dimensional (3D) ultrasound (US) imaging realized by continuously scanning a region is of great value for medical diagnosis and robot-assisted needle insertion. During scanning, the contact force and posture between the probe and the patient's skin are crucial factors that determine the quality of US imaging. We propose a robotic system for automatic scanning of curved surfaces with a stable contact force and vertical contact posture (the probe parallel to the surface normal at the contact point). METHODS A 6-DOF robotic arm holds and drives a two-dimensional (2D) US probe to complete automatic scanning. A path-planning strategy is proposed to automatically generate a scan path covering the selected area. We also developed a novel force-measuring device based on optical waveguides to measure the distributed contact force and contact posture. Based on the measured force and posture, the robotic arm automatically adjusts the position and orientation of the probe and maintains a stable contact force and vertical contact posture at each scan point. RESULTS The novel force-measuring device is easy to fabricate, integrates with the probe, and can measure the force distributed over the contact surface and estimate the contact posture. Experimental results from automatic scanning of a US phantom and parts of the human body demonstrate that the proposed system performs well in automatically scanning curved surfaces, maintaining a stable contact force and vertical contact posture, and producing a good-quality 3D US volume. CONCLUSION An automatic US scanning robotic system with an optical waveguide-based force-measuring device was developed and tested successfully. Experimental results demonstrated the feasibility of the proposed system for scanning the human body.
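The force-regulation loop can be illustrated with a toy proportional controller that nudges the probe along its axis toward a target contact force. The gain, the linear spring model of tissue used in the test, and the function name are all hypothetical; the real system also corrects orientation from the distributed force map:

```python
def adjust_probe_depth(depth_m, measured_force_n, target_force_n, gain=0.0005):
    """One proportional-control step: push the probe deeper (in metres)
    when the contact force is below target, retract when above."""
    return depth_m + gain * (target_force_n - measured_force_n)
```

Iterating this step against any roughly spring-like contact (force increasing with indentation) converges to the target force, which is the behaviour the abstract describes as maintaining a stable contact force at each scan point.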
Affiliation(s)
- Shihang Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Zhaojun Li
- Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Yanping Lin
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Fang Wang
- Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qixin Cao
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| |
12
Groves LA, VanBerlo B, Peters TM, Chen ECS. Deep learning approach for automatic out-of-plane needle localisation for semi-automatic ultrasound probe calibration. Healthc Technol Lett 2019; 6:204-209. [PMID: 32038858 PMCID: PMC6952243 DOI: 10.1049/htl.2019.0075] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 10/02/2019] [Indexed: 12/25/2022] Open
Abstract
The authors present a deep learning algorithm for the automatic centroid localisation of out-of-plane US needle reflections to produce a semi-automatic ultrasound (US) probe calibration algorithm. A convolutional neural network was trained on a dataset of 3825 images at a 6 cm imaging depth to predict the position of the centroid of a needle reflection. Applying the automatic centroid localisation algorithm to a test set of 614 annotated images produced a root-mean-squared error of 0.62 and 0.74 mm (6.08 and 7.62 pixels) in the axial and lateral directions, respectively. The mean absolute errors associated with the test set were 0.50 ± 0.40 mm and 0.51 ± 0.54 mm (4.9 ± 3.96 pixels and 5.24 ± 5.52 pixels) for the axial and lateral directions, respectively. The trained model was able to produce visually validated US probe calibrations at imaging depths in the range of 4–8 cm, despite being trained solely at 6 cm. This work has automated the pixel localisation required for the guided-US calibration algorithm, producing a semi-automatic implementation available open-source through 3D Slicer. The automatic needle centroid localisation improves the usability of the algorithm and has the potential to decrease the fiducial localisation and target registration errors associated with the guided-US calibration method.
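The error statistics quoted above (per-axis RMSE, MAE, and its spread) can be computed from predicted versus annotated centroid coordinates as follows. This is a generic sketch; the function name and the pixel-spacing argument are illustrative, not the paper's code or calibrated pixel size.

```python
import numpy as np

def centroid_errors(pred_px, true_px, mm_per_px):
    """Axial/lateral RMSE and MAE between predicted and annotated centroids.

    pred_px, true_px : (N, 2) pixel coordinates, columns = (axial, lateral)
    mm_per_px        : scalar or (2,) pixel spacing in mm
    """
    diff_mm = (np.asarray(pred_px) - np.asarray(true_px)) * mm_per_px
    rmse = np.sqrt(np.mean(diff_mm ** 2, axis=0))   # per-axis RMSE [mm]
    mae = np.mean(np.abs(diff_mm), axis=0)          # per-axis MAE  [mm]
    mae_std = np.std(np.abs(diff_mm), axis=0)       # spread of absolute error
    return rmse, mae, mae_std
```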
Affiliation(s)
- Leah A Groves
- School of Biomedical Engineering, University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, University of Western Ontario, London, Ontario, Canada
- Blake VanBerlo
- Schulich School of Medicine, University of Western Ontario, London, Ontario, Canada
- Terry M Peters
- School of Biomedical Engineering, University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, University of Western Ontario, London, Ontario, Canada; Medical Biophysics, University of Western Ontario, London, Ontario, Canada
- Elvis C S Chen
- School of Biomedical Engineering, University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, University of Western Ontario, London, Ontario, Canada; Medical Biophysics, University of Western Ontario, London, Ontario, Canada
13
Peralta L, Gomez A, Luan Y, Kim BH, Hajnal JV, Eckersley RJ. Coherent Multi-Transducer Ultrasound Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2019; 66:1316-1330. [PMID: 31180847 PMCID: PMC7115943 DOI: 10.1109/tuffc.2019.2921103] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
This work extends the effective aperture size by coherently compounding the radio-frequency data received by multiple transducers. As a result, it is possible to obtain an improved image with enhanced resolution, an extended field of view (FoV), and high acquisition frame rates. A framework is developed in which an ultrasound imaging system consisting of N synchronized matrix arrays, each with a partly shared FoV, takes turns transmitting plane waves (PWs). Only one transducer transmits at a time while all N transducers receive simultaneously. The subwavelength localization accuracy required to combine information from multiple transducers is achieved without the use of any external tracking device. The method developed in this study is based on the backscattered echoes received by the same transducer from a targeted scatterer point in the medium insonated by the system's multiple ultrasound probes. The current transducer locations, along with the speed of sound in the medium, are deduced by maximizing the cross-correlation between these echoes. The method is demonstrated experimentally in 2-D for two linear arrays using point targets and anechoic lesion phantoms. The first demonstration of a freehand experiment is also shown. Results demonstrate that the coherent multi-transducer ultrasound imaging method has the potential to improve ultrasound image quality, resolution, and target detectability. Compared with coherent PW compounding using a single probe, lateral resolution improved from 1.56 to 0.71 mm in the coherent multi-transducer imaging method without sacrificing the acquisition frame rate (5350 Hz).
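The key step above, recovering relative transducer geometry by maximizing the cross-correlation between echoes of a common scatterer, reduces in its simplest 1-D form to estimating a time delay. A hedged sketch with synthetic signals follows; the paper's actual optimization runs over full probe poses and the speed of sound, not a single integer lag.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Return the integer-sample lag that best aligns sig_b to sig_a,
    found by maximizing their cross-correlation."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    # With mode="full", index len(sig_b)-1 corresponds to zero lag.
    return int(np.argmax(xcorr)) - (len(sig_b) - 1)

# Synthetic example: the same echo observed with a 7-sample delay.
rng = np.random.default_rng(0)
echo = rng.standard_normal(64)
sig_a = np.concatenate([np.zeros(7), echo])  # delayed observation
sig_b = np.concatenate([echo, np.zeros(7)])  # reference observation
lag = estimate_delay(sig_a, sig_b)
```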
14
Cenni F, Monari D, Schless SH, Aertbeliën E, Desloovere K, Bruyninckx H. Efficient image based method using water-filled balloons for improving probe spatial calibration in 3D freehand ultrasonography. ULTRASONICS 2019; 94:124-130. [PMID: 30558809 DOI: 10.1016/j.ultras.2018.11.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Revised: 11/26/2018] [Accepted: 11/26/2018] [Indexed: 06/09/2023]
Abstract
Ultrasound (US) probe spatial calibration is a key prerequisite for the 3D freehand US technique. Several methods have been proposed for achieving an accurate and precise calibration, although these methods still require specialised equipment, which is often not available in research or clinical facilities. Therefore, the present investigation aimed to propose an efficient US probe calibration method that is accessible in terms of cost, easy to apply, and capable of achieving results suitable for clinical applications. Data acquisition was carried out by performing two perpendicular US sweeps over water-filled balloon phantoms. Data analysis was carried out by computing similarity measures between 2D images from the first sweep and the corresponding images of the 3D reconstruction of the second sweep. These measures were maximized using the Nelder-Mead algorithm to find the optimal solution for the calibration parameters. The calibration results were evaluated in terms of accuracy and precision by comparing known phantom geometries with those extracted from the US images. The accuracy and precision after applying the calibration method were improved. Using the parameters obtained from the plane phantom method to initialize the calibration parameters, the accuracy and precision in the best scenario were 0.4 mm and 1.5 mm, respectively. These results were in line with methods requiring specialised equipment. However, the applied method was unable to consistently produce this level of accuracy and precision. The calibration parameters were also tested in a musculoskeletal application, revealing sufficient matching of the relevant anatomical features when multiple US sweeps are combined in a 3D reconstruction. To improve the current results and increase the reproducibility of this research, the developed software is made available.
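The core of the optimization above is a similarity measure between a 2D image and the corresponding reslice of the 3D reconstruction, maximized over the calibration parameters (the paper uses Nelder-Mead). A minimal sketch of a normalized cross-correlation similarity term follows, with a grid search standing in for the optimizer; the single-offset parameterization is an illustrative simplification of the full calibration parameter set.

```python
import numpy as np

def ncc(img_a, img_b):
    """Normalized cross-correlation between two equally sized images."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

def best_offset(live_img, volume_slices, candidates):
    """Pick the candidate calibration offset whose predicted reslice best
    matches the live image (grid search standing in for Nelder-Mead)."""
    scores = [ncc(live_img, volume_slices[c]) for c in candidates]
    return candidates[int(np.argmax(scores))]
```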
Affiliation(s)
- Francesco Cenni
- KU Leuven, Department of Movement Sciences, Tervuursevest 101, 3001 Leuven, Belgium; Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium
- Davide Monari
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
- Simon-Henri Schless
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Rehabilitation Sciences, Tervuursevest 101, 3001 Leuven, Belgium
- Erwin Aertbeliën
- KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
- Kaat Desloovere
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; KU Leuven, Department of Rehabilitation Sciences, Tervuursevest 101, 3001 Leuven, Belgium
- Herman Bruyninckx
- KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300b, 3001 Leuven, Belgium
15
Esposito M, Hennersperger C, Gobl R, Demaret L, Storath M, Navab N, Baust M, Weinmann A. Total Variation Regularization of Pose Signals with an Application to 3D Freehand Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2245-2258. [PMID: 30762538 DOI: 10.1109/tmi.2019.2898480] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Three-dimensional freehand imaging techniques are gaining wider adoption due to their flexibility and cost efficiency. Typical examples of such a combination of a tracking system with an imaging device are freehand SPECT or freehand 3D ultrasound. However, the quality of the resulting image data is heavily dependent on the skill of the human operator and on the level of noise of the tracking data. The latter aspect can introduce blur or strong artifacts, which can significantly hamper the interpretation of image data. Unfortunately, the most commonly used tracking systems to date, i.e. optical and electromagnetic, present a trade-off between invading the surgeon's workspace (due to line-of-sight requirements) and higher levels of noise and sensitivity due to the interference of surrounding metallic objects. In this work, we propose a novel approach for total variation regularization of data from tracking systems (which we term pose signals) based on a variational formulation in the manifold of Euclidean transformations. The performance of the proposed approach was evaluated using synthetic data as well as real ultrasound sweeps executed on both a Lego phantom and human anatomy, showing significant improvement in terms of tracking data quality and compounded ultrasound images. Source code can be found at https://github.com/IFL-CAMP/pose_regularization.
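For the translational part of a pose signal, the variational formulation reduces to a standard 1-D total variation denoising objective, E(x) = Σᵢ‖xᵢ − yᵢ‖² + λ Σᵢ‖xᵢ₊₁ − xᵢ‖. The paper works on the full manifold of Euclidean transformations; this Euclidean-translation-only sketch and the λ value are simplifications for illustration.

```python
import numpy as np

def tv_energy(x, y, lam=1.0):
    """TV-regularized objective for a denoised trajectory x given
    noisy tracked positions y, both of shape (T, 3)."""
    data_term = np.sum((x - y) ** 2)                                 # fidelity to tracking data
    tv_term = np.sum(np.linalg.norm(np.diff(x, axis=0), axis=1))     # total variation of the path
    return data_term + lam * tv_term
```

Minimizing this objective suppresses tracking jitter while, unlike quadratic smoothing, preserving sharp motion onsets in the sweep.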
16
Shen J, Zemiti N, Dillenseger JL, Poignet P. Fast And Simple Automatic 3D Ultrasound Probe Calibration Based On 3D Printed Phantom And An Untracked Marker. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:878-882. [PMID: 30440531 DOI: 10.1109/embc.2018.8512406] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Tracking the pose of an ultrasound (US) probe is essential for an intraoperative US-based navigation system. Tracking requires mounting a marker on the US probe and calibrating the probe. The goal of US probe calibration is to determine the rigid transformation between the coordinate system (CS) of the image and the CS of the marker mounted on the probe. We present a fast and automatic calibration method based on a 3D printed phantom and an untracked marker for three-dimensional (3D) US probe calibration. To simplify conventional calibration procedures, which use and track at least two markers, we used only one marker and did not track it during the calibration process. Our automatic calibration method is fast and simple and does not require any experience from the user. The performance of our calibration method was evaluated by point reconstruction tests. The root mean square (RMS) of the point reconstruction errors was 1.39 mm.
17
Xiao G, Bonmati E, Thompson S, Evans J, Hipwell J, Nikitichev D, Gurusamy K, Ourselin S, Hawkes DJ, Davidson B, Clarkson MJ. Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system. Med Phys 2018; 45:5094-5104. [PMID: 30247765 PMCID: PMC6282846 DOI: 10.1002/mp.13210] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Revised: 09/07/2018] [Accepted: 09/07/2018] [Indexed: 11/23/2022] Open
Abstract
PURPOSE In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system comprised of an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. Also, the optical trackers demonstrate better consistency of orientation measurement within the test volume. However, their accuracy of measuring relative positions decreases significantly with longer distances whereas the EM tracker's performance is stable; at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, and it is 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, and it is 1.117 mm for the EM tracker. 
Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
Affiliation(s)
- Guofang Xiao
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati
- Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Stephen Thompson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Joe Evans
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- John Hipwell
- Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniil Nikitichev
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- David J. Hawkes
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Brian Davidson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Science, University College London, London, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK; Center for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
18
Lediju Bell MA, Shubert J. Photoacoustic-based visual servoing of a needle tip. Sci Rep 2018; 8:15519. [PMID: 30341371 PMCID: PMC6195562 DOI: 10.1038/s41598-018-33931-9] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Accepted: 10/08/2018] [Indexed: 12/15/2022] Open
Abstract
In intraoperative settings, the presence of acoustic clutter and reflection artifacts from metallic surgical tools often reduces the effectiveness of ultrasound imaging and complicates the localization of surgical tool tips. We propose an alternative approach for tool tracking and navigation in these challenging acoustic environments by augmenting ultrasound systems with a light source (to perform photoacoustic imaging) and a robot (to autonomously and robustly follow a surgical tool regardless of the tissue medium). The robotically controlled ultrasound probe continuously visualizes the location of the tool tip by segmenting and tracking photoacoustic signals generated from an optical fiber inside the tool. System validation in the presence of fat, muscle, brain, skull, and liver tissue with and without the presence of an additional clutter layer resulted in mean signal tracking errors <2 mm, mean probe centering errors <1 mm, and successful recovery from ultrasound perturbations, representing either patient motion or switching from photoacoustic images to ultrasound images to search for a target of interest. A detailed analysis of channel SNR in controlled experiments with and without significant acoustic clutter revealed that the detection of a needle tip is possible with photoacoustic imaging, particularly in cases where ultrasound imaging traditionally fails. Results show promise for guiding surgeries and procedures in acoustically challenging environments with this novel robotic and photoacoustic system combination.
Affiliation(s)
- Muyinatu A Lediju Bell
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, MD, 21218, USA; Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, 21218, USA; Johns Hopkins University, Department of Computer Science, Baltimore, MD, 21218, USA
- Joshua Shubert
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, MD, 21218, USA
19
Abstract
Ultrasound is a real-time, non-radiation-based imaging modality with the ability to acquire two-dimensional (2D) and three-dimensional (3D) data. Due to these capabilities, research has been carried out to incorporate it as an intraoperative imaging modality for various orthopedic surgery procedures. However, high levels of noise, various imaging artifacts, and bone surfaces appearing blurred and several millimeters thick have prohibited the widespread use of ultrasound as a standard-of-care imaging modality in orthopedics. In this chapter, we provide a detailed overview of numerous applications of 3D ultrasound in the domain of orthopedic surgery. Specifically, we discuss the advantages and disadvantages of methods proposed for segmentation and enhancement of bone ultrasound data and the successful application of these methods in the clinical domain. Finally, a number of challenges are identified that need to be overcome for ultrasound to become a preferred imaging modality in orthopedics.
20
FCN-based approach for the automatic segmentation of bone surfaces in ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:1707-1716. [DOI: 10.1007/s11548-018-1856-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 09/03/2018] [Indexed: 01/17/2023]
21
Ameri G, Bainbridge D, Peters TM, Chen ECS. Quantitative Analysis of Needle Navigation under Ultrasound Guidance in a Simulated Central Venous Line Procedure. ULTRASOUND IN MEDICINE & BIOLOGY 2018; 44:1891-1900. [PMID: 29858126 DOI: 10.1016/j.ultrasmedbio.2018.05.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Revised: 04/20/2018] [Accepted: 05/01/2018] [Indexed: 06/08/2023]
Abstract
Complications in ultrasound-guided central line insertions are associated with the expertise level of the operator. However, a lack of standards for teaching, training and evaluation of ultrasound guidance results in various levels of competency during training. To address such shortcomings, there has been a paradigm shift in medical education toward competency-based training, promoting the use of simulators and quantitative skills assessment. It is therefore necessary to develop reliable quantitative metrics to establish standards for the attainment and maintenance of competence. This work identifies such a metric for simulated central line procedures. The distance between the needle tip and ultrasound image plane was quantified as a metric of efficacy in ultrasound guidance implementation. In a simulated procedure, performed by experienced physicians, this distance was significantly greater in unsuccessful procedures (p = 0.04). The use of this metric has the potential to enhance the teaching, training and skills assessment of ultrasound-guided central line insertions.
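The metric identified in this abstract, the distance between the tracked needle tip and the US image plane, is a point-to-plane distance: with a point p0 on the plane and unit normal n, d = |n · (tip − p0)|. A minimal sketch follows; the coordinate frames and tracking interface are omitted, and the function name is illustrative.

```python
import numpy as np

def tip_to_plane_distance(tip, plane_point, plane_normal):
    """Perpendicular distance from the needle tip to the US image plane."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)  # tolerate a non-unit normal
    offset = np.asarray(tip, dtype=float) - np.asarray(plane_point, dtype=float)
    return float(abs(np.dot(offset, n)))
```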
Affiliation(s)
- Golafsoun Ameri
- Biomedical Engineering Graduate Program, Western University, London, Ontario, Canada; Robarts Research Institute, London, Ontario, Canada
- Daniel Bainbridge
- Department of Anesthesiology and Perioperative Medicine, University Hospital-London Health Sciences Centre, Western University, London, Ontario, Canada
- Terry M Peters
- Biomedical Engineering Graduate Program, Western University, London, Ontario, Canada; Robarts Research Institute, London, Ontario, Canada
- Elvis C S Chen
- Biomedical Engineering Graduate Program, Western University, London, Ontario, Canada; Robarts Research Institute, London, Ontario, Canada
22
Zhou XY, Yang GZ, Lee SL. A real-time and registration-free framework for dynamic shape instantiation. Med Image Anal 2018; 44:86-97. [DOI: 10.1016/j.media.2017.11.009] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2017] [Revised: 10/16/2017] [Accepted: 11/22/2017] [Indexed: 11/16/2022]
23
Toews M, Wells WM. Phantomless Auto-Calibration and Online Calibration Assessment for a Tracked Freehand 2-D Ultrasound Probe. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:262-272. [PMID: 28910761 PMCID: PMC5808952 DOI: 10.1109/tmi.2017.2750978] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper presents a method for automatically calibrating and assessing the calibration quality of an externally tracked 2-D ultrasound (US) probe by scanning arbitrary, natural tissues, as opposed to a specialized calibration phantom as is the typical practice. A generative topic model quantifies the posterior probability of calibration parameters conditioned on local 2-D image features arising from a generic underlying substrate. Auto-calibration is achieved by identifying the maximum a posteriori image-to-probe transform, and calibration quality is assessed online in terms of the posterior probability of the current image-to-probe transform. Both are closely linked to the 3-D point reconstruction error (PRE) in aligning feature observations arising from the same underlying physical structure in different US images. The method is of practical importance in that it operates simply by scanning arbitrary textured echogenic structures, e.g., in-vivo tissues in the context of US-guided procedures, without requiring specialized calibration procedures or equipment. Observed data take the form of local scale-invariant features that can be extracted and fit to the model in near real-time. Experiments demonstrate the method on a public data set of in vivo human brain scans of 14 unique subjects acquired in the context of neurosurgery. Online calibration assessment can be performed at approximately 3 Hz on the US images. Auto-calibration achieves an internal mean PRE of 1.2 mm and a discrepancy of [2 mm, 6 mm] compared with calibration via a standard phantom-based method.
24
Park S, Jang J, Kim J, Kim YS, Kim C. Real-time Triple-modal Photoacoustic, Ultrasound, and Magnetic Resonance Fusion Imaging of Humans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1912-1921. [PMID: 28436857 DOI: 10.1109/tmi.2017.2696038] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Imaging that fuses multiple modes has become a useful tool for diagnosis and therapeutic monitoring. As a next step, real-time fusion imaging has attracted interest as a tool to guide surgery. One widespread fusion imaging technique in surgery combines real-time ultrasound (US) imaging and pre-acquired magnetic resonance (MR) imaging. However, US imaging visualizes only structural information with relatively low contrast. Here, we present a photoacoustic (PA), US, and MR fusion imaging system that integrates a clinical PA and US imaging system with an optical-tracking-based navigation sub-system. Through co-registration of pre-acquired MR and real-time PA/US images, overlaid PA, US, and MR images can be concurrently displayed in real time. We successfully acquired fusion images of a phantom and a blood vessel in a human forearm. This fusion imaging can complementarily delineate the morphological and vascular structure of tissues with good contrast and sensitivity, has a well-established user interface, and can be flexibly integrated into clinical environments. As a novel fusion imaging approach, the proposed triple-mode imaging can provide comprehensive image guidance in real time and can potentially assist various surgeries.
25
Pourtaherian A, Scholten HJ, Kusters L, Zinger S, Mihajlovic N, Kolen AF, Zuo F, Ng GC, Korsten HHM, de With PHN. Medical Instrument Detection in 3-Dimensional Ultrasound Data Volumes. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1664-1675. [PMID: 28410101 DOI: 10.1109/tmi.2017.2692302] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Ultrasound-guided medical interventions are broadly applied in diagnostics and therapy, e.g., regional anesthesia or ablation. A guided intervention using 2-D ultrasound is challenging due to the poor instrument visibility, limited field of view, and the multi-fold coordination of the medical instrument and ultrasound plane. Recent 3-D ultrasound transducers can improve the quality of the image-guided intervention if an automated detection of the needle is used. In this paper, we present a novel method for detecting medical instruments in 3-D ultrasound data that is solely based on image processing techniques and validated on various ex vivo and in vivo data sets. In the proposed procedure, the physician is placing the 3-D transducer at the desired position, and the image processing will automatically detect the best instrument view, so that the physician can entirely focus on the intervention. Our method is based on the classification of instrument voxels using volumetric structure directions and robust approximation of the primary tool axis. A novel normalization method is proposed for the shape and intensity consistency of instruments to improve the detection. Moreover, a novel 3-D Gabor wavelet transformation is introduced and optimally designed for revealing the instrument voxels in the volume, while remaining generic to several medical instruments and transducer types. Experiments on diverse data sets, including in vivo data from patients, show that for a given transducer and an instrument type, high detection accuracies are achieved with position errors smaller than the instrument diameter in the 0.5-1.5-mm range on average.
26
Muradore R, Fiorini P, Akgun G, Barkana DE, Bonfe M, Boriero F, Caprara A, De Rossi G, Dodi R, Elle OJ, Ferraguti F, Gasperotti L, Gassert R, Mathiassen K, Handini D, Lambercy O, Li L, Kruusmaa M, Manurung AO, Meruzzi G, Nguyen HQP, Preda N, Riolfo G, Ristolainen A, Sanna A, Secchi C, Torsello M, Yantac AE. Development of a Cognitive Robotic System for Simple Surgical Tasks. INT J ADV ROBOT SYST 2017. [DOI: 10.5772/60137] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Affiliation(s)
- Paolo Fiorini
- Department of Computer Science, University of Verona, Italy
- Gokhan Akgun
- Cognitive Science Department, Yeditepe University, Istanbul, Turkey
- Duygun Erol Barkana
- Electrical and Electronics Engineering Department, Yeditepe University, Istanbul, Turkey
- Andrea Caprara
- Department of Legal Studies, School of Law, University of Verona, Italy
- Riccardo Dodi
- e-Services for Life and Health Research Department, Fondazione Centro San Raffaele, Italy
- Ole Jakob Elle
- Department of Informatics, University of Oslo, and The Intervention Center, Oslo University Hospital, Oslo, Norway
- Federica Ferraguti
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Italy
- Roger Gassert
- Rehabilitation Engineering Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Switzerland
- Kim Mathiassen
- Department of Informatics, University of Oslo, and The Intervention Center, Oslo University Hospital, Oslo, Norway
- Dilla Handini
- The Intervention Center, Oslo University Hospital, Rikshospitalet, Norway
- Olivier Lambercy
- Rehabilitation Engineering Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Switzerland
- Lin Li
- Tallinn University of Technology, Faculty of Information Technology, Centre for Biorobotics, Tallinn, Estonia
- Maarja Kruusmaa
- Tallinn University of Technology, Faculty of Information Technology, Centre for Biorobotics, Tallinn, Estonia
- Auralius Oberman Manurung
- Rehabilitation Engineering Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Switzerland
- Giovanni Meruzzi
- Department of Legal Studies, School of Law, University of Verona, Italy
- Nicola Preda
- Engineering Department, University of Ferrara, Italy
- Gianluca Riolfo
- Department of Legal Studies, School of Law, University of Verona, Italy
- Asko Ristolainen
- Tallinn University of Technology, Faculty of Information Technology, Centre for Biorobotics, Tallinn, Estonia
- Alberto Sanna
- e-Services for Life and Health Research Department, Fondazione Centro San Raffaele, Italy
- Cristian Secchi
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Italy
- Marco Torsello
- Department of Legal Studies, School of Law, University of Verona, Italy
27
Ameri G, Baxter JSH, McLeod AJ, Peters TM, Chen ECS. Effects of line fiducial parameters and beamforming on ultrasound calibration. J Med Imaging (Bellingham) 2017; 4:015002. [PMID: 28331886 DOI: 10.1117/1.jmi.4.1.015002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2016] [Accepted: 02/08/2017] [Indexed: 11/14/2022] Open
Abstract
Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometric and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes of different diameters. It was shown that these properties have a significant effect on calibration error, and that this effect varies with the US beamforming technique. These results have important implications for calibration procedures and their feasibility in the context of image-guided procedures.
Affiliation(s)
- Golafsoun Ameri
- Robarts Research Institute, London, Ontario, Canada; Western University, Biomedical Engineering Graduate Program, London, Ontario, Canada
- John S H Baxter
- Robarts Research Institute, London, Ontario, Canada; Western University, Biomedical Engineering Graduate Program, London, Ontario, Canada
- A Jonathan McLeod
- Robarts Research Institute, London, Ontario, Canada; Western University, Biomedical Engineering Graduate Program, London, Ontario, Canada
- Terry M Peters
- Robarts Research Institute, London, Ontario, Canada; Western University, Biomedical Engineering Graduate Program, London, Ontario, Canada
- Elvis C S Chen
- Robarts Research Institute, London, Ontario, Canada; Western University, Biomedical Engineering Graduate Program, London, Ontario, Canada
28
On the reproducibility of expert-operated and robotic ultrasound acquisitions. Int J Comput Assist Radiol Surg 2017; 12:1003-1011. [DOI: 10.1007/s11548-017-1561-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 03/07/2017] [Indexed: 10/19/2022]
29
Chung SW, Shih CC, Huang CC. Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology. ULTRASONICS 2017; 74:11-20. [PMID: 27721196 DOI: 10.1016/j.ultras.2016.09.020] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Revised: 09/13/2016] [Accepted: 09/26/2016] [Indexed: 05/22/2023]
Abstract
Ultrasound imaging has been extensively used for determining the severity of carotid atherosclerotic stenosis. In particular, the morphological characterization of carotid plaques can be performed for risk stratification of patients. However, using 2D ultrasound imaging to detect morphological changes in plaques has several limitations. Because the scan is performed on a single longitudinal cross-section, the selected 2D image can hardly represent the entire morphology and volume of the plaque and vessel lumen. In addition, the precise positions of 2D ultrasound images depend strongly on the radiologist's experience, which makes it difficult to relocate the same imaging planes in serial long-term examinations of anti-atherosclerotic therapies using 2D B-mode images. This has led to the recent development of three-dimensional (3D) ultrasound imaging, which offers improved visualization and quantification of the complex morphologies of carotid plaques. In the present study, a freehand 3D ultrasound imaging technique based on optical motion tracking technology is proposed. Unlike other optical tracking systems, the marker is a small rigid body that is attached to the ultrasound probe and is tracked by eight high-performance digital cameras. The probe positions in 3D space coordinates are then calibrated at spatial and temporal resolutions of 10 μm and 0.01 s, respectively. The image segmentation procedure involves Otsu's and the active contour model algorithms and accurately detects the contours of the carotid arteries. The proposed imaging technique was verified using normal artery and atherosclerotic stenosis phantoms. Human experiments involving freehand scanning of the carotid artery of a volunteer were also performed. The results indicated that, compared with manual segmentation, the lowest percentage errors of the proposed segmentation procedure were 7.8% and 9.1% for the external and internal carotid arteries, respectively. Finally, the effect of hand shaking was compensated using the optical tracking system when reconstructing the 3D image.
Affiliation(s)
- Shao-Wen Chung
- Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
- Cho-Chiang Shih
- Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
- Chih-Chung Huang
- Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
30
Hennersperger C, Fuerst B, Virga S, Zettinig O, Frisch B, Neff T, Navab N. Towards MRI-Based Autonomous Robotic US Acquisitions: A First Feasibility Study. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:538-548. [PMID: 27831861 DOI: 10.1109/tmi.2016.2620723] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured light scanners, the initial planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and 0.97 mm after an online-update of the calibration based on a closed loop registration.
31
Oeri M, Bost W, Tretbar S, Fournelle M. Calibrated Linear Array-Driven Photoacoustic/Ultrasound Tomography. ULTRASOUND IN MEDICINE & BIOLOGY 2016; 42:2697-2707. [PMID: 27523424 DOI: 10.1016/j.ultrasmedbio.2016.06.028] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/29/2016] [Revised: 05/25/2016] [Accepted: 06/29/2016] [Indexed: 05/07/2023]
Abstract
The anisotropic resolution of linear arrays, tools that are widely used in diagnostics, can be overcome by compounding approaches. We investigated the ability of a recently developed calibration and a novel algorithm to determine the actual radial transducer array distance and its misalignment (tilt) with respect to the center of rotation in a 2-D and 3-D tomographic setup. By increasing the time-of-flight accuracy, we force in-phase summation during the reconstruction. Our setup is composed of a linear transducer and a rotation and translation axis enabling multidimensional imaging in ultrasound and photoacoustic mode. Our approach is validated on phantoms and young mice ex vivo. The results indicate that application of the proposed analytical calibration algorithms prevents image artifacts. The spatial resolution achieved was 160 and 250 μm in photoacoustic mode of 2-D and 3-D tomography, respectively.
Affiliation(s)
- Milan Oeri
- Fraunhofer Institute for Biomedical Engineering (IBMT), Medical Ultrasound Group, St. Ingbert, Germany.
- Wolfgang Bost
- Fraunhofer Institute for Biomedical Engineering (IBMT), Medical Ultrasound Group, St. Ingbert, Germany
- Steffen Tretbar
- Fraunhofer Institute for Biomedical Engineering (IBMT), Medical Ultrasound Group, St. Ingbert, Germany
- Marc Fournelle
- Fraunhofer Institute for Biomedical Engineering (IBMT), Medical Ultrasound Group, St. Ingbert, Germany
32
Cenni F, Monari D, Desloovere K, Aertbeliën E, Schless SH, Bruyninckx H. The reliability and validity of a clinical 3D freehand ultrasound system. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2016; 136:179-187. [PMID: 27686714 DOI: 10.1016/j.cmpb.2016.09.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 08/28/2016] [Accepted: 09/02/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Acquiring large anatomical volumes in a feasible manner is useful for clinical decision-making. A relatively new technique called 3D freehand ultrasonography achieves this by combining a conventional 2D ultrasonography system with a motion-tracking system. Currently, a thorough analysis of this technique is lacking, as the analyses depend on the software implementation details and the choice of measurement systems. Therefore, this study starts by making an implementation available in the form of an open-source software library for performing 3D freehand ultrasonography. Following that, reliability and validity analyses of extracted volumes and lengths are carried out using two independent motion-tracking systems. METHODS A PC-based ultrasonography device and two optical motion-tracking systems were used for data acquisition. An in-house software library called Py3DFreeHandUS was developed to reconstruct (off-line) the corresponding data into one 3D data set. Reliability and validity analyses of the entire experimental set-up were performed by estimating the volumes and lengths of ground-truth objects. Ten water-filled balloons and six cross-wires were used. Repeat measurements were also performed by two experienced operators. RESULTS The software library Py3DFreeHandUS is available online, along with the relevant documentation. The reliability analyses showed high intra- and inter-operator intra-class correlation coefficients for both volumes and lengths. The accuracy analysis revealed a discrepancy of around 3% in all cases, which corresponded to 3 ml and 1 mm for volume and length measurements, respectively. Similar results were found for both motion-tracking systems. CONCLUSIONS The analyses of volume and length estimation with 3D freehand ultrasonography demonstrated reliable measurements and suitable performance for applications that do not require sub-mm and sub-ml accuracy.
Affiliation(s)
- Francesco Cenni
- Department of Mechanical Engineering, KU Leuven, Celestijnenlaan 300b, 3001 Leuven, Belgium; Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium.
- Davide Monari
- Department of Mechanical Engineering, KU Leuven, Celestijnenlaan 300b, 3001 Leuven, Belgium; Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium
- Kaat Desloovere
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; Department of Rehabilitation Sciences, KU Leuven, Tervuursevest 101, 3001 Leuven, Belgium
- Erwin Aertbeliën
- Department of Mechanical Engineering, KU Leuven, Celestijnenlaan 300b, 3001 Leuven, Belgium
- Simon-Henri Schless
- Clinical Motion Analysis Laboratory, University Hospital, Pellenberg, Weligerveld 1, 3212 Pellenberg, Belgium; Department of Rehabilitation Sciences, KU Leuven, Tervuursevest 101, 3001 Leuven, Belgium
- Herman Bruyninckx
- Department of Mechanical Engineering, KU Leuven, Celestijnenlaan 300b, 3001 Leuven, Belgium
33
Kojcev R, Fuerst B, Zettinig O, Fotouhi J, Lee SC, Frisch B, Taylor R, Sinibaldi E, Navab N. Dual-robot ultrasound-guided needle placement: closing the planning-imaging-action loop. Int J Comput Assist Radiol Surg 2016; 11:1173-81. [DOI: 10.1007/s11548-016-1408-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 03/31/2016] [Indexed: 10/21/2022]
34
Quantitative Assessment of Variational Surface Reconstruction from Sparse Point Clouds in Freehand 3D Ultrasound Imaging during Image-Guided Tumor Ablation. APPLIED SCIENCES-BASEL 2016. [DOI: 10.3390/app6040114] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
35
Zhang HK, Cheng A, Bottenus N, Guo X, Trahey GE, Boctor EM. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation. J Med Imaging (Bellingham) 2016; 3:027001. [PMID: 27088108 DOI: 10.1117/1.jmi.3.2.027001] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Accepted: 03/18/2016] [Indexed: 11/14/2022] Open
Abstract
Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA-called synthetic tracked aperture ultrasound (STRATUS) imaging-by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated the effect of tracking and ultrasound calibration error to the quality of the final beamformed image through simulation. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality.
Affiliation(s)
- Haichong K Zhang
- Johns Hopkins University , Department of Computer Science, 3400 North Charles Street, Baltimore, Maryland 21218, United States
- Alexis Cheng
- Johns Hopkins University, Department of Computer Science, 3400 North Charles Street, Baltimore, Maryland 21218, United States
- Nick Bottenus
- Duke University, Department of Biomedical Engineering, 101 Science Drive Campus Box 90281, Durham, North Carolina 27708, United States
- Xiaoyu Guo
- Johns Hopkins University, Department of Electrical and Computer Engineering, 3400 North Charles Street, Baltimore, Maryland 21218, United States
- Gregg E Trahey
- Duke University, Department of Biomedical Engineering, 101 Science Drive Campus Box 90281, Durham, North Carolina 27708, United States
- Emad M Boctor
- Johns Hopkins University, Department of Computer Science, 3400 North Charles Street, Baltimore, Maryland 21218, United States; Johns Hopkins University, Department of Electrical and Computer Engineering, 3400 North Charles Street, Baltimore, Maryland 21218, United States; Johns Hopkins University, Department of Radiology, 601 North Caroline Street, Baltimore, Maryland 21287, United States
36
Vasconcelos F, Peebles D, Ourselin S, Stoyanov D. Spatial calibration of a 2D/3D ultrasound using a tracked needle. Int J Comput Assist Radiol Surg 2016; 11:1091-9. [PMID: 27059023 PMCID: PMC4893368 DOI: 10.1007/s11548-016-1392-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 03/17/2016] [Indexed: 11/30/2022]
Abstract
Purpose Spatial calibration between a 2D/3D ultrasound and a pose tracking system requires a complex and time-consuming procedure. Simplifying this procedure without compromising the calibration accuracy is still a challenging problem. Method We propose a new calibration method for both 2D and 3D ultrasound probes that involves scanning an arbitrary region of a tracked needle in different poses. This approach is easier to perform than most alternative methods that require a precise alignment between US scans and a calibration phantom. Results Our calibration method provides an average accuracy of 2.49 mm for a 2D US probe with 107 mm scanning depth, and an average accuracy of 2.39 mm for a 3D US with 107 mm scanning depth. Conclusion Our method proposes a unified calibration framework for 2D and 3D probes using the same phantom object, work-flow, and algorithm. Our method significantly improves the accuracy of needle-based methods for 2D US probes as well as extends its use for 3D US probes.
Affiliation(s)
- Donald Peebles
- Department of Obstetrics and Gynecology, UCL, London, UK
37
Guided ultrasound calibration: where, how, and how many calibration fiducials. Int J Comput Assist Radiol Surg 2016; 11:889-98. [DOI: 10.1007/s11548-016-1390-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Accepted: 03/16/2016] [Indexed: 10/22/2022]
38
Aalamifar F, Khurana R, Cheng A, Guo X, Iordachita I, Boctor EM. Enabling technologies for robot assisted ultrasound tomography. Int J Med Robot 2016; 13. [PMID: 27028676 DOI: 10.1002/rcs.1746] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2015] [Revised: 02/15/2016] [Accepted: 02/16/2016] [Indexed: 11/06/2022]
Abstract
Currently available ultrasound (US) tomography systems typically employ cylindrical transducers designed for a specific organ. In this paper, our focus is on an alternative way of creating US tomographic images that could be used for other anatomies and more general applications. This system consists of two conventional US probes facing each other, where one or several of the transducer elements in one probe act as the transmitter and the rest as receivers. Aligning the two US probes is a challenging task. To address this issue, we propose a robot-assisted US tomography system in which one probe is operated freehand and the other by a robotic arm. In this paper, enabling technologies for this system are described. With the current prototype, a reconstruction precision of 4.12, 1.73, and 2.23 mm for the three calibrations, and an overall alignment repeatability in the range of 5-9 mm, were achieved. Copyright © 2016 John Wiley & Sons, Ltd.
Affiliation(s)
- Fereshteh Aalamifar
- Johns Hopkins University, Electrical & Computer Eng. Dept., 3400 N. Charles St., Baltimore, MD, 21218, USA
- Rishabh Khurana
- Johns Hopkins University, Mechanical Eng. Dept., 3400 N. Charles St., Baltimore, MD, 21218, USA
- Alexis Cheng
- Johns Hopkins University, Dept. of Computer Science, 3400 N. Charles St., Baltimore, MD, 21218, USA
- Xiaoyu Guo
- Johns Hopkins University, Electrical & Computer Eng. Dept., 3400 N. Charles St., Baltimore, MD, 21218, USA
- Iulian Iordachita
- Johns Hopkins University, Mechanical Eng. Dept., 3400 N. Charles St., Baltimore, MD, 21218, USA
- Emad M Boctor
- Johns Hopkins University, Electrical & Computer Eng. Dept., 3400 N. Charles St., Baltimore, MD, 21218, USA; Johns Hopkins University, Dept. of Computer Science, 3400 N. Charles St., Baltimore, MD, 21218, USA; Johns Hopkins University, Dept. of Radiology, 601 North Caroline Street, Baltimore, MD, 21287, USA
39
Schneider C, Nguan C, Rohling R, Salcudean S. Tracked “Pick-Up” Ultrasound for Robot-Assisted Minimally Invasive Surgery. IEEE Trans Biomed Eng 2016; 63:260-8. [DOI: 10.1109/tbme.2015.2453173] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
40
Robot-assisted automatic ultrasound calibration. Int J Comput Assist Radiol Surg 2016; 11:1821-9. [PMID: 26754446 DOI: 10.1007/s11548-015-1341-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2015] [Accepted: 12/23/2015] [Indexed: 10/22/2022]
Abstract
PURPOSE Ultrasound (US) calibration is the process of determining the unknown transformation from a coordinate frame such as the robot's tooltip to the US image frame and is a necessary task for any robotic or tracked US system. US calibration requires submillimeter-range accuracy for most applications, but it is a time-consuming and repetitive task. We provide a new framework for automatic US calibration with robot assistance and without the need for temporal calibration. METHOD US calibration based on an active echo (AE) phantom was previously proposed, and its superiority over conventional cross-wire phantom-based calibration was shown. In this work, we use AE to guide the robotic arm motion through the process of data collection; we combine the capability of the AE point to localize itself in the frame of the US image with the automatic motion of the robotic arm to provide a framework for calibrating the arm to the US image automatically. RESULTS We demonstrated the efficacy of the automated method compared to the manual method through experiments. To highlight the necessity of frequent ultrasound calibration, it was demonstrated that the calibration precision changed from 1.67 to 3.20 mm when the data collection was not repeated after dismounting and remounting the probe holder. In a large data-set experiment, similar reconstruction precision was observed for automatic and manual data collection, while the time required was reduced by 58%. In addition, we compared ten automatic calibrations with ten manual ones, each performed in 15 min, and showed that all of the automatic ones converged when the initial matrix was set to identity, whereas this was not achieved with the manual data sets. Given the same initial matrix, the repeatability of the automatic method was [0.46, 0.34, 0.80, 0.47] mm versus [0.42, 0.51, 0.98, 1.15] mm for the manual method at the four corners of the US image.
CONCLUSIONS The submillimeter accuracy requirement of US calibration makes frequent data collections unavoidable. We proposed an automated calibration setup and showed feasibility by implementing it for a robot tooltip to US image calibration. The automated method showed a similar reconstruction precision as well as repeatability compared to the manual method, while the time consumed for data collection was reduced. The automatic method also reduces the burden of data collection for the user. Thus, the automated method can be a viable solution for applications that require frequent calibrations.
41
State of the Art of Ultrasound-Based Registration in Computer Assisted Orthopedic Interventions. COMPUTATIONAL RADIOLOGY FOR ORTHOPAEDIC INTERVENTIONS 2016. [DOI: 10.1007/978-3-319-23482-3_14] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
42
43
Song Y, Totz J, Thompson S, Johnsen S, Barratt D, Schneider C, Gurusamy K, Davidson B, Ourselin S, Hawkes D, Clarkson MJ. Locally rigid, vessel-based registration for laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2015; 10:1951-61. [PMID: 26092658 PMCID: PMC4642598 DOI: 10.1007/s11548-015-1236-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2015] [Accepted: 05/30/2015] [Indexed: 12/05/2022]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. METHODS We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked, 2D LUS images. Using landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. RESULTS Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration error (TRE) for two in vivo porcine studies was 3.58 and 2.99 mm, respectively. Using the locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. CONCLUSION In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
Affiliation(s)
- Yi Song
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK.
- Johannes Totz
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Steve Thompson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Stian Johnsen
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Dean Barratt
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Brian Davidson
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- David Hawkes
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre For Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
44
Gomez A, de Vecchi A, Jantsch M, Shi W, Pushparajah K, Simpson JM, Smith NP, Rueckert D, Schaeffter T, Penney GP. 4D Blood Flow Reconstruction Over the Entire Ventricle From Wall Motion and Blood Velocity Derived From Ultrasound Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:2298-2308. [PMID: 25955584 PMCID: PMC7115944 DOI: 10.1109/tmi.2015.2428932] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We demonstrate a new method to recover 4D blood flow over the entire ventricle from partial blood velocity measurements using multiple 3D+t colour Doppler images and ventricular wall motion estimated using 3D+t BMode images. We apply our approach to realistic simulated data to ascertain the ability of the method to deal with incomplete data, as typically happens in clinical practice. Experiments using synthetic data show that the use of wall motion improves velocity reconstruction, shows more accurate flow patterns and improves mean accuracy particularly when coverage of the ventricle is poor. The method was applied to patient data from 6 congenital cases, producing results consistent with the simulations. The use of wall motion produced more plausible flow patterns and reduced the reconstruction error in all patients.
45
|
Askeland C, Solberg OV, Bakeng JBL, Reinertsen I, Tangen GA, Hofstad EF, Iversen DH, Våpenstad C, Selbekk T, Langø T, Hernes TAN, Olav Leira H, Unsgård G, Lindseth F. CustusX: an open-source research platform for image-guided therapy. Int J Comput Assist Radiol Surg 2015; 11:505-19. [PMID: 26410841 PMCID: PMC4819973 DOI: 10.1007/s11548-015-1292-0] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Accepted: 08/31/2015] [Indexed: 12/14/2022]
Abstract
Purpose CustusX is an image-guided therapy (IGT) research platform dedicated to intraoperative navigation and ultrasound imaging. In this paper, we present CustusX as a robust, accurate, and extensible platform with full access to data and algorithms and show examples of application in technological and clinical IGT research. Methods CustusX has been developed continuously for more than 15 years based on requirements from clinical and technological researchers within the framework of a well-defined software quality process. The platform was designed as a layered architecture with plugins based on the CTK/OSGi framework, a superbuild that manages dependencies and features supporting the IGT workflow. We describe the use of the system in several different clinical settings and characterize major aspects of the system such as accuracy, frame rate, and latency. Results The validation experiments show a navigation system accuracy of <1.1 mm, a frame rate of 20 fps, and a latency of 285 ms for a typical setup. The current platform is extensible, user-friendly and has a streamlined architecture and quality process. CustusX has successfully been used for IGT research in neurosurgery, laparoscopic surgery, vascular surgery, and bronchoscopy. Conclusions CustusX is now a mature research platform for intraoperative navigation and ultrasound imaging and is ready for use by the IGT research community. CustusX is open-source and freely available at http://www.custusx.org.
Affiliation(s)
- Christian Askeland
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Ole Vegard Solberg
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Ingerid Reinertsen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Geir Arne Tangen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway
- Daniel Høyer Iversen
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Cecilie Våpenstad
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Tormod Selbekk
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Thomas Langø
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Toril A Nagelhus Hernes
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Håkon Olav Leira
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Geirmund Unsgård
- Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
- Frank Lindseth
- Department of Medical Technology, SINTEF Technology and Society, Trondheim, Norway; Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Norwegian National Advisory Unit on Ultrasound and Image-Guided Therapy, St. Olavs Hospital - Trondheim University Hospital, Trondheim, Norway
46
Zhao L, Joubair A, Bigras P, Bonev IA. Metrological Evaluation of a Novel Medical Robot and Its Kinematic Calibration. INT J ADV ROBOT SYST 2015. [DOI: 10.5772/60881] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
In the human lower limbs, vessels twist through a longitudinal 3D space, which makes ultrasound scanning examinations of this area difficult. In this paper, a new medical parallel robot is introduced to effectively diagnose vessel disease in the lower limbs. The robot's position repeatability and accuracy are evaluated, and its accuracy is further improved through a calibration process in which the kinematic parameters are identified using a simple identification approach.
Affiliation(s)
- Longfei Zhao
- Ecole de technologie superieure, Montreal, Quebec, Canada
- Ahmed Joubair
- Ecole de technologie superieure, Montreal, Quebec, Canada
- Pascal Bigras
- Ecole de technologie superieure, Montreal, Quebec, Canada
- Ilian A. Bonev
- Ecole de technologie superieure, Montreal, Quebec, Canada
47
Lasso A, Heffter T, Rankin A, Pinter C, Ungi T, Fichtinger G. PLUS: open-source toolkit for ultrasound-guided intervention systems. IEEE Trans Biomed Eng 2014; 61:2527-37. [PMID: 24833412 DOI: 10.1109/tbme.2014.2322864] [Citation(s) in RCA: 183] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform-PLUS: Public software Library for Ultrasound-to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under a BSD license and can be downloaded from http://www.plustoolkit.org.
48
Hong HG, Nam GP, Lee HC, Park KR, Kim SM. New system for tracking a device for diagnosing scalp skin. SENSORS 2014; 14:6516-34. [PMID: 24721768 PMCID: PMC4029724 DOI: 10.3390/s140406516] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2013] [Revised: 03/13/2014] [Accepted: 03/30/2014] [Indexed: 11/16/2022]
Abstract
In scalp skin examinations, it is difficult to find a previously treated region on a patient's scalp through images captured by a camera attached to a diagnostic device, because the zoom lens on the camera has a small field of view. Thus, doctors manually record the region on a chart or manually mark the region. However, this process is slow and inconveniences the patient. We therefore propose a new system for tracking the diagnostic device for the scalp skin of patients. Our research is novel in four ways. First, our proposed system consists of two cameras to capture the face and the diagnostic device. Second, the user can easily set the position of the camera that captures the diagnostic device by manually moving a frame to which the camera is attached. Third, the positions of the patient's nostrils and eye corners are detected to align the position of his/her head more accurately with the recorded position from previous sessions. Fourth, the position of the diagnostic device is continuously tracked during the examination through images used to detect the position of the color marker attached to the device. Experimental results show that our system achieves higher performance than the conventional method.
Affiliation(s)
- Hyung Gil Hong
- Division of Electronics and Electrical Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea.
- Gi Pyo Nam
- Division of Electronics and Electrical Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea.
- Hyeon Chang Lee
- Division of Electronics and Electrical Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea.
- Sung Min Kim
- Department of Medical Bio Engineering, Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Korea.
49
Schlosser J, Kirmizibayrak C, Shamdasani V, Metz S, Hristov D. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration. Phys Med Biol 2013; 58:7481-96. [PMID: 24099806 DOI: 10.1088/0031-9155/58/21/7481] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Many real-time ultrasound (US)-guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).
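The 'hand-eye' formulation referenced in this abstract amounts to solving AX = XB for the fixed sensor-to-image transform X, given paired rigid motions A_i (from the tracking sensor) and B_i (from image registration). As an illustration only — this is a generic linear least-squares sketch, not the authors' implementation, and all function names here are invented — it can be solved by recovering the rotation from the stacked Kronecker system and the translation from a subsequent least-squares fit:

```python
import numpy as np

def axis_angle(axis, theta):
    """Rotation matrix from an axis and angle (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rigid(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def solve_hand_eye(As, Bs):
    """Solve A_i X = X B_i for the fixed transform X (linear least squares).

    Needs at least two motion pairs with non-parallel rotation axes."""
    # Rotation: R_Ai R_X = R_X R_Bi, i.e. (I (x) R_Ai - R_Bi^T (x) I) vec(R_X) = 0
    # with column-major vec; take the right singular vector of the smallest
    # singular value, then project the reshaped matrix onto SO(3).
    M = np.vstack([np.kron(np.eye(3), A[:3, :3]) - np.kron(B[:3, :3].T, np.eye(3))
                   for A, B in zip(As, Bs)])
    v = np.linalg.svd(M)[2][-1]
    R = v.reshape(3, 3, order="F")
    if np.linalg.det(R) < 0:      # fix the sign ambiguity of the null vector
        R = -R
    U, _, Vt = np.linalg.svd(R)   # orthonormalize (polar factor)
    R = U @ Vt
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    return rigid(R, t)

if __name__ == "__main__":
    # Synthetic check: generate motions from a known X and recover it.
    X_true = rigid(axis_angle([0.2, 1.0, 0.3], 0.9), [10.0, -5.0, 2.0])
    Bs = [rigid(axis_angle([1, 0, 0], 0.8), [1.0, 2.0, 3.0]),
          rigid(axis_angle([0, 1, 0], 1.1), [-2.0, 0.5, 1.0]),
          rigid(axis_angle([1, 1, 0], 0.6), [0.3, -1.0, 2.0])]
    As = [X_true @ B @ np.linalg.inv(X_true) for B in Bs]
    assert np.allclose(solve_hand_eye(As, Bs), X_true, atol=1e-6)
```

The paper's actual contribution layers automation on top of this core: data rejection from sensor displacements, registration over overlapping regions, and a continuously evaluated self-consistency metric.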
Affiliation(s)
- Jeffrey Schlosser
- Department of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
50
Nadeau C, Krupa A. Intensity-Based Ultrasound Visual Servoing: Modeling and Validation With 2-D and 3-D Probes. IEEE T ROBOT 2013. [DOI: 10.1109/tro.2013.2256690] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]